[ { "instruction": "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2", "input": "", "output": "

What is this warning about?

\n

Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:

\n
\n

Advanced Vector Extensions (AVX) are extensions to the x86 instruction\nset architecture for microprocessors from Intel and AMD proposed by\nIntel in March 2008 and first supported by Intel with the Sandy\nBridge processor shipping in Q1 2011 and later on by AMD with the\nBulldozer processor shipping in Q3 2011. AVX provides new features,\nnew instructions and a new coding scheme.

\n
\n

In particular, AVX2 and the closely related FMA extension introduce fused multiply-accumulate operations, which speed up linear algebra computations, namely dot products, matrix multiplication, convolution, etc. Almost all machine-learning training involves a great deal of these operations, and hence is faster (by up to 300%) on a CPU that supports AVX and FMA. The warning states that your CPU does support AVX (hooray!).

\n

I'd like to stress here: it's all about the CPU only.

\n

Why isn't it used then?

\n

Because the default TensorFlow distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
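As an illustration (a sketch I'm adding, not part of the original answer; it assumes a Linux-style /proc/cpuinfo and falls back gracefully elsewhere), you can check which of these extensions your CPU actually reports:

```python
from pathlib import Path

# Linux exposes CPU feature flags in /proc/cpuinfo; on other OSes this
# file does not exist, so fall back to an empty flag set.
cpuinfo = Path("/proc/cpuinfo")
flags = set()
if cpuinfo.exists():
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for ext in ("sse4_1", "sse4_2", "avx", "avx2", "fma"):
    print(ext, "supported" if ext in flags else "not reported")
```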

\n

What should you do?

\n

If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by

\n
# Just disables the warning, doesn't take advantage of AVX/FMA to run faster\nimport os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\n
\n

... or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. TensorFlow will work fine either way, but you won't see these annoying warnings.

\n
\n

If you don't have a GPU and want to utilize the CPU as much as possible, you should build TensorFlow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled if your CPU supports them. It has been discussed in this question and also this GitHub issue. TensorFlow uses a build system called Bazel, and building it is not trivial, but it is certainly doable. After this, not only will the warning disappear, TensorFlow performance should also improve.

\n", "system": "" }, { "instruction": "TensorFlow not found using pip", "input": "", "output": "

I found this to finally work.

\n\n
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl\n
\n\n

Edit 1: This was tested on Windows (8, 8.1, 10), Mac, and Linux. Change python3 to python according to your configuration. Change py3 to py2 in the URL if you are using Python 2.x.

\n\n

Edit 2: A list of different versions if someone needs: https://storage.googleapis.com/tensorflow

\n\n

Edit 3: A list of urls for the available wheel packages is available here:\nhttps://www.tensorflow.org/install/pip#package-location

\n", "system": "" }, { "instruction": "How to save/restore a model after training?", "input": "", "output": "

In (and after) Tensorflow version 0.11:

\n

Save the model:

\n
import tensorflow as tf\n\n#Prepare to feed input, i.e. feed_dict and placeholders\nw1 = tf.placeholder("float", name="w1")\nw2 = tf.placeholder("float", name="w2")\nb1 = tf.Variable(2.0, name="bias")\nfeed_dict = {w1: 4, w2: 8}\n\n#Define a test operation that we will restore\nw3 = tf.add(w1, w2)\nw4 = tf.multiply(w3, b1, name="op_to_restore")\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n#Create a saver object which will save all the variables\nsaver = tf.train.Saver()\n\n#Run the operation by feeding input\nprint(sess.run(w4, feed_dict))\n#Prints 24, which is (w1+w2)*b1\n\n#Now, save the graph\nsaver.save(sess, 'my_test_model', global_step=1000)\n
\n

Restore the model:

\n
import tensorflow as tf\n\nsess = tf.Session()\n#First let's load the meta graph and restore the weights\nsaver = tf.train.import_meta_graph('my_test_model-1000.meta')\nsaver.restore(sess, tf.train.latest_checkpoint('./'))\n\n\n# Access saved Variables directly\nprint(sess.run('bias:0'))\n# This will print 2, which is the value of bias that we saved\n\n\n# Now, let's access the placeholders and\n# create a feed-dict to feed new data\n\ngraph = tf.get_default_graph()\nw1 = graph.get_tensor_by_name("w1:0")\nw2 = graph.get_tensor_by_name("w2:0")\nfeed_dict = {w1: 13.0, w2: 17.0}\n\n#Now, access the op that you want to run.\nop_to_restore = graph.get_tensor_by_name("op_to_restore:0")\n\nprint(sess.run(op_to_restore, feed_dict))\n#This will print 60, which is (13+17)*2\n
\n

This and some more advanced use-cases have been explained very well here.

\n

A quick complete tutorial to save and restore Tensorflow models

\n", "system": "" }, { "instruction": "What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?", "input": "", "output": "

The name means the function operates on the unscaled output of earlier layers, and that the relative scale of the values is linear. In particular, the sum of the inputs may not equal 1, and the values are not probabilities (you might have an input of 5). Internally, it first applies softmax to the unscaled output, and then computes the cross entropy of those values vs. what they "should" be as defined by the labels.

\n

tf.nn.softmax produces the result of applying the softmax function to an input tensor. The softmax "squishes" the inputs so that sum(input) = 1: it interprets the inputs as log-probabilities (logits) and converts them back into raw probabilities between 0 and 1. The shape of the output of a softmax is the same as the input:

\n
a = tf.constant(np.array([[.1, .3, .5, .9]]))\nprint(sess.run(tf.nn.softmax(a)))\n[[ 0.16838508  0.205666    0.25120102  0.37474789]]\n
\n

See this answer for more about why softmax is used extensively in DNNs.

\n

tf.nn.softmax_cross_entropy_with_logits combines the softmax step with the calculation of the cross-entropy loss, doing it all together in a more mathematically careful way. It's similar to the result of:

\n
sm = tf.nn.softmax(x)\nce = cross_entropy(sm)\n
\n

The cross entropy is a summary metric: it sums across the elements. The output of tf.nn.softmax_cross_entropy_with_logits on a shape [2,5] tensor is of shape [2] (the first dimension is treated as the batch, with one loss value per batch element).

\n

If you want to do optimization to minimize the cross entropy AND you're softmaxing after your last layer, you should use tf.nn.softmax_cross_entropy_with_logits instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there.
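To see why the fused op matters numerically, here is a hedged NumPy sketch (plain NumPy, not TensorFlow; the function names are mine) comparing the naive two-step computation with the log-sum-exp form that stable implementations typically use:

```python
import numpy as np

logits = np.array([1000.0, 2.0, 3.0])   # deliberately large to provoke overflow
labels = np.array([1.0, 0.0, 0.0])      # one-hot target

def naive_xent(logits, labels):
    # softmax followed by cross-entropy: exp(1000) overflows to inf -> nan
    p = np.exp(logits) / np.exp(logits).sum()
    return -(labels * np.log(p)).sum()

def stable_xent(logits, labels):
    # log-softmax via the log-sum-exp trick: subtract the max first
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -(labels * log_probs).sum()

with np.errstate(over="ignore", invalid="ignore"):
    print(naive_xent(logits, labels))   # nan
print(stable_xent(logits, labels))      # a finite, correct value (close to 0 here)
```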

\n

Edited 2016-02-07:\nIf you have single-class labels, where an object can only belong to one class, you might now consider using tf.nn.sparse_softmax_cross_entropy_with_logits so that you don't have to convert your labels to a dense one-hot array. This function was added after release 0.6.0.

\n", "system": "" }, { "instruction": "What is the meaning of the word logits in TensorFlow?", "input": "", "output": "

Logits is an overloaded term which can mean many different things:

\n\n
\n\n

In Math, Logit is a function that maps probabilities ([0, 1]) to R ((-inf, inf))

\n\n

\"enter

\n\n

A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive ones to probabilities greater than 0.5.
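As a small pure-Python sketch of this relationship (the helper names sigmoid and logit are mine, not TensorFlow APIs):

```python
import math

def sigmoid(x):
    # maps (-inf, inf) -> (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    # maps (0, 1) -> (-inf, inf); the inverse of sigmoid
    return math.log(p / (1.0 - p))

print(logit(0.5))            # 0.0 — probability 0.5 corresponds to logit 0
print(logit(0.25) < 0)       # True — p < 0.5 gives a negative logit
print(sigmoid(logit(0.8)))   # ~0.8, round-trip up to floating-point error
```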

\n\n

In ML, it can be

\n\n
\n

the vector of raw (non-normalized) predictions that a classification\n model generates, which is ordinarily then passed to a normalization\n function. If the model is solving a multi-class classification\n problem, logits typically become an input to the softmax function. The\n softmax function then generates a vector of (normalized) probabilities\n with one value for each possible class.

\n
\n\n

Logits also sometimes refer to the element-wise inverse of the sigmoid function.

\n", "system": "" }, { "instruction": "How to tell if tensorflow is using gpu acceleration from inside python shell?", "input": "", "output": "

No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices.

\n

When using tensorflow2:

\n
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))\n
\n

For tensorflow1, to find out which device is used, you can enable log device placement like this:

\n
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n
\n

Check your console for this type of output.

\n

\"\"

\n", "system": "" }, { "instruction": "What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?", "input": "", "output": "

If you like ascii art:

\n\n\n\n

In this example:

\n\n\n\n

Notes:

\n\n\n\n
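For reference, the output-size arithmetic behind the two modes can be sketched like this (formulas as commonly stated in the TensorFlow docs; the helper names are mine):

```python
import math

def out_size_valid(n, f, s):
    # 'VALID': no padding; every window must fit entirely inside the input
    return math.ceil((n - f + 1) / s)

def out_size_same(n, f, s):
    # 'SAME': the input is zero-padded so every input element is covered
    return math.ceil(n / s)

# e.g. a length-13 input, filter width 6, stride 5:
print(out_size_valid(13, 6, 5))  # 2
print(out_size_same(13, 6, 5))   # 3
```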

Edit:

\n\n

About the name:

\n\n\n", "system": "" }, { "instruction": "How to find which version of TensorFlow is installed in my system?", "input": "", "output": "

This depends on how you installed TensorFlow. I am going to use the same headings used by TensorFlow's installation instructions to structure this answer.

\n\n
\n\n

Pip installation

\n\n

Run:

\n\n
python -c 'import tensorflow as tf; print(tf.__version__)'  # for Python 2\npython3 -c 'import tensorflow as tf; print(tf.__version__)'  # for Python 3\n
\n\n

Note that python is symlinked to /usr/bin/python3 in some Linux distributions, so use python instead of python3 in these cases.

\n\n

pip list | grep tensorflow for Python 2 or pip3 list | grep tensorflow for Python 3 will also show the version of Tensorflow installed.

\n\n
\n\n

Virtualenv installation

\n\n

Run:

\n\n
python -c 'import tensorflow as tf; print(tf.__version__)'  # for both Python 2 and Python 3\n
\n\n

pip list | grep tensorflow will also show the version of Tensorflow installed.

\n\n

For example, I have installed TensorFlow 0.9.0 in a virtualenv for Python 3. So, I get:

\n\n
$ python -c 'import tensorflow as tf; print(tf.__version__)'\n0.9.0\n\n$ pip list | grep tensorflow\ntensorflow (0.9.0)\n
\n", "system": "" }, { "instruction": "Could not find a version that satisfies the requirement tensorflow", "input": "", "output": "

The latest requirements for running TensorFlow are documented in the installation documentation.

\n\n

So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version.

\n", "system": "" }, { "instruction": "How to prevent tensorflow from allocating the totality of a GPU memory?", "input": "", "output": "

You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

\n\n
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)\n\nsess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))\n
\n\n

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.

\n", "system": "" }, { "instruction": "Disable Tensorflow debugging information", "input": "", "output": "

You can disable all debugging logs using os.environ:

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \nimport tensorflow as tf\n
\n\n

Tested on tf 0.12 and 1.0

\n\n

In detail:

\n\n
0 = all messages are logged (default behavior)\n1 = INFO messages are not printed\n2 = INFO and WARNING messages are not printed\n3 = INFO, WARNING, and ERROR messages are not printed\n
\n", "system": "" }, { "instruction": "Convert a tensor to numpy array in Tensorflow?", "input": "", "output": "

TensorFlow 2.x

\n

Eager Execution is enabled by default, so just call .numpy() on the Tensor object.

\n
import tensorflow as tf\n\na = tf.constant([[1, 2], [3, 4]])                 \nb = tf.add(a, 1)\n\na.numpy()\n# array([[1, 2],\n#        [3, 4]], dtype=int32)\n\nb.numpy()\n# array([[2, 3],\n#        [4, 5]], dtype=int32)\n\ntf.multiply(a, b).numpy()\n# array([[ 2,  6],\n#        [12, 20]], dtype=int32)\n
\n

See NumPy Compatibility for more. It is worth noting (from the docs),

\n
\n

A NumPy array may share memory with the Tensor object. Any changes to one may be reflected in the other.

\n
\n

Emphasis mine. A copy may or may not be returned; this is an implementation detail based on whether the data is in CPU or GPU memory (in the latter case, a copy has to be made from GPU to host memory).

\n

But why am I getting the AttributeError: 'Tensor' object has no attribute 'numpy'?
\nA lot of folks have commented about this issue; there are a couple of possible reasons:

\n\n
\n

If Eager Execution is disabled, you can build a graph and then run it through tf.compat.v1.Session:

\n
a = tf.constant([[1, 2], [3, 4]])                 \nb = tf.add(a, 1)\nout = tf.multiply(a, b)\n\nout.eval(session=tf.compat.v1.Session())    \n# array([[ 2,  6],\n#        [12, 20]], dtype=int32)
\n

See also TF 2.0 Symbols Map for a mapping of the old API to the new one.

\n", "system": "" }, { "instruction": "Which TensorFlow and CUDA version combinations are compatible?", "input": "", "output": "

TL;DR: See this table: https://www.tensorflow.org/install/source#gpu

\n

Generally:

\n

Check the CUDA version:

\n
cat /usr/local/cuda/version.txt\n
\n

and cuDNN version:

\n
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h\n
\n

and install a combination as given below in the images or here.

\n

The following images and the link provide an overview of the officially supported/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows:

\n

Minor configurations:

\n

Since the given specifications below in some cases might be too broad, here is one specific configuration that works:

\n\n

The corresponding cudnn can be downloaded here.

\n

Tested build configurations

\n

Please refer to https://www.tensorflow.org/install/source#gpu for an up-to-date compatibility chart (for official TF wheels).

\n

(figures updated May 20, 2020)

\n

Linux GPU

\n

\"enter

\n

Linux CPU

\n

\"enter

\n

macOS GPU

\n

\"enter

\n

macOS CPU

\n

\"enter

\n

Windows GPU

\n

\"enter

\n

Windows CPU

\n

\"enter

\n

Updated as of Dec 5, 2020: For updated information, please refer to the Link for Linux and the Link for Windows.

\n", "system": "" }, { "instruction": "How to compile Tensorflow with SSE4.2 and AVX instructions?", "input": "", "output": "

I just ran into this same problem. It seems like Yaroslav Bulatov's suggestion doesn't cover SSE4.2 support; adding --copt=-msse4.2 was enough. In the end, I successfully built with

\n\n
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package\n
\n\n

without getting any warning or errors.

\n\n

Probably the best choice for any system is:

\n\n
bazel build -c opt --copt=-march=native --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package\n
\n\n

(Update: the build scripts may be eating -march=native, possibly because it contains an =.)

\n\n

-mfpmath=both only works with gcc, not clang. -mfpmath=sse is probably just as good, if not better, and is the default for x86-64. 32-bit builds default to -mfpmath=387, so changing that will help for 32-bit. (But if you want high-performance for number crunching, you should build 64-bit binaries.)

\n\n

I'm not sure whether TensorFlow's default is -O2 or -O3. gcc -O3 enables full optimization, including auto-vectorization, but that can sometimes make code slower.

\n\n
\n\n

What this does: --copt for bazel build passes an option directly to gcc for compiling C and C++ files (but not linking, so you need a different option for cross-file link-time optimization)

\n\n

x86-64 gcc defaults to using only SSE2 or older SIMD instructions, so you can run the binaries on any x86-64 system. (See https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html). That's not what you want. You want to make a binary that takes advantage of all the instructions your CPU can run, because you're only running this binary on the system where you built it.

\n\n

-march=native enables all the options your CPU supports, so it makes -mavx512f -mavx2 -mavx -mfma -msse4.2 redundant. (Also, -mavx2 already enables -mavx and -msse4.2, so Yaroslav's command should have been fine). Also if you're using a CPU that doesn't support one of these options (like FMA), using -mfma would make a binary that faults with illegal instructions.

\n\n

TensorFlow's ./configure defaults to enabling -march=native, so using that should avoid needing to specify compiler options manually.

\n\n

-march=native enables -mtune=native, so it optimizes for your CPU for things like which sequence of AVX instructions is best for unaligned loads.

\n\n

This all applies to gcc, clang, or ICC. (For ICC, you can use -xHOST instead of -march=native.)

\n", "system": "" }, { "instruction": "What's the difference between tf.placeholder and tf.Variable?", "input": "", "output": "

In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.

\n\n
weights = tf.Variable(\n    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],\n                    stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))), name='weights')\n\nbiases = tf.Variable(tf.zeros([hidden1_units]), name='biases')\n
\n\n

tf.placeholder is used to feed actual training examples.

\n\n
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))\nlabels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))\n
\n\n

This is how you feed the training examples during the training:

\n\n
for step in xrange(FLAGS.max_steps):\n    feed_dict = {\n       images_placeholder: images_feed,\n       labels_placeholder: labels_feed,\n     }\n    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)\n
\n\n

Your tf.Variables will be trained (modified) as the result of this training.

\n\n

See more at https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html. (Examples are taken from the web page.)

\n", "system": "" }, { "instruction": "How to print the value of a Tensor object in TensorFlow?", "input": "", "output": "

The easiest[A] way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general[B], you cannot print the value of a tensor without running some code in a session.

\n\n

If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. For example, the following works in a Jupyter notebook:

\n\n
with tf.Session() as sess:  print(product.eval()) \n
\n\n

This might seem silly for such a small expression, but one of the key ideas in Tensorflow 1.x is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs).

\n\n
\n\n

[A]: To print the value of a tensor without returning it to your Python program, you can use the tf.print() operator, as Andrzej suggests in another answer. According to the official documentation:

\n\n
\n

To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session's run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]), which is printed to standard output.

\n
\n\n

Also note that:

\n\n
\n

In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs.

\n
\n\n

[B]: You might be able to use the tf.get_static_value() function to get the constant value of the given tensor if its value is efficiently calculable.

\n", "system": "" }, { "instruction": "Could not install packages due to an EnvironmentError: [WinError 5] Access is denied:", "input": "", "output": "

Just run the command you want to execute with the --user flag, if you don't want to change the permissions:

\n\n
pip3 install --upgrade tensorflow-gpu --user\n
\n", "system": "" }, { "instruction": "What's the difference of name scope and a variable scope in tensorflow?", "input": "", "output": "

Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.

\n\n

The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).

\n\n

It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.

\n\n

As a result, we end up having two different types of scopes:

\n\n\n\n

Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.

\n\n

However, name scope is ignored by tf.get_variable. We can see that in the following example:

\n\n
with tf.name_scope(\"my_scope\"):\n    v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32)\n    v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32)\n    a = tf.add(v1, v2)\n\nprint(v1.name)  # var1:0\nprint(v2.name)  # my_scope/var2:0\nprint(a.name)   # my_scope/Add:0\n
\n\n

The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:

\n\n
with tf.variable_scope(\"my_scope\"):\n    v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32)\n    v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32)\n    a = tf.add(v1, v2)\n\nprint(v1.name)  # my_scope/var1:0\nprint(v2.name)  # my_scope/var2:0\nprint(a.name)   # my_scope/Add:0\n
\n\n

This allows us to easily share variables across different parts of the program, even within different name scopes:

\n\n
with tf.name_scope(\"foo\"):\n    with tf.variable_scope(\"var_scope\"):\n        v = tf.get_variable(\"var\", [1])\nwith tf.name_scope(\"bar\"):\n    with tf.variable_scope(\"var_scope\", reuse=True):\n        v1 = tf.get_variable(\"var\", [1])\nassert v1 == v\nprint(v.name)   # var_scope/var:0\nprint(v1.name)  # var_scope/var:0\n
\n\n
\n\n

UPDATE

\n\n

As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.

\n", "system": "" }, { "instruction": "Ordering of batch normalization and dropout?", "input": "", "output": "

In Ioffe and Szegedy (2015), the authors state that \"we would like to ensure that for any parameter values, the network always produces activations with the desired distribution\". So the Batch Normalization layer is actually inserted right after a Conv layer/Fully Connected layer, but before feeding into the ReLU (or any other kind of) activation. See this video at around the 53-minute mark for more details.

\n\n

As far as dropout goes, I believe it is applied after the activation layer. In the dropout paper (figure 3b), the dropout factor/probability matrix r(l) for hidden layer l is applied to y(l), where y(l) is the result of applying the activation function f.

\n\n

So in summary, the order of using batch normalization and dropout is:

\n\n

-> CONV/FC -> BatchNorm -> ReLu(or other activation) -> Dropout -> CONV/FC ->

\n", "system": "" }, { "instruction": "How to get current available GPUs in tensorflow?", "input": "", "output": "

There is an undocumented method called device_lib.list_local_devices() that enables you to list the devices available in the local process. (N.B. As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of DeviceAttributes protocol buffer objects. You can extract a list of string device names for the GPU devices as follows:

\n\n
from tensorflow.python.client import device_lib\n\ndef get_available_gpus():\n    local_device_protos = device_lib.list_local_devices()\n    return [x.name for x in local_device_protos if x.device_type == 'GPU']\n
\n\n

Note that (at least up to TensorFlow 1.4), calling device_lib.list_local_devices() will run some initialization code that, by default, will allocate all of the GPU memory on all of the devices (GitHub issue). To avoid this, first create a session with an explicitly small per_process_gpu_memory_fraction, or with allow_growth=True, to prevent all of the memory being allocated. See this question for more details.

\n", "system": "" }, { "instruction": "Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'", "input": "", "output": "

According to the TF 1.x to TF 2.x symbols map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session()

\n\n

https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0

\n\n

To get TF 1.x like behaviour in TF 2.0 one can run

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n\n

but then one cannot benefit from many improvements made in TF 2.0. For more details, please refer to the migration guide:\nhttps://www.tensorflow.org/guide/migrate

\n", "system": "" }, { "instruction": "Keras, How to get the output of each layer?", "input": "", "output": "

You can easily get the outputs of any layer by using: model.layers[index].output

\n\n

For all layers use this:

\n\n
from keras import backend as K\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = [func([test, 1.]) for func in functors]\nprint(layer_outs)\n
\n\n

Note: To simulate Dropout, pass 1. as the learning_phase value in layer_outs; otherwise use 0.

\n\n

Edit: (based on comments)

\n\n

K.function creates Theano/TensorFlow tensor functions, which are later used to get the output from the symbolic graph given the input.

\n\n

Now K.learning_phase() is required as an input because many Keras layers, like Dropout/BatchNormalization, depend on it to change behavior during training and test time.

\n\n

So if you remove the dropout layer in your code you can simply use:

\n\n
from keras import backend as K\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctors = [K.function([inp], [out]) for out in outputs]    # evaluation functions\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = [func([test]) for func in functors]\nprint(layer_outs)\n
\n\n

Edit 2: More optimized

\n\n

I just realized that the previous answer is not that optimized, as for each function evaluation the data will be transferred CPU->GPU memory, and the tensor calculations for the lower layers need to be done over and over.

\n\n

Instead, this is a much better way, as you don't need multiple functions but a single function that gives you the list of all outputs:

\n\n
from keras import backend as K\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctor = K.function([inp, K.learning_phase()], outputs )   # evaluation function\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = functor([test, 1.])\nprint(layer_outs)\n
\n", "system": "" }, { "instruction": "In TensorFlow, what is the difference between Session.run() and Tensor.eval()?", "input": "", "output": "

If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t).

\n\n

You can make a session the default as follows:

\n\n
t = tf.constant(42.0)\nsess = tf.Session()\nwith sess.as_default():   # or `with sess:` to close on exit\n    assert sess is tf.get_default_session()\n    assert t.eval() == sess.run(t)\n
\n\n

The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step:

\n\n
t = tf.constant(42.0)\nu = tf.constant(37.0)\ntu = tf.mul(t, u)\nut = tf.mul(u, t)\nwith sess.as_default():\n   tu.eval()  # runs one step\n   ut.eval()  # runs one step\n   sess.run([tu, ut])  # evaluates both tensors in a single step\n
\n\n

Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.

\n", "system": "" }, { "instruction": "How can I run Tensorboard on a remote server?", "input": "", "output": "

Here is what I do to avoid having to make the remote server accept your local external IP:

\n\n\n\n
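The forwarding setup described here can be sketched as follows (user and remote-server are placeholders for your own credentials and host):

```shell
# On your local machine: forward local port 16006 to port 6006 on the server.
# Then start TensorBoard on the server (tensorboard --logdir=... --port=6006)
# and open http://127.0.0.1:16006 in your local browser.
ssh -L 16006:127.0.0.1:6006 user@remote-server
```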

In effect, everything on port 6006 of the server (127.0.0.1:6006) is forwarded to port 16006 on my machine.

\n\n
\n\n\n", "system": "" }, { "instruction": "How to build and use Google TensorFlow C++ api", "input": "", "output": "

To get started, you should download the source code from Github, by following the instructions here (you'll need Bazel and a recent version of GCC).

\n\n

The C++ API (and the backend of the system) is in tensorflow/core. Right now, only the C++ Session interface and the C API are supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a GraphDef protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that builds a small graph in C++ here.

\n\n

The second part of the C++ API is the API for adding a new OpKernel, which is the class containing implementations of numerical kernels for CPU and GPU. There are numerous examples of how to build these in tensorflow/core/kernels, as well as a tutorial for adding a new op in C++.

\n", "system": "" }, { "instruction": "TypeError: Descriptors cannot not be created directly", "input": "", "output": "

Sometimes an incompatible protobuf package gets installed without your involvement. There are two solutions you can apply; try one of the below and it should work.

\n

Solution 1:

\n

You can downgrade the protobuf package:

\n
pip install protobuf==3.20.*\n
\n

Or you can add it to the requirements.txt file as the last package, because this will override the previously installed protobuf package.

\n
...\nprotobuf==3.20.*\n
\n

Solution 2:

\n

You can set the following environment variable.

\n
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python\n
\n

According to the error description, though, this might impact your program's performance:

\n
\n

but this will use pure-Python parsing and will be much slower

\n
\n
\n
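If you prefer to set the variable from Python instead of the shell, a small sketch (the assignment must happen before protobuf is first imported):

```python
import os

# Must run before any module that imports protobuf is loaded,
# otherwise the C++ implementation may already be active.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
print(os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"])  # python
```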

References:

\n\n", "system": "" }, { "instruction": "What is the difference between steps and epochs in TensorFlow?", "input": "", "output": "

A training step is one gradient update. In one step batch_size examples are processed.

\n

An epoch consists of one full cycle through the training data. This is usually many steps. As an example, if you have 2,000 images and use a batch size of 10, an epoch consists of:

\n
2,000 images / (10 images / step) = 200 steps.\n
\n
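The arithmetic above can be sketched in plain Python; `ceil` covers the case where the dataset size is not an exact multiple of the batch size:

```python
import math

def steps_per_epoch(num_examples, batch_size):
    # One step processes batch_size examples; round up so a final
    # partial batch still counts as a step.
    return math.ceil(num_examples / batch_size)

print(steps_per_epoch(2000, 10))  # 200
print(steps_per_epoch(2001, 10))  # 201 (the last step sees 1 example)
```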

If you choose your training images randomly (and independently) in each step, you normally do not call it an epoch. [This is where my answer differs from the previous one. Also see my comment.]

\n", "system": "" }, { "instruction": "How to run Tensorflow on CPU", "input": "", "output": "

You can also set the environment variable to

\n\n
CUDA_VISIBLE_DEVICES=\"\"\n
\n\n
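The same can be done from Python; as a sketch, note the variable must be set before TensorFlow is imported, since device discovery happens at import time:

```python
import os

# Hide all CUDA devices; must run before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# import tensorflow as tf  # TF would now see no GPUs
```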

without having to modify the source code.

\n", "system": "" }, { "instruction": "Why is TensorFlow 2 much slower than TensorFlow 1?", "input": "", "output": "

UPDATE 8/17/2020: TF 2.3 has finally done it: all cases run as fast, or notably faster, than any previous version.

\n

Further, my previous update was unfair to TF; my GPU was to blame, has been overheating lately. If you see a rising stem plot of iteration times, it's a reliable symptom. Lastly, see a dev's note on Eager vs Graph.

\n

This might be my last update on this answer. The true stats on your model's speed can only be found by you, on your device.

\n
\n

UPDATE 5/19/2020: TF 2.2, using same tests: only a minor improvement in Eager speed. Plots for Large-Large Numpy train_on_batch case below, x-axis is successive fit iterations; my GPU isn't near its full capacity, so doubt it's throttling, but iterations do get slower over time.

\n

\"enter

\n

Per above, Graph and Eager are 1.56x and 1.97x slower than their TF1 counterparts, respectively. Unsure I'll debug this further, as I'm considering switching to Pytorch per TensorFlow's poor support for custom / low-level functionality. I did, however, open an Issue to get devs' feedback.

\n
\n

UPDATE 2/18/2020: I've benched 2.1 and 2.1-nightly; the results are mixed. All but one configs (model & data size) are as fast as or much faster than the best of TF2 & TF1. The one that's slower, and slower dramatically, is Large-Large - esp. in Graph execution (1.6x to 2.5x slower).

\n

Furthermore, there are extreme reproducibility differences between Graph and Eager for a large model I tested - one not explainable via randomness/compute-parallelism. I can't currently present reproducible code for these claims per time constraints, so instead I strongly recommend testing this for your own models.

\n

Haven't opened a Git issue on these yet, but I did comment on the original - no response yet. I'll update the answer(s) once progress is made.

\n
\n

VERDICT: it isn't, IF you know what you're doing. But if you don't, it could cost you, lots - by a few GPU upgrades on average, and by multiple GPUs worst-case.

\n
\n

THIS ANSWER: aims to provide a high-level description of the issue, as well as guidelines for how to decide on the training configuration specific to your needs. For a detailed, low-level description, which includes all benchmarking results + code used, see my other answer.

\n

I'll be updating my answer(s) w/ more info if I learn any - can bookmark / "star" this question for reference.

\n
\n

ISSUE SUMMARY: as confirmed by a TensorFlow developer, Q. Scott Zhu, TF2 focused development on Eager execution & tight integration w/ Keras, which involved sweeping changes in TF source - including at graph-level. Benefits: greatly expanded processing, distribution, debug, and deployment capabilities. The cost of some of these, however, is speed.

\n

The matter, however, is fairly more complex. It isn't just TF1 vs. TF2 - factors yielding significant differences in train speed include:

\n
    \n
  1. TF2 vs. TF1
  2. \n
  3. Eager vs. Graph mode
  4. \n
  5. keras vs. tf.keras
  6. \n
  7. numpy vs. tf.data.Dataset vs. ...
  8. \n
  9. train_on_batch() vs. fit()
  10. \n
  11. GPU vs. CPU
  12. \n
  13. model(x) vs. model.predict(x) vs. ...
  14. \n
\n

Unfortunately, almost none of the above are independent of the other, and each can at least double execution time relative to another. Fortunately, you can determine what'll work best systematically, and with a few shortcuts - as I'll be showing.

\n
\n

WHAT SHOULD I DO? Currently, the only way is - experiment for your specific model, data, and hardware. No single configuration will always work best - but there are do's and don't's to simplify your search:

\n

>> DO:

\n\n

>> DON'T:

\n\n

Refer to code at bottom of my other answer for an example benchmarking setup. The list above is based mainly on the "BENCHMARKS" tables in the other answer.

\n
\n

LIMITATIONS of the above DO's & DON'T's:

\n\n
\n

Why did TF2 sacrifice the most practical quality, speed, for eager execution? It hasn't, clearly - graph is still available. But if the question is "why eager at all":

\n\n
\n

HOW TO ENABLE/DISABLE EAGER?

\n
tf.enable_eager_execution()  # TF1; must be done before any model/tensor creation\ntf.compat.v1.disable_eager_execution() # TF2; above holds\n
\n

Misleading in TF2; see here.

\n
\n

ADDITIONAL INFO:

\n\n
\n

REQUESTS TO TENSORFLOW DEVS:

\n
    \n
  1. Please fix train_on_batch(), and the performance aspect of calling fit() iteratively; custom train loops are important to many, especially to me.
  2. \n
  3. Add documentation / docstring mention of these performance differences for users' knowledge.
  4. \n
  5. Improve general execution speed to keep peeps from hopping to Pytorch.
  6. \n
\n
\n

ACKNOWLEDGEMENTS: Thanks to

\n\n
\n

UPDATES:

\n\n", "system": "" }, { "instruction": "What is the role of "Flatten" in Keras?", "input": "", "output": "

If you read the Keras documentation entry for Dense, you will see that this call:

\n
Dense(16, input_shape=(5,3))\n
\n

would result in a Dense network with 3 inputs and 16 outputs which would be applied independently for each of 5 steps. So, if D(x) transforms 3 dimensional vector to 16-d vector, what you'll get as output from your layer would be a sequence of vectors: [D(x[0,:]), D(x[1,:]),..., D(x[4,:])] with shape (5, 16). In order to have the behavior you specify you may first Flatten your input to a 15-d vector and then apply Dense:

\n
model = Sequential()\nmodel.add(Flatten(input_shape=(3, 2)))\nmodel.add(Dense(16))\nmodel.add(Activation('relu'))\nmodel.add(Dense(4))\nmodel.compile(loss='mean_squared_error', optimizer='SGD')\n
\n
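The shape bookkeeping can be checked with NumPy alone; a sketch, where the (3, 2) input matches the model above and W is a stand-in for the Dense(16) kernel:

```python
import numpy as np

x = np.arange(6).reshape(3, 2)   # one sample of shape (3, 2)
flat = x.reshape(-1)             # what Flatten does per sample
print(flat.shape)                # (6,)

W = np.ones((6, 16))             # stand-in for the Dense(16) kernel
out = flat @ W                   # one 16-d output for the whole sample
print(out.shape)                 # (16,)
```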

EDIT:\nAs some people struggled to understand - here you have an explaining image:

\n

\"enter

\n", "system": "" }, { "instruction": "TensorFlow, why was python the chosen language?", "input": "", "output": "

The most important thing to realize about TensorFlow is that, for the most part, the core is not written in Python: It's written in a combination of highly-optimized C++ and CUDA (Nvidia's language for programming GPUs). Much of that happens, in turn, by using Eigen (a high-performance C++ and CUDA numerical library) and NVidia's cuDNN (a very optimized DNN library for NVidia GPUs, for functions such as convolutions).

\n\n

The model for TensorFlow is that the programmer uses \"some language\" (most likely Python!) to express the model. This model, written in the TensorFlow constructs such as:

\n\n
h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)\nh2 = ...\n
\n\n

is not actually executed when the Python is run. Instead, what's actually created is a dataflow graph that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code.

\n\n

Then the programmer \"drives\" the execution of this model by pulling on nodes -- for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++:

\n\n
sess.run(eval_results)\n
\n\n

This one Python (or C++ function call) uses either an in-process call to C++ or an RPC for the distributed version to call into the C++ TensorFlow server to tell it to execute, and then copies back the results.

\n\n

So, with that said, let's re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models?

\n\n

The answer to that is simple: Python is probably the most comfortable language for a large range of data scientists and machine learning experts, and one that's also easy to integrate with and use to control a C++ backend, while also being general, widely used both inside and outside of Google, and open source. Given that with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that NumPy makes it easy to do pre-processing in Python -- also with high performance -- before feeding it into TensorFlow for the truly CPU-heavy things.

\n\n

There's also a bunch of complexity in expressing the model that isn't used when executing it -- shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic gradient computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they'll probably move to the C++ backend to make adding other languages easier.

\n\n

(The hope, of course, is to support other languages in the future for creating and expressing models. It's already quite straightforward to run inference using several other languages -- C++ works now, someone from Facebook contributed Go bindings that we're reviewing now, etc.)

\n", "system": "" }, { "instruction": "What does tf.nn.embedding_lookup function do?", "input": "", "output": "

Yes, this function is hard to understand, until you get the point.

\n\n

In its simplest form, it is similar to tf.gather. It returns the elements of params according to the indexes specified by ids.

\n\n

For example (assuming you are inside tf.InteractiveSession())

\n\n
params = tf.constant([10,20,30,40])\nids = tf.constant([0,1,2,3])\nprint tf.nn.embedding_lookup(params,ids).eval()\n
\n\n

would return [10 20 30 40], because the first element (index 0) of params is 10, the second element of params (index 1) is 20, etc.

\n\n

Similarly,

\n\n
params = tf.constant([10,20,30,40])\nids = tf.constant([1,1,3])\nprint tf.nn.embedding_lookup(params,ids).eval()\n
\n\n

would return [20 20 40].

\n\n

But embedding_lookup is more than that. The params argument can be a list of tensors, rather than a single tensor.

\n\n
params1 = tf.constant([1,2])\nparams2 = tf.constant([10,20])\nids = tf.constant([2,0,2,1,2,3])\nresult = tf.nn.embedding_lookup([params1, params2], ids)\n
\n\n

In such a case, the indexes, specified in ids, correspond to elements of tensors according to a partition strategy, where the default partition strategy is 'mod'.

\n\n

In the 'mod' strategy, index 0 corresponds to the first element of the first tensor in the list. Index 1 corresponds to the first element of the second tensor. Index 2 corresponds to the first element of the third tensor, and so on. Simply put, index i corresponds to the first element of the (i+1)th tensor, for all indexes 0..(n-1), assuming params is a list of n tensors.

\n\n

Now, index n cannot correspond to tensor n+1, because the list params contains only n tensors. So index n corresponds to the second element of the first tensor. Similarly, index n+1 corresponds to the second element of the second tensor, etc.

\n\n
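The 'mod' strategy can be emulated in plain Python to check this reasoning; this is a sketch, not TensorFlow's actual implementation:

```python
params = [[1, 2], [10, 20]]  # params1, params2 as plain lists
ids = [2, 0, 2, 1, 2, 3]
n = len(params)

# 'mod' strategy: id i lives in tensor (i % n), at row (i // n)
result = [params[i % n][i // n] for i in ids]
print(result)  # [2, 1, 2, 10, 2, 20]
```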

So, in the code

\n\n
params1 = tf.constant([1,2])\nparams2 = tf.constant([10,20])\nids = tf.constant([2,0,2,1,2,3])\nresult = tf.nn.embedding_lookup([params1, params2], ids)\n
\n\n

index 0 corresponds to the first element of the first tensor: 1

\n\n

index 1 corresponds to the first element of the second tensor: 10

\n\n

index 2 corresponds to the second element of the first tensor: 2

\n\n

index 3 corresponds to the second element of the second tensor: 20

\n\n

Thus, the result would be:

\n\n
[ 2  1  2 10  2 20]\n
\n", "system": "" }, { "instruction": "Understanding TensorBoard (weight) histograms", "input": "", "output": "

It appears that the network hasn't learned anything in the layers one to three. The last layer does change, so that means that there either may be something wrong with the gradients (if you're tampering with them manually), you're constraining learning to the last layer by optimizing only its weights or the last layer really 'eats up' all error. It could also be that only biases are learned. The network appears to learn something though, but it might not be using its full potential. More context would be needed here, but playing around with the learning rate (e.g. using a smaller one) might be worth a shot.

\n\n

In general, histograms display the number of occurrences of a value relative to the other values. Simply speaking, if the possible values are in a range of 0..9 and you see a spike of amount 10 on the value 0, this means that 10 inputs assume the value 0; in contrast, if the histogram shows a plateau of 1 for all values of 0..9, it means that for 10 inputs, each possible value 0..9 occurs exactly once.\nYou can also use histograms to visualize probability distributions when you normalize all histogram values by their total sum; if you do that, you'll intuitively obtain the likelihood with which a certain value (on the x axis) will appear (compared to other inputs).

\n\n
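The normalization into a probability distribution mentioned above is a one-liner with NumPy; a sketch with made-up counts:

```python
import numpy as np

counts = np.array([10, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # spike of 10 at value 0
probs = counts / counts.sum()
print(probs[0])  # 1.0 -> every input assumed the value 0

plateau = np.ones(10)           # each value 0..9 occurs once
print(plateau / plateau.sum())  # uniform: 0.1 for every value
```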

Now for layer1/weights, the plateau means that:

\n\n\n\n

Said differently, almost the same number of weights have the values -0.15, 0.0, 0.15 and everything in between. There are some weights having slightly smaller or higher values.\nSo in short, this simply looks like the weights have been initialized using a uniform distribution with zero mean and value range -0.15..0.15 ... give or take. If you do indeed use uniform initialization, then this is typical when the network has not been trained yet.

\n\n

In comparison, layer1/activations forms a bell curve (gaussian)-like shape: The values are centered around a specific value, in this case 0, but they may also be greater or smaller than that (equally likely so, since it's symmetric). Most values appear close around the mean of 0, but values do range from -0.8 to 0.8.\nI assume that the layer1/activations is taken as the distribution over all layer outputs in a batch. You can see that the values do change over time.

\n\n

The layer 4 histogram doesn't tell me anything specific. From the shape, it's just showing that some weight values around -0.1, 0.05 and 0.25 tend to occur with a higher probability; a reason could be that different parts of each neuron there actually pick up the same information and are basically redundant. This can mean that you could actually use a smaller network, or that your network has the potential to learn more distinguishing features in order to prevent overfitting. These are just assumptions though.

\n\n

Also, as already stated in the comments below, do add bias units. By leaving them out, you are forcefully constraining your network to a possibly invalid solution.

\n", "system": "" }, { "instruction": "Can I run Keras model on gpu?", "input": "", "output": "

Yes, you can run Keras models on GPU. A few things you will have to check first.

\n
    \n
  1. Your system has an Nvidia GPU (AMD GPUs don't work with TensorFlow yet)
  2. \n
  3. You have installed the GPU version of tensorflow
  4. \n
  5. You have installed CUDA installation instructions
  6. \n
  7. Verify that tensorflow is running with GPU check if GPU is working
  8. \n
\n

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

\n

for TF > v2.0

\n

sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))

\n

(Thanks @nbro and @Ferro for pointing this out in the comments)

\n

OR

\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n

output will be something like this:

\n
[\n  name: "/cpu:0"\n  device_type: "CPU",\n  name: "/gpu:0"\n  device_type: "GPU"\n]\n
\n

Once all this is done your model will run on GPU:

\n

To check if Keras (>= 2.1.1) is using the GPU:

\n
from keras import backend as K\nK.tensorflow_backend._get_available_gpus()\n
\n

All the best.

\n", "system": "" }, { "instruction": "How does tf.app.run() work?", "input": "", "output": "
if __name__ == \"__main__\":\n
\n\n

means the current file is being executed in a shell instead of being imported as a module.

\n\n
tf.app.run()\n
\n\n

As you can see through the file app.py

\n\n
def run(main=None, argv=None):\n  \"\"\"Runs the program with an optional 'main' function and 'argv' list.\"\"\"\n  f = flags.FLAGS\n\n  # Extract the args from the optional `argv` list.\n  args = argv[1:] if argv else None\n\n  # Parse the known flags from that list, or from the command\n  # line otherwise.\n  # pylint: disable=protected-access\n  flags_passthrough = f._parse_flags(args=args)\n  # pylint: enable=protected-access\n\n  main = main or sys.modules['__main__'].main\n\n  # Call the main function, passing through any arguments\n  # to the final program.\n  sys.exit(main(sys.argv[:1] + flags_passthrough))\n
\n\n

Let's break line by line:

\n\n
flags_passthrough = f._parse_flags(args=args)\n
\n\n

This ensures that the argument you pass through the command line is valid, e.g. \npython my_model.py --data_dir='...' --max_iteration=10000. Actually, this feature is implemented based on the Python standard argparse module.

\n\n
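A minimal sketch of the equivalent argparse behavior (the flag names here are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", type=str, default="")
parser.add_argument("--max_iteration", type=int, default=1000)

# parse_known_args keeps unrecognized arguments separate, similar to
# how _parse_flags leaves passthrough args for main().
flags, passthrough = parser.parse_known_args(
    ["--data_dir=/tmp/data", "--max_iteration=10000"])
print(flags.data_dir, flags.max_iteration)  # /tmp/data 10000
```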
main = main or sys.modules['__main__'].main\n
\n\n

The first main on the right side of = is the first argument of the current function run(main=None, argv=None), while sys.modules['__main__'] means the currently running file (e.g. my_model.py).

\n\n

So there are two cases:

\n\n
    \n
  1. You don't have a main function in my_model.py. Then you have to call tf.app.run(my_main_running_function)

  2. \n
  3. you have a main function in my_model.py. (This is mostly the case.)

  4. \n
\n\n

Last line:

\n\n
sys.exit(main(sys.argv[:1] + flags_passthrough))\n
\n\n

ensures your main(argv) or my_main_running_function(argv) function is called with parsed arguments properly.

\n", "system": "" }, { "instruction": "Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)", "input": "", "output": "

TL;DR Several possible errors, most fixed with x = np.asarray(x).astype('float32').

\n

Others may be faulty data preprocessing; ensure everything is properly formatted (categoricals, nans, strings, etc). Below shows what the model expects:

\n
[print(i.shape, i.dtype) for i in model.inputs]\n[print(o.shape, o.dtype) for o in model.outputs]\n[print(l.name, l.input_shape, l.dtype) for l in model.layers]\n
\n
\n

The problem's rooted in using lists as inputs, as opposed to Numpy arrays; Keras/TF doesn't support the former. A simple conversion is: x_array = np.asarray(x_list).

\n
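As a sketch, the one-line fix on a nested list of floats:

```python
import numpy as np

x_list = [[1.0, 2.0], [3.0, 4.0]]              # lists: what Keras/TF rejects
x_array = np.asarray(x_list).astype('float32')
print(x_array.dtype, x_array.shape)            # float32 (2, 2)
```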

The next step's to ensure data is fed in expected format; for LSTM, that'd be a 3D tensor with dimensions (batch_size, timesteps, features) - or equivalently, (num_samples, timesteps, channels). Lastly, as a debug pro-tip, print ALL the shapes for your data. Code accomplishing all of the above, below:

\n
Sequences = np.asarray(Sequences)\nTargets   = np.asarray(Targets)\nshow_shapes()\n\nSequences = np.expand_dims(Sequences, -1)\nTargets   = np.expand_dims(Targets, -1)\nshow_shapes()\n
\n
# OUTPUTS\nExpected: (num_samples, timesteps, channels)\nSequences: (200, 1000)\nTargets:   (200,)\n\nExpected: (num_samples, timesteps, channels)\nSequences: (200, 1000, 1)\nTargets:   (200, 1)\n
\n
\n

As a bonus tip, I notice you're running via main(), so your IDE probably lacks a Jupyter-like cell-based execution; I strongly recommend the Spyder IDE. It's as simple as adding # In[], and pressing Ctrl + Enter below:

\n\n
\n

Function used:

\n
def show_shapes(): # can make yours to take inputs; this'll use local variable values\n    print("Expected: (num_samples, timesteps, channels)")\n    print("Sequences: {}".format(Sequences.shape))\n    print("Targets:   {}".format(Targets.shape))   \n
\n", "system": "" }, { "instruction": "Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation", "input": "", "output": "

Tensorflow 2.1+

\n

What's going on?

\n

With the new Tensorflow 2.1 release, the default tensorflow pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (the W at the beginning stands for warning; errors have an E, and fatal errors an F) and falls back to CPU-only mode. In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level than the default, you might not see info messages). The full log is (emphasis mine):

\n
\n

2020-01-20 12:27:44.554767: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found

\n

2020-01-20 12:27:44.554964: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

\n
\n

Should I worry? How do I fix it?

\n

If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA 10.1, not 10.2 or 10.0).

\n

If you just want to get rid of the warning, you can adapt TF's logging level to suppress warnings, but that might be overkill, as it will silence all warnings.

\n
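For completeness, a sketch of that (heavy-handed) silencing; it must run before TensorFlow is imported and hides all warnings, not just this one:

```python
import os

# 0 = all logs, 1 = hide INFO, 2 = hide INFO+WARNING, 3 = hide INFO+WARNING+ERROR
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# import tensorflow as tf  # must come after setting the variable
```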
\n

Tensorflow 1.X or 2.0:

\n

Your CUDA setup is broken, ensure you have the correct version installed.

\n", "system": "" }, { "instruction": "Deep-Learning Nan loss reasons", "input": "", "output": "

There are lots of things I have seen make a model diverge.

\n
    \n
  1. Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity.

    \n
  2. \n
  3. I am not too familiar with the DNNClassifier, but I am guessing it uses the categorical cross-entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the TensorFlow op for it. Probably not the issue.

    \n
  4. \n
  5. Other numerical stability issues can exist, such as division by zero, where adding the epsilon can help. Another less obvious one is the square root, whose derivative can diverge if not properly simplified when dealing with finite-precision numbers. Yet again, I doubt this is the issue in the case of the DNNClassifier.

    \n
  6. \n
  7. You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing the nan. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255].

    \n
  8. \n
  9. The labels must be in the domain of the loss function, so if using a logarithmic-based loss function all labels must be non-negative (as noted by evan pu and the comments below).

    \n
  10. \n
\n", "system": "" }, { "instruction": "What does this tensorflow message mean? Any side effect? Was the installation successful?", "input": "", "output": "

An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out.

\n
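The "vectorized addition" idea can be illustrated with NumPy, which dispatches to SIMD instructions such as AVX when the CPU and build support them; a sketch:

```python
import numpy as np

a = np.arange(8, dtype=np.float32)
b = np.ones(8, dtype=np.float32)

# One call adds all elements at once; with AVX the CPU can handle
# eight float32 values per 256-bit instruction.
c = a + b
print(c)  # [1. 2. 3. 4. 5. 6. 7. 8.]
```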

Note: AVX stands for Advanced Vector Extensions.

\n", "system": "" }, { "instruction": "What does tf.nn.conv2d do in tensorflow?", "input": "", "output": "

Ok I think this is about the simplest way to explain it all.

\n\n
\n\n

Your example is 1 image, size 2x2, with 1 channel. You have 1 filter, with size 1x1, and 1 channel (size is height x width x channels x number of filters).

\n\n

For this simple case the resulting 2x2, 1 channel image (size 1x2x2x1, number of images x height x width x channels) is the result of multiplying the filter value by each pixel of the image.

\n\n
\n\n

Now let's try more channels:

\n\n
input = tf.Variable(tf.random_normal([1,3,3,5]))\nfilter = tf.Variable(tf.random_normal([1,1,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

Here the 3x3 image and the 1x1 filter each have 5 channels. The resulting image will be 3x3 with 1 channel (size 1x3x3x1), where the value of each pixel is the dot product across channels of the filter with the corresponding pixel in the input image.

\n\n
\n\n

Now with a 3x3 filter

\n\n
input = tf.Variable(tf.random_normal([1,3,3,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

Here we get a 1x1 image, with 1 channel (size 1x1x1x1). The value is the sum of the 9, 5-element dot products. But you could just call this a 45-element dot product.

\n\n
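The "45-element dot product" can be verified with NumPy; a sketch of that VALID convolution, where the filter covers the whole image:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((3, 3, 5))   # height x width x channels
filt = rng.standard_normal((3, 3, 5))  # same spatial size and channels

# The single output value is the elementwise product summed
# over all 3*3*5 = 45 entries.
out = np.sum(img * filt)
print(np.isclose(out, np.dot(img.ravel(), filt.ravel())))  # True
```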
\n\n

Now with a bigger image

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

The output is a 3x3 1-channel image (size 1x3x3x1). \nEach of these values is a sum of 9, 5-element dot products.

\n\n

Each output is made by centering the filter on one of the 9 center pixels of the input image, so that none of the filter sticks out. The xs below represent the filter centers for each output pixel.

\n\n
.....\n.xxx.\n.xxx.\n.xxx.\n.....\n
\n\n
\n\n

Now with \"SAME\" padding:

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n
\n\n

This gives a 5x5 output image (size 1x5x5x1). This is done by centering the filter at each position on the image.

\n\n

Any of the 5-element dot products where the filter sticks out past the edge of the image get a value of zero.

\n\n

So the corners are only sums of 4, 5-element dot products.

\n\n
\n\n

Now with multiple filters.

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n
\n\n

This still gives a 5x5 output image, but with 7 channels (size 1x5x5x7). Where each channel is produced by one of the filters in the set.

\n\n
\n\n

Now with strides 2,2:

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')\n
\n\n

Now the result still has 7 channels, but is only 3x3 (size 1x3x3x7).

\n\n

This is because instead of centering the filters at every point on the image, the filters are centered at every other point on the image, taking steps (strides) of width 2. The x's below represent the filter center for each output pixel, on the input image.

\n\n
x.x.x\n.....\nx.x.x\n.....\nx.x.x\n
\n\n
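With SAME padding the output size is ceil(input_size / stride), which matches the diagrams above; a quick sketch of the arithmetic:

```python
import math

def same_output_size(input_size, stride):
    # SAME padding: one output per stride-th center position.
    return math.ceil(input_size / stride)

print(same_output_size(5, 1))  # 5: filter centered at every pixel
print(same_output_size(5, 2))  # 3: centers at every other pixel
```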
\n\n

And of course the first dimension of the input is the number of images so you can apply it over a batch of 10 images, for example:

\n\n
input = tf.Variable(tf.random_normal([10,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')\n
\n\n

This performs the same operation, for each image independently, giving a stack of 10 images as the result (size 10x3x3x7)

\n", "system": "" }, { "instruction": "Loading a trained Keras model and continue training", "input": "", "output": "

Actually - model.save saves all the information needed for restarting training in your case. The only thing that could be spoiled by reloading the model is your optimizer state. To check that - try to save and reload the model and train it on training data.

\n", "system": "" }, { "instruction": "How are the new tf.contrib.summary summaries in TensorFlow evaluated?", "input": "", "output": "

answer moved from edit to self-answer as requested

\n
\n

I just played around with this a little bit, and it seems that if one combines tf.control_dependencies with tf.record_summaries_every_n_global_steps it behaves as expected and the summary only gets recorded every nth step. But if they are run together within a session, such as session.run([train, summs]), the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2 and with the second approach the summary was often written at odd steps, while with the control dependency approach it was always on an even step.

\n", "system": "" }, { "instruction": "In Tensorflow, get the names of all the Tensors in a graph", "input": "", "output": "

You can do

\n\n
[n.name for n in tf.get_default_graph().as_graph_def().node]\n
\n\n

Also, if you are prototyping in an IPython notebook, you can show the graph directly in notebook, see show_graph function in Alexander's Deep Dream notebook

\n", "system": "" }, { "instruction": "Should we do learning rate decay for adam optimizer", "input": "", "output": "

It depends. ADAM updates any parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated.

\n

But the single learning rate for each parameter is computed using lambda (the initial learning rate) as an upper limit. This means that every single learning rate can vary from 0 (no update) to lambda (maximum update).

\n

It's true that the learning rates adapt themselves during training steps, but if you want to be sure that every update step doesn't exceed lambda, you can then lower lambda using exponential decay or whatever.\nIt can help to reduce loss during the latest steps of training, when the computed loss with the previously associated lambda parameter has stopped decreasing.

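A sketch of the exponential-decay arithmetic (the names here are illustrative, not a TensorFlow API):

```python
def decayed_lr(initial_lr, decay_rate, step, decay_steps):
    # Gradually lowers lambda, the upper bound on Adam's per-parameter updates.
    return initial_lr * decay_rate ** (step / decay_steps)

print(decayed_lr(0.001, 0.96, 0, 1000))     # 0.001
print(decayed_lr(0.001, 0.96, 1000, 1000))  # ~0.00096
```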
\n", "system": "" }, { "instruction": "TensorFlow, why there are 3 files after saving the model?", "input": "", "output": "

Try this:

\n\n
with tf.Session() as sess:\n    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')\n    saver.restore(sess, \"/tmp/model.ckpt\")\n
\n\n

The TensorFlow save method saves three kinds of files because it stores the graph structure separately from the variable values. The .meta file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise it doesn't know what variables the saved checkpoint values correspond to).

\n\n

Alternatively, you could do this:

\n\n
# Recreate the EXACT SAME variables\nv1 = tf.Variable(..., name=\"v1\")\nv2 = tf.Variable(..., name=\"v2\")\n\n...\n\n# Now load the checkpoint variable values\nwith tf.Session() as sess:\n    saver = tf.train.Saver()\n    saver.restore(sess, \"/tmp/model.ckpt\")\n
\n\n

Even though there is no file named model.ckpt, you still refer to the saved checkpoint by that name when restoring it. From the saver.py source code:

\n\n
\n

Users only need to interact with the user-specified prefix... instead\n of any physical pathname.

\n
\n", "system": "" }, { "instruction": "Difference between Variable and get_variable in TensorFlow", "input": "", "output": "

I'd recommend always using <code>tf.get_variable(...)</code> -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it.

\n\n

Pure tf.Variable is lower-level; at some point tf.get_variable() did not exist so some code still uses the low-level way.

\n", "system": "" }, { "instruction": "TensorFlow, "'module' object has no attribute 'placeholder'"", "input": "", "output": "

If you have this error after an upgrade to TensorFlow 2.0, you can still use 1.X API by replacing:

\n\n
import tensorflow as tf\n
\n\n

by

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n", "system": "" }, { "instruction": "Meaning of buffer_size in Dataset.map , Dataset.prefetch and Dataset.shuffle", "input": "", "output": "

TL;DR Despite their similar names, these arguments have quite different meanings. The <code>buffer_size</code> in <code>Dataset.shuffle()</code> can affect the randomness of your dataset, and hence the order in which elements are produced. The <code>buffer_size</code> in <code>Dataset.prefetch()</code> only affects the time it takes to produce the next element.

\n\n
\n\n

The buffer_size argument in tf.data.Dataset.prefetch() and the output_buffer_size argument in tf.contrib.data.Dataset.map() provide a way to tune the performance of your input pipeline: both arguments tell TensorFlow to create a buffer of at most buffer_size elements, and a background thread to fill that buffer in the background. \n(Note that we removed the output_buffer_size argument from Dataset.map() when it moved from tf.contrib.data to tf.data. New code should use Dataset.prefetch() after map() to get the same behavior.)

\n\n

Adding a prefetch buffer can improve performance by overlapping the preprocessing of data with downstream computation. Typically it is most useful to add a small prefetch buffer (with perhaps just a single element) at the very end of the pipeline, but more complex pipelines can benefit from additional prefetching, especially when the time to produce a single element can vary.

\n\n

By contrast, the buffer_size argument to tf.data.Dataset.shuffle() affects the randomness of the transformation. We designed the Dataset.shuffle() transformation (like the tf.train.shuffle_batch() function that it replaces) to handle datasets that are too large to fit in memory. Instead of shuffling the entire dataset, it maintains a buffer of buffer_size elements, and randomly selects the next element from that buffer (replacing it with the next input element, if one is available). Changing the value of buffer_size affects how uniform the shuffling is: if buffer_size is greater than the number of elements in the dataset, you get a uniform shuffle; if it is 1 then you get no shuffling at all. For very large datasets, a typical \"good enough\" approach is to randomly shard the data into multiple files once before training, then shuffle the filenames uniformly, and then use a smaller shuffle buffer. However, the appropriate choice will depend on the exact nature of your training job.

\n\n
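<p>For intuition, the shuffle-buffer mechanism described above can be simulated in a few lines of plain Python (a rough illustrative sketch, not TensorFlow's actual implementation):</p>

```python
import random

def simulated_shuffle(dataset, buffer_size, seed=None):
    """Rough sketch of Dataset.shuffle(): keep a buffer of `buffer_size`
    elements and repeatedly emit a random one, refilling from the source."""
    rng = random.Random(seed)
    it = iter(dataset)
    buffer = []
    for x in it:                    # fill the buffer
        buffer.append(x)
        if len(buffer) == buffer_size:
            break
    out = []
    for x in it:                    # emit a random element, replace it
        i = rng.randrange(len(buffer))
        out.append(buffer[i])
        buffer[i] = x
    while buffer:                   # drain what remains
        i = rng.randrange(len(buffer))
        out.append(buffer.pop(i))
    return out

print(simulated_shuffle(range(10), 1))           # [0, 1, 2, ..., 9]: no shuffling
print(simulated_shuffle(range(10), 10, seed=0))  # uniform shuffle
```

<p>As described above, <code>buffer_size=1</code> reproduces the input order exactly, while a buffer at least as large as the dataset gives a uniform shuffle.</p>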
\n", "system": "" }, { "instruction": "Keras split train test set when using ImageDataGenerator", "input": "", "output": "

Keras has now added Train / validation split from a single directory using ImageDataGenerator:

\n\n
train_datagen = ImageDataGenerator(rescale=1./255,\n    shear_range=0.2,\n    zoom_range=0.2,\n    horizontal_flip=True,\n    validation_split=0.2) # set validation split\n\ntrain_generator = train_datagen.flow_from_directory(\n    train_data_dir,\n    target_size=(img_height, img_width),\n    batch_size=batch_size,\n    class_mode='binary',\n    subset='training') # set as training data\n\nvalidation_generator = train_datagen.flow_from_directory(\n    train_data_dir, # same directory as training data\n    target_size=(img_height, img_width),\n    batch_size=batch_size,\n    class_mode='binary',\n    subset='validation') # set as validation data\n\nmodel.fit_generator(\n    train_generator,\n    steps_per_epoch = train_generator.samples // batch_size,\n    validation_data = validation_generator, \n    validation_steps = validation_generator.samples // batch_size,\n    epochs = nb_epochs)\n
\n\n

https://keras.io/preprocessing/image/

\n", "system": "" }, { "instruction": "What's the purpose of tf.app.flags in TensorFlow?", "input": "", "output": "

The <code>tf.app.flags</code> module is presently a thin wrapper around <code>argparse</code>, which implements a subset of the functionality in <code>python-gflags</code>.

\n\n

Note that this module is currently packaged as a convenience for writing demo apps, and is not technically part of the public API, so it may change in future.

\n\n

We recommend that you implement your own flag parsing using argparse or whatever library you prefer.

\n\n
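<p>A minimal sketch of that approach with the standard-library <code>argparse</code> module (the flag names here are made up for illustration):</p>

```python
import argparse

# Hypothetical training flags, analogous to what tf.app.flags was used for.
parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=0.01,
                    help="Initial learning rate.")
parser.add_argument("--max_steps", type=int, default=2000,
                    help="Number of steps to run the trainer.")

# An explicit argv list keeps the sketch self-contained; a real script
# would call parser.parse_args() to read sys.argv.
args = parser.parse_args(["--learning_rate", "0.1"])
print(args.learning_rate, args.max_steps)  # 0.1 2000
```

\n\n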

EDIT: The tf.app.flags module is not in fact implemented using python-gflags, but it uses a similar API.

\n", "system": "" }, { "instruction": "Tensorflow Strides Argument", "input": "", "output": "

The pooling and convolutional ops slide a \"window\" across the input tensor. Using tf.nn.conv2d as an example: If the input tensor has 4 dimensions: [batch, height, width, channels], then the convolution operates on a 2D window on the height, width dimensions.

\n\n

strides determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1.

\n\n

Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this:

\n\n
00 01 02 03 04 ...\n10 11 12 13 14 ...\n20 21 22 23 24 ...\n30 31 32 33 34 ...\n...\n
\n\n

Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8.

\n\n

The input to the convolution has shape=[1, 32, 32, 1].

\n\n

If you specify strides=[1,1,1,1] with padding=SAME, then the output of the filter will be [1, 32, 32, 8].

\n\n

The filter will first create an output for:

\n\n
F(00 01\n  10 11)\n
\n\n

And then for:

\n\n
F(01 02\n  11 12)\n
\n\n

and so on. Then it will move to the second row, calculating:

\n\n
F(10, 11\n  20, 21)\n
\n\n

then

\n\n
F(11, 12\n  21, 22)\n
\n\n

If you specify a stride of [1, 2, 2, 1] it won't do overlapping windows. It will compute:

\n\n
F(00, 01\n  10, 11)\n
\n\n

and then

\n\n
F(02, 03\n  12, 13)\n
\n\n

The stride operates similarly for the pooling operators.

\n\n
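<p>The window positions visited for a given stride can be sketched in plain Python (a simplified VALID-style model with no padding, just to show which top-left corners the filter visits):</p>

```python
def window_positions(height, width, ksize, stride):
    """Top-left (row, col) of each window a ksize x ksize filter visits
    when moved with the given stride (VALID-style, no padding)."""
    return [(r, c)
            for r in range(0, height - ksize + 1, stride)
            for c in range(0, width - ksize + 1, stride)]

# 2x2 window, stride 1: overlapping windows, one per input position
print(len(window_positions(4, 4, 2, 1)))  # 9

# 2x2 window, stride 2: non-overlapping windows, as in the example above
print(window_positions(4, 4, 2, 2))       # [(0, 0), (0, 2), (2, 0), (2, 2)]
```

\n\n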

Question 2: Why strides [1, x, y, 1] for convnets

\n\n

The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :)

\n\n

The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason.

\n\n

The conv2d operator is more general, so you could create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially.

\n\n

<strong>Why reshape to -1?</strong> <code>-1</code> is a placeholder that says \"adjust as necessary to match the size needed for the full tensor.\" It's a way of making the code independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.

\n", "system": "" }, { "instruction": "What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits?", "input": "", "output": "

Having two different functions is a convenience, as they produce the same result.

\n\n

The difference is simple:

\n\n\n\n

Labels used in softmax_cross_entropy_with_logits are the one hot version of labels used in sparse_softmax_cross_entropy_with_logits.

\n\n

Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.

\n", "system": "" }, { "instruction": "Will scikit-learn utilize GPU?", "input": "", "output": "

Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image with a built-in support.

\n

Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support.

\n
\n

Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn?

\n

Deep learning and reinforcement learning both require a rich\nvocabulary to define an architecture, with deep learning additionally\nrequiring GPUs for efficient computing. However, neither of these fit\nwithin the design constraints of scikit-learn; as a result, deep\nlearning and reinforcement learning are currently out of scope for\nwhat scikit-learn seeks to achieve.

\n
\n

Extracted from http://scikit-learn.org/stable/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn

\n
\n

Will you add GPU support in scikit-learn?

\n

No, or at least not in the near future. The main reason is that GPU\nsupport will introduce many software dependencies and introduce\nplatform specific issues. scikit-learn is designed to be easy to\ninstall on a wide variety of platforms. Outside of neural networks,\nGPUs don\u2019t play a large role in machine learning today, and much\nlarger gains in speed can often be achieved by a careful choice of\nalgorithms.

\n
\n

Extracted from http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support

\n", "system": "" }, { "instruction": "Can Keras with Tensorflow backend be forced to use CPU or GPU at will?", "input": "", "output": "

If you want to force Keras to use CPU

\n\n

Way 1

\n\n
import os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"   # see issue #152\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\n
\n\n

before Keras / Tensorflow is imported.

\n\n

Way 2

\n\n

Run your script as

\n\n
$ CUDA_VISIBLE_DEVICES=\"\" ./your_keras_code.py\n
\n\n

See also

\n\n
    \n
  1. https://github.com/keras-team/keras/issues/152
  2. \n
  3. https://github.com/fchollet/keras/issues/4613
  4. \n
\n", "system": "" }, { "instruction": "How to apply gradient clipping in TensorFlow?", "input": "", "output": "

Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the AdamOptimizer.minimize() method.

\n

In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in this section in TensorFlow's API documentation. Specifically you'll need to substitute the call to the minimize() method with something like the following:

\n
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ngvs = optimizer.compute_gradients(cost)\ncapped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs]\ntrain_op = optimizer.apply_gradients(capped_gvs)\n
\n", "system": "" }, { "instruction": "ImportError: No module named tensorflow", "input": "", "output": "

Try installing tensorflow again with whatever version you want and with the option <code>--ignore-installed</code>, like:

\n\n
pip install tensorflow==1.2.0 --ignore-installed\n
\n\n

I solved the same issue using this command.

\n", "system": "" }, { "instruction": "Does model.compile() initialize all the weights and biases in Keras (tensorflow backend)?", "input": "", "output": "

When to use?

\n\n

If you're using compile, surely it must be after load_model(). After all, you need a model to compile. (PS: load_model automatically compiles the model with the optimizer that was saved along with the model)

\n\n

What does compile do?

\n\n

Compile defines the loss function, the optimizer and the metrics. That's all.

\n\n

It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights.

\n\n

You need a compiled model to train (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting.

\n\n

Do you need to use compile more than once?

\n\n

Only if:

\n\n\n\n

Consequences of compiling again:

\n\n

If you compile a model again, you will lose the optimizer states.

\n\n

This means that your training will suffer a little at the beginning until it adjusts the learning rate, the momentums, etc. But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine tuned weights).

\n", "system": "" }, { "instruction": "What is the difference between sparse_categorical_crossentropy and categorical_crossentropy?", "input": "", "output": "

Simply:

\n\n

Consider a classification problem with 5 categories (or classes).

\n\n

Consider now a classification problem with 3 classes.

\n\n

Many categorical models produce scce output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer cce output for model reliability.

\n
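<p>The relationship between the two label formats can be sketched in plain Python: given the same predicted probabilities, an integer (scce-style) label and its one-hot (cce-style) version yield the same loss value (a simplified illustration, not the Keras implementation):</p>

```python
import math

def categorical_crossentropy(one_hot_label, probs):
    """cce: the label is a one-hot vector."""
    return -sum(t * math.log(p) for t, p in zip(one_hot_label, probs))

def sparse_categorical_crossentropy(class_index, probs):
    """scce: the label is a single integer class index."""
    return -math.log(probs[class_index])

probs = [0.1, 0.7, 0.2]  # hypothetical model output over 3 classes
print(categorical_crossentropy([0, 1, 0], probs))  # same value...
print(sparse_categorical_crossentropy(1, probs))   # ...as this
```

\n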

There are a number of situations to use scce, including:

\n\n

220405: response to "one-hot encoding" comments:

\n

one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: training weight is a product of category, which is 0 for all categories except for the given one.

\n

cce and scce are a model OUTPUT. cce is a probability array of each category, totaling 1.0. scce shows the MOST LIKELY category, totaling 1.0.

\n

scce is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. cce is NOT one-hot.

\n", "system": "" }, { "instruction": "Can I use TensorBoard with Google Colab?", "input": "", "output": "

EDIT: You probably want to give the official %tensorboard magic a go, available from TensorFlow 1.13 onward.

\n
\n\n

Prior to the existence of the %tensorboard magic, the standard way to\nachieve this was to proxy network traffic to the Colab VM using\nngrok. A Colab example can be found here.

\n

These are the steps (the code snippets represent cells of type "code" in colab):

\n
    \n
  1. Get TensorBoard running in the background.
    \nInspired by this answer.

    \n
    LOG_DIR = '/tmp/log'\nget_ipython().system_raw(\n    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'\n    .format(LOG_DIR)\n)\n
    \n
  2. \n
  3. Download and unzip ngrok.
    \nReplace the link passed to wget with the correct download link for your OS.

    \n
    ! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n! unzip ngrok-stable-linux-amd64.zip\n
    \n
  4. \n
  5. Launch ngrok background process...

    \n
    get_ipython().system_raw('./ngrok http 6006 &')\n
    \n
  6. \n
\n

...and retrieve public url.\nSource

\n
    ! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n        "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"\n
\n", "system": "" }, { "instruction": "How to set adaptive learning rate for GradientDescentOptimizer?", "input": "", "output": "

First of all, tf.train.GradientDescentOptimizer is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the tf.train.AdagradOptimizer and the tf.train.AdamOptimizer, and these can be used as drop-in replacements.

\n\n

However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the learning_rate argument to the tf.train.GradientDescentOptimizer constructor can be a Tensor object. This allows you to compute a different value for the learning rate in each step, for example:

\n\n
learning_rate = tf.placeholder(tf.float32, shape=[])\n# ...\ntrain_step = tf.train.GradientDescentOptimizer(\n    learning_rate=learning_rate).minimize(mse)\n\nsess = tf.Session()\n\n# Feed different values for learning rate to each training step.\nsess.run(train_step, feed_dict={learning_rate: 0.1})\nsess.run(train_step, feed_dict={learning_rate: 0.1})\nsess.run(train_step, feed_dict={learning_rate: 0.01})\nsess.run(train_step, feed_dict={learning_rate: 0.01})\n
\n\n

Alternatively, you could create a scalar tf.Variable that holds the learning rate, and assign it each time you want to change the learning rate.

\n", "system": "" }, { "instruction": "TensorFlow saving into/loading a graph from a file", "input": "", "output": "

There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. Taking each of your sub-questions in turn:

\n\n
    \n
  1. The checkpoint files (produced e.g. by calling saver.save() on a tf.train.Saver object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling tf.import_graph_def()), which tells TensorFlow what to do with those weights. Note that calling saver.save() also produces a file containing a MetaGraphDef, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See the tutorial for more details.

  2. \n
  3. tf.train.write_graph() only writes the graph structure; not the weights.

  4. \n
  5. Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.)

  6. \n
  7. A frozen graph can be loaded using tf.import_graph_def(). In this case, the weights are (typically) embedded in the graph, so you don't need to load a separate checkpoint.

  8. \n
  9. The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the inputName and outputName strings that are passed to TensorFlowClassifier.initializeTensorFlow().

  10. \n
  11. The GraphDef is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as the MetaGraphDef libraries, Keras, and skflow build on these mechanisms to provide more convenient ways to save and restore an entire model.

  12. \n
\n", "system": "" }, { "instruction": "What is the difference between np.mean and tf.reduce_mean?", "input": "", "output": "

The functionality of <code>numpy.mean</code> and <code>tensorflow.reduce_mean</code> is the same. They do the same thing. You can see that from the documentation for numpy and tensorflow. Let's look at an example:

\n
c = np.array([[3.,4], [5.,6], [6.,7]])\nprint(np.mean(c,1))\n\nMean = tf.reduce_mean(c,1)\nwith tf.Session() as sess:\n    result = sess.run(Mean)\n    print(result)\n
\n

Output

\n
[ 3.5  5.5  6.5]\n[ 3.5  5.5  6.5]\n
\n

Here you can see that when <code>axis</code> (numpy) or <code>reduction_indices</code> (tensorflow) is 1, it computes the mean across (3,4), (5,6) and (6,7); so 1 defines the axis across which the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea.

\n

Now what are the differences between them?

\n

You can compute the numpy operation anywhere on python. But in order to do a tensorflow operation, it must be done inside a tensorflow Session. You can read more about it here. So when you need to perform any computation for your tensorflow graph(or structure if you will), it must be done inside a tensorflow Session.

\n

Let's look at another example.

\n
npMean = np.mean(c)\nprint(npMean+1)\n\ntfMean = tf.reduce_mean(c)\nAdd = tfMean + 1\nwith tf.Session() as sess:\n    result = sess.run(Add)\n    print(result)\n
\n

We could increase the mean by 1 in numpy, as you naturally would, but in order to do it in tensorflow, you need to perform it in a <code>Session</code>; without using a <code>Session</code> you can't do that. In other words, when you are computing <code>tfMean = tf.reduce_mean(c)</code>, tensorflow doesn't compute it then. It only computes that in a <code>Session</code>. But numpy computes that instantly, when you write <code>np.mean()</code>.

\n

I hope it makes sense.

\n", "system": "" }, { "instruction": "What does global_step mean in Tensorflow?", "input": "", "output": "

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one. Have a look at optimizer.minimize().

\n\n

You can get the global_step value using tf.train.global_step().\nAlso handy are the utility methods tf.train.get_global_step or tf.train.get_or_create_global_step.

\n\n

0 is the initial value of the global step in this context.

\n", "system": "" }, { "instruction": "AttributeError: 'Tensor' object has no attribute 'numpy'", "input": "", "output": "

Since the accepted answer did not solve the problem for me, I thought this might be helpful for people who face this problem and already have tensorflow version >= 2.2.0 and eager execution enabled.

\n

The issue seems to be that, for certain functions during the fitting <code>model.fit()</code>,\nthe <code>@tf.function</code> decorator prohibits the execution of functions like <code>tensor.numpy()</code> for performance reasons.

\n

The solution for me was to pass the flag run_eagerly=True to the model.compile() like this:

\n
model.compile(..., run_eagerly=True)\n
\n", "system": "" }, { "instruction": "How to get stable results with TensorFlow, setting random seed", "input": "", "output": "

Setting the current TensorFlow random seed affects the current default graph only. Since you are creating a new graph for your training and setting it as default (with g.as_default():), you must set the random seed within the scope of that with block.

\n\n

For example, your loop should look like the following:

\n\n
for i in range(3):\n  g = tf.Graph()\n  with g.as_default():\n    tf.set_random_seed(1)\n    accuracy_result, average_error = network.train_network(\n        parameters, inputHeight, inputWidth, inputChannels, outputClasses)\n
\n\n

Note that this will use the same random seed for each iteration of the outer for loop. If you want to use a different—but still deterministic—seed in each iteration, you can use tf.set_random_seed(i + 1).

\n", "system": "" }, { "instruction": "Tensorflow set CUDA_VISIBLE_DEVICES within jupyter", "input": "", "output": "

You can set environment variables in the notebook using <code>os.environ</code>. Do the following before initializing TensorFlow to limit TensorFlow to the first GPU.

\n\n
import os\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"   # see issue #152\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n
\n\n

You can double check that you have the correct devices visible to TF

\n\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n\n

I tend to use it from utility module like notebook_util

\n\n
import notebook_util\nnotebook_util.pick_gpu_lowest_memory()\nimport tensorflow as tf\n
\n", "system": "" }, { "instruction": "How to get Tensorflow tensor dimensions (shape) as int values?", "input": "", "output": "

To get the shape as a list of ints, do tensor.get_shape().as_list().

\n\n

To complete your <code>tf.shape()</code> call, try <code>tensor2 = tf.reshape(tensor, [num_rows*num_cols, 1])</code>. Or you can directly do <code>tensor2 = tf.reshape(tensor, [-1, 1])</code>, where the first dimension will be inferred.

\n", "system": "" }, { "instruction": "How to solve "AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key"?", "input": "", "output": "

The versions I got through <code>pip show protobuf</code> and <code>protoc --version</code> were different. The version in pip was a bit outdated.

\n

After I upgraded the pip version with

\n
pip install --upgrade protobuf\n
\n

the problem was solved.

\n", "system": "" }, { "instruction": "What does batch, repeat, and shuffle do with TensorFlow Dataset?", "input": "", "output": "

Update: Here is a small Colab notebook for a demonstration of this answer.

\n
\n

Imagine, you have a dataset: [1, 2, 3, 4, 5, 6], then:

\n

How ds.shuffle() works

\n

dataset.shuffle(buffer_size=3) will allocate a buffer of size 3 for picking random entries. This buffer will be connected to the source dataset.\nWe could imagine it like this:

\n
Random buffer\n   |\n   |   Source dataset where all other elements live\n   |         |\n   \u2193         \u2193\n[1,2,3] <= [4,5,6]\n
\n

Let's assume that entry 2 was taken from the random buffer. Free space is filled by the next element from the source buffer, that is 4:

\n
2 <= [1,3,4] <= [5,6]\n
\n

We continue reading till nothing is left:

\n
1 <= [3,4,5] <= [6]\n5 <= [3,4,6] <= []\n3 <= [4,6]   <= []\n6 <= [4]     <= []\n4 <= []      <= []\n
\n

How ds.repeat() works

\n

As soon as all the entries are read from the dataset and you try to read the next element, the dataset will throw an error.\nThat's where ds.repeat() comes into play. It will re-initialize the dataset, making it again like this:

\n
[1,2,3] <= [4,5,6]\n
\n

What will ds.batch() produce

\n

The ds.batch() will take the first batch_size entries and make a batch out of them. So, a batch size of 3 for our example dataset will produce two batch records:

\n
[2,1,5]\n[3,6,4]\n
\n
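<p>Setting the shuffling aside, the combined effect of repeat and batch alone can be sketched in plain Python (illustrative only, not the tf.data implementation):</p>

```python
def repeat(dataset, count):
    """Rough model of ds.repeat(count): re-iterate the data `count` times."""
    return [x for _ in range(count) for x in dataset]

def batch(dataset, batch_size):
    """Rough model of ds.batch(batch_size): chunk consecutive elements."""
    return [dataset[i:i + batch_size]
            for i in range(0, len(dataset), batch_size)]

data = [1, 2, 3, 4, 5, 6]
print(batch(repeat(data, 2), 3))
# [[1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6]]
```

\n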

As we have a <code>ds.repeat()</code> before the batch, the generation of the data will continue. But the order of the elements will be different, due to <code>ds.shuffle()</code>. What should be taken into account is that 6 will never be present in the first batch, due to the size of the random buffer.

\n", "system": "" }, { "instruction": "When importing tensorflow, I get the following error: No module named 'numpy.core._multiarray_umath'", "input": "", "output": "

I also had the same issue.\nIt got resolved once I upgraded numpy from 1.15.4 to 1.16.1.

\n

If you're using pip:\npip install numpy --upgrade

\n

The numpy that came with Anaconda3 is version 1.15.4, so I upgraded it and it worked.

\n
\n

Side note: if you're also using scikit-image in your script, be aware that numpy 1.16.3 has a conflict with old versions of scikit-image (e.g. you may get ImportError: cannot import name '_validate_lengths'). In that case, pip install --upgrade scikit-image from terminal solved the issue for me.

\n", "system": "" }, { "instruction": "Using a pre-trained word embedding (word2vec or Glove) in TensorFlow", "input": "", "output": "

There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup().

\n\n
    \n
  1. Simply create W as a tf.constant() that takes embedding as its value:

    \n\n
    W = tf.constant(embedding, name=\"W\")\n
    \n\n

    This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples.

  2. \n
  3. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder():

    \n\n
    W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),\n                trainable=False, name=\"W\")\n\nembedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])\nembedding_init = W.assign(embedding_placeholder)\n\n# ...\nsess = tf.Session()\n\nsess.run(embedding_init, feed_dict={embedding_placeholder: embedding})\n
    \n\n

This avoids storing a copy of <code>embedding</code> in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the <code>tf.Variable</code>). Note that I've assumed that you want to hold the embedding matrix constant during training, so <code>W</code> is created with <code>trainable=False</code>.

  4. \n
  5. If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following:

    \n\n
    W = tf.Variable(...)\n\nembedding_saver = tf.train.Saver({\"name_of_variable_in_other_model\": W})\n\n# ...\nsess = tf.Session()\nembedding_saver.restore(sess, \"checkpoint_filename.ckpt\")\n
  6. \n
\n", "system": "" }, { "instruction": "How to stack multiple lstm in keras?", "input": "", "output": "

You need to add return_sequences=True to the first layer so that its output tensor has ndim=3 (i.e. batch size, timesteps, hidden state).

\n\n

Please see the following example:

\n\n
# expected input data shape: (batch_size, timesteps, data_dim)\nmodel = Sequential()\nmodel.add(LSTM(32, return_sequences=True,\n               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32\nmodel.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32\nmodel.add(LSTM(32))  # return a single vector of dimension 32\nmodel.add(Dense(10, activation='softmax'))\n
\n\n

From: https://keras.io/getting-started/sequential-model-guide/ (search for \"stacked lstm\")

\n", "system": "" }, { "instruction": "How to choose cross-entropy loss in TensorFlow?", "input": "", "output": "

Preliminary facts

\n\n\n\n

Sigmoid functions family

\n\n\n\n

As stated earlier, the sigmoid loss function is for binary classification.\nBut the tensorflow functions are more general and allow doing\nmulti-label classification, when the classes are independent.\nIn other words, <code>tf.nn.sigmoid_cross_entropy_with_logits</code> solves N\nbinary classifications at once.

\n\n

The labels must be one-hot encoded or can contain soft class probabilities.

\n\n

<code>tf.losses.sigmoid_cross_entropy</code> in addition allows you to set the in-batch weights,\ni.e. make some examples more important than others.\n<code>tf.nn.weighted_cross_entropy_with_logits</code> allows you to set class weights\n(remember, the classification is binary), i.e. make positive errors larger than\nnegative errors. This is useful when the training data is unbalanced.

\n\n
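<p>For intuition, the per-element sigmoid loss can be sketched in plain Python. The naive form <code>-z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x))</code> overflows for large <code>|x|</code>, which is why <code>tf.nn.sigmoid_cross_entropy_with_logits</code> documents the equivalent, numerically stable form <code>max(x, 0) - x*z + log(1 + exp(-|x|))</code>:</p>

```python
import math

def stable_sigmoid_xent(x, z):
    """Stable form documented for tf.nn.sigmoid_cross_entropy_with_logits."""
    return max(x, 0) - x * z + math.log1p(math.exp(-abs(x)))

def naive_sigmoid_xent(x, z):
    """Direct definition; fails numerically for large |x|."""
    p = 1 / (1 + math.exp(-x))
    return -z * math.log(p) - (1 - z) * math.log(1 - p)

print(stable_sigmoid_xent(2.0, 1.0), naive_sigmoid_xent(2.0, 1.0))  # both ~0.1269
print(stable_sigmoid_xent(1000.0, 0.0))  # 1000.0 (the naive form would fail here)
```

\n\n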

Softmax functions family

\n\n\n\n

These loss functions should be used for multinomial mutually exclusive classification,\ni.e. pick one out of N classes. Also applicable when N = 2.

\n\n

The labels must be one-hot encoded or can contain soft class probabilities:\na particular example can belong to class A with 50% probability and class B\nwith 50% probability. Note that strictly speaking it doesn't mean that\nit belongs to both classes, but one can interpret the probabilities this way.

\n\n

Just like in the sigmoid family, <code>tf.losses.softmax_cross_entropy</code> allows you\nto set the in-batch weights, i.e. make some examples more important than others.\nAs far as I know, as of tensorflow 1.3, there's no built-in way to set class weights.

\n\n

[UPD] In tensorflow 1.5, v2 version was introduced and the original softmax_cross_entropy_with_logits loss got deprecated. The only difference between them is that in a newer version, backpropagation happens into both logits and labels (here's a discussion why this may be useful).

\n\n

Sparse functions family

\n\n\n\n

Like ordinary softmax above, these loss functions should be used for\nmultinomial mutually exclusive classification, i.e. pick one out of N classes.\nThe difference is in labels encoding: the classes are specified as integers (class index),\nnot one-hot vectors. Obviously, this doesn't allow soft classes, but it\ncan save some memory when there are thousands or millions of classes.\nHowever, note that logits argument must still contain logits per each class,\nthus it consumes at least [batch_size, classes] memory.

\n\n
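To make the one-hot vs. integer-label distinction concrete, here is a plain-Python sketch (an illustration, not the library code) showing that the two encodings yield the same loss for a hard label:

```python
import math

def _log_softmax_norm(logits):
    m = max(logits)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in logits))

def softmax_xent(logits, onehot):
    # "Dense" version: labels are a full probability vector
    log_z = _log_softmax_norm(logits)
    return sum(-p * (x - log_z) for p, x in zip(onehot, logits))

def sparse_softmax_xent(logits, class_index):
    # "Sparse" version: the label is just an integer class index
    log_z = _log_softmax_norm(logits)
    return log_z - logits[class_index]

logits = [2.0, 1.0, 0.1]
dense = softmax_xent(logits, [0.0, 1.0, 0.0])  # one-hot label
sparse = sparse_softmax_xent(logits, 1)        # integer label
# dense == sparse: only the label encoding differs
```

The sparse form saves the memory of the label vectors, but the logits still have one entry per class.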

Like above, tf.losses version has a weights argument which allows\nto set the in-batch weights.

\n\n

Sampled softmax functions family

\n\n\n\n

These functions provide another alternative for dealing with a huge number of classes.\nInstead of computing and comparing an exact probability distribution, they compute\na loss estimate from a random sample.

\n\n

The arguments weights and biases specify a separate fully-connected layer that\nis used to compute the logits for a chosen sample.

\n\n

Like above, labels are not one-hot encoded, but have the shape [batch_size, num_true].

\n\n

Sampled functions are only suitable for training. In test time, it's recommended to\nuse a standard softmax loss (either sparse or one-hot) to get an actual distribution.

\n\n

Another alternative loss is tf.nn.nce_loss, which performs noise-contrastive estimation (if you're interested, see this very detailed discussion). I've included this function in the softmax family, because NCE guarantees approximation to softmax in the limit.

\n", "system": "" }, { "instruction": "How to add regularizations in TensorFlow?", "input": "", "output": "

As you say in the second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized.

\n

The losses are collected in the graph, and you need to manually add them to your cost function like this.

\n
  reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\n  reg_constant = 0.01  # Choose an appropriate one.\n  loss = my_normal_loss + reg_constant * sum(reg_losses)\n
\n", "system": "" }, { "instruction": "Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads", "input": "", "output": "

The inter_op_parallelism_threads and intra_op_parallelism_threads options are documented in the source of the tf.ConfigProto protocol buffer. These options configure two thread pools used by TensorFlow to parallelize execution, as the comments describe:

\n
// The execution of an individual op (for some op types) can be\n// parallelized on a pool of intra_op_parallelism_threads.\n// 0 means the system picks an appropriate number.\nint32 intra_op_parallelism_threads = 2;\n\n// Nodes that perform blocking operations are enqueued on a pool of\n// inter_op_parallelism_threads available in each process.\n//\n// 0 means the system picks an appropriate number.\n//\n// Note that the first Session created in the process sets the\n// number of threads for all future sessions unless use_per_session_threads is\n// true or session_inter_op_thread_pool is configured.\nint32 inter_op_parallelism_threads = 5;\n
\n

There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:

\n\n
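For example, a sketch of setting both options through the TF 1.x API (the thread counts here are arbitrary examples, not recommendations):

```python
import tensorflow as tf

# Let each op use up to 4 threads internally, and run at most
# 2 independent ops concurrently.
config = tf.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=2)
sess = tf.Session(config=config)
```

In TF 2.x the same knobs are exposed as tf.config.threading.set_intra_op_parallelism_threads() and tf.config.threading.set_inter_op_parallelism_threads().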

Finally, both configuration options take a default value of 0, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.

\n", "system": "" }, { "instruction": "How do display different runs in TensorBoard?", "input": "", "output": "

In addition to TensorBoard scanning subdirectories (so you can pass a directory containing the directories with your runs), you can also pass multiple directories to TensorBoard explicitly and give custom names (example taken from the --help output):

\n
tensorboard --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2\n
\n

More information can be found at the TensorBoard documentation.

\n

In recent versions of TensorBoard, aliasing this way requires a different argument, however its use is discouraged (quote from current documentation on github - linked above):

\n
\n

Logdir & Logdir_spec (Legacy Mode)

\n

You may also pass a comma separated list of log directories, and\nTensorBoard will watch each directory. You can also assign names to\nindividual log directories by putting a colon between the name and the\npath, as in

\n

tensorboard --logdir_spec name1:/path/to/logs/1,name2:/path/to/logs/2

\n

This flag (--logdir_spec) is discouraged and can usually be avoided.\nTensorBoard walks log directories recursively; for finer-grained\ncontrol, prefer using a symlink tree. Some features may not work when\nusing --logdir_spec instead of --logdir.

\n
\n", "system": "" }, { "instruction": "How to remove cuda completely from ubuntu?", "input": "", "output": "

From cuda 11.4 onwards, an uninstaller script has been provided. Use it for the uninstallation:

\n
# To uninstall cuda\nsudo /usr/local/cuda-11.4/bin/cuda-uninstaller \n# To uninstall nvidia\nsudo /usr/bin/nvidia-uninstall\n
\n

If you are using cuda 11.3 or earlier refer to the section below for how to remove it.

\n

If you get the problem of broken packages, it is because you previously added the Nvidia repo to your apt sources list. Run the following to edit it:

\n
sudo vim /etc/apt/sources.list\n
\n

Go to the line containing the reference to the Nvidia repo and comment it out by prepending # to the line, e.g.:

\n
#deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /\n
\n

Then run

\n
sudo apt-get update \n
\n

This will fix the problem.

\n

Removing Cuda 11.3 and earlier

\n

There are two things, nvidia drivers and cuda toolkit, which you may want to remove.\nIf you have installed using apt-get, use the following to remove the packages completely from the system:

\n

To remove cuda toolkit:

\n
sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*" \n
\n

To remove Nvidia drivers:

\n
sudo apt-get --purge remove "*nvidia*"\n
\n

If you have installed via source files (assuming the default location to be /usr/local) then remove it using:

\n
sudo rm -rf /usr/local/cuda*\n
\n

References: Nvidia uninstallation

\n", "system": "" }, { "instruction": "What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices?", "input": "", "output": "

from_tensors combines the input and returns a dataset with a single element:

\n
>>> t = tf.constant([[1, 2], [3, 4]])\n>>> ds = tf.data.Dataset.from_tensors(t)\n>>> [x for x in ds]\n[<tf.Tensor: shape=(2, 2), dtype=int32, numpy=\n array([[1, 2],\n        [3, 4]], dtype=int32)>]\n
\n

from_tensor_slices creates a dataset with a separate element for each row of the input tensor:

\n
>>> t = tf.constant([[1, 2], [3, 4]])\n>>> ds = tf.data.Dataset.from_tensor_slices(t)\n>>> [x for x in ds]\n[<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>,\n <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>]\n
\n", "system": "" }, { "instruction": "Higher validation accuracy, than training accurracy using Tensorflow and Keras", "input": "", "output": "

This happens when you use Dropout, since the behaviour differs between training and testing.

\n\n

When training, a percentage of the features are set to zero (50% in your case since you are using Dropout(0.5)). When testing, all features are used (and are scaled appropriately). So the model at test time is more robust - and can lead to higher testing accuracies.
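A tiny plain-Python sketch of this asymmetry, using a fixed mask for illustration (real frameworks sample the mask randomly; the scaling mirrors the inverted-dropout convention Keras uses):

```python
def dropout_train(features, mask, rate=0.5):
    # Inverted dropout: kept activations are scaled by 1/(1-rate)
    # so the expected activation matches test time.
    scale = 1.0 / (1.0 - rate)
    return [f * m * scale for f, m in zip(features, mask)]

def dropout_test(features):
    return list(features)  # every feature is used, no extra scaling

feats = [0.2, 0.4, 0.6, 0.8]
train_out = dropout_train(feats, mask=[1, 0, 1, 0])  # 50% dropped
test_out = dropout_test(feats)
```

At train time half the signal is zeroed (and the rest amplified); at test time the full, un-noised signal is available, which is why validation metrics can come out higher.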

\n", "system": "" }, { "instruction": "Using Keras & Tensorflow with AMD GPU", "input": "", "output": "

I'm writing an OpenCL 1.2 backend for Tensorflow at https://github.com/hughperkins/tensorflow-cl

\n\n

This fork of tensorflow for OpenCL has the following characteristics:

\n\n\n\n

This is not the only OpenCL fork of Tensorflow available. There is also a fork being developed by Codeplay https://www.codeplay.com , using ComputeCpp, https://www.codeplay.com/products/computesuite/computecpp Their fork has stronger requirements than my own, as far as I know, in terms of which specific GPU devices it works on. You would need to check the Platform Support Notes (at the bottom of the ComputeCpp page) to determine whether your device is supported. The Codeplay fork is actually an official Google fork, which is here: https://github.com/benoitsteiner/tensorflow-opencl

\n", "system": "" }, { "instruction": "How can I use a pre-trained neural network with grayscale images?", "input": "", "output": "

The model's architecture cannot be changed because the weights have been trained for a specific input configuration. Replacing the first layer with your own would pretty much render the rest of the weights useless.

\n\n

-- Edit: elaboration suggested by Prune--
\nCNNs are built so that as they go deeper, they can extract high-level features derived from the lower-level features that the previous layers extracted. By removing the initial layers of a CNN, you are destroying that hierarchy of features because the subsequent layers won't receive the features that they are supposed to as their input. In your case the second layer has been trained to expect the features of the first layer. By replacing your first layer with random weights, you are essentially throwing away any training that has been done on the subsequent layers, as they would need to be retrained. I doubt that they could retain any of the knowledge learned during the initial training.
\n--- end edit ---

\n\n

There is an easy way, though, to make your model work with grayscale images: just make each image appear to be RGB. The easiest way to do so is to repeat the image array 3 times on a new dimension. Because you will have the same image over all 3 channels, the performance of the model should be the same as it was on RGB images.

\n\n

In numpy this can be easily done like this:

\n\n
print(grayscale_batch.shape)  # (64, 224, 224)\nrgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1)\nprint(rgb_batch.shape)  # (64, 224, 224, 3)\n
\n\n

The way this works is that it first creates a new dimension (to place the channels) and then it repeats the existing array 3 times on this new dimension.

\n\n

I'm also pretty sure that keras' ImageDataGenerator can load grayscale images as RGB.

\n", "system": "" }, { "instruction": "What is the purpose of the Tensorflow Gradient Tape?", "input": "", "output": "

With eager execution enabled, Tensorflow will calculate the values of tensors as they occur in your code. This means that it won't precompute a static graph for which inputs are fed in through placeholders. It also means that, to backpropagate errors, you have to keep track of the gradients of your computation and then apply these gradients to an optimiser.

\n\n

This is very different from running without eager execution, where you would build a graph and then simply use sess.run to evaluate your loss and then pass this into an optimiser directly.

\n\n

Fundamentally, because tensors are evaluated immediately, you don't have a graph to calculate gradients and so you need a gradient tape. It is not so much that it is just used for visualisation, but more that you cannot implement a gradient descent in eager mode without it.

\n\n
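To make the idea of a "tape" concrete, here is a toy pure-Python sketch (an illustration only, far simpler than TensorFlow's actual implementation) that records operations during the forward pass and replays them in reverse to accumulate gradients:

```python
class Tape:
    """Toy gradient tape: records a backward function per op."""
    def __init__(self):
        self.backward_ops = []  # replayed in reverse by .gradient()

    def gradient(self, target, source):
        grads = {id(target): 1.0}  # seed: d(target)/d(target) = 1
        for op in reversed(self.backward_ops):
            op(grads)
        return grads.get(id(source), 0.0)

class Var:
    def __init__(self, value):
        self.value = value

def mul(tape, a, b):
    out = Var(a.value * b.value)
    def backward(grads):
        g = grads.get(id(out), 0.0)
        grads[id(a)] = grads.get(id(a), 0.0) + g * b.value  # d(ab)/da = b
        grads[id(b)] = grads.get(id(b), 0.0) + g * a.value  # d(ab)/db = a
    tape.backward_ops.append(backward)
    return out

def add(tape, a, b):
    out = Var(a.value + b.value)
    def backward(grads):
        g = grads.get(id(out), 0.0)
        grads[id(a)] = grads.get(id(a), 0.0) + g
        grads[id(b)] = grads.get(id(b), 0.0) + g
    tape.backward_ops.append(backward)
    return out

tape = Tape()
x = Var(3.0)
y = add(tape, mul(tape, x, x), x)  # y = x*x + x, evaluated eagerly
dy_dx = tape.gradient(y, x)        # 2*x + 1 = 7 at x = 3
```

Because each value is computed immediately, the tape is the only record of how y was produced, which is exactly the role tf.GradientTape plays in eager mode.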

Obviously, Tensorflow could just keep track of every gradient for every computation on every tf.Variable. However, that could be a huge performance bottleneck. They expose a gradient tape so that you can control what areas of your code need the gradient information. Note that in non-eager mode, this will be statically determined based on the computational branches that are descendants of your loss but in eager mode there is no static graph and so no way of knowing.

\n", "system": "" }, { "instruction": "On Windows, running "import tensorflow" generates No module named "_pywrap_tensorflow" error", "input": "", "output": "

The problem was the cuDNN Library for me - for whatever reason cudnn-8.0-windows10-x64-v6.0 was NOT working - I used cudnn-8.0-windows10-x64-v5.1 - ALL GOOD!

\n\n

My setup working with Win10 64 and the Nvidia GTX780M:

\n\n\n\n

If you run Windows 32 be sure to get the 32 bit versions of the files mentioned above.

\n", "system": "" }, { "instruction": "How to do Xavier initialization on TensorFlow", "input": "", "output": "

Since version 0.8 there is a Xavier initializer, see here for the docs.

\n\n

You can use something like this:

\n\n
W = tf.get_variable(\"W\", shape=[784, 256],\n           initializer=tf.contrib.layers.xavier_initializer())\n
\n", "system": "" }, { "instruction": "How to assign a value to a TensorFlow variable?", "input": "", "output": "

In TF1, the statement x.assign(1) does not actually assign the value 1 to x, but rather creates a tf.Operation that you have to explicitly run to update the variable.* A call to Operation.run() or Session.run() can be used to run the operation:

\n\n
assign_op = x.assign(1)\nsess.run(assign_op)  # or `assign_op.op.run()`\nprint(x.eval())\n# ==> 1\n
\n\n

(* In fact, it returns a tf.Tensor, corresponding to the updated value of the variable, to make it easier to chain assignments.)

\n\n

However, in TF2 x.assign(1) will now assign the value eagerly:

\n\n
x.assign(1)\nprint(x.numpy())\n# ==> 1\n
\n", "system": "" }, { "instruction": "How to *actually* read CSV data in TensorFlow?", "input": "", "output": "

I think you are mixing up imperative and graph-construction parts here. The operation tf.train.shuffle_batch creates a new queue node, and a single node can be used to process the entire dataset. So I think you are hanging because you created a bunch of shuffle_batch queues in your for loop and didn't start queue runners for them.

\n\n

Normal input pipeline usage looks like this:

\n\n
    \n
  1. Add nodes like shuffle_batch to input pipeline
  2. (optional, to prevent unintentional graph modification) finalize graph
\n\n

--- end of graph construction, beginning of imperative programming --

\n\n
    \n
  1. tf.train.start_queue_runners
  2. while(True): session.run()
\n\n

To be more scalable (to avoid Python GIL), you could generate all of your data using TensorFlow pipeline. However, if performance is not critical, you can hook up a numpy array to an input pipeline by using slice_input_producer. Here's an example with some Print nodes to see what's going on (messages in Print go to stdout when node is run)

\n\n
import numpy as np\nimport tensorflow as tf\n\ntf.reset_default_graph()\n\nnum_examples = 5\nnum_features = 2\ndata = np.reshape(np.arange(num_examples*num_features), (num_examples, num_features))\nprint(data)\n\n(data_node,) = tf.train.slice_input_producer([tf.constant(data)], num_epochs=1, shuffle=False)\ndata_node_debug = tf.Print(data_node, [data_node], \"Dequeueing from data_node \")\ndata_batch = tf.train.batch([data_node_debug], batch_size=2)\ndata_batch_debug = tf.Print(data_batch, [data_batch], \"Dequeueing from data_batch \")\n\nsess = tf.InteractiveSession()\nsess.run(tf.initialize_all_variables())\ntf.get_default_graph().finalize()\ntf.train.start_queue_runners()\n\ntry:\n  while True:\n    print(sess.run(data_batch_debug))\nexcept tf.errors.OutOfRangeError as e:\n  print(\"No more inputs.\")
\n\n

You should see something like this

\n\n
[[0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]]\n[[0 1]\n [2 3]]\n[[4 5]\n [6 7]]\nNo more inputs.\n
\n\n

The \"8, 9\" numbers didn't fill up the full batch, so they didn't get produced. Also, tf.Print messages are printed to sys.stdout, so they show up separately in the terminal for me.

\n\n

PS: a minimal example of connecting batch to a manually initialized queue is in github issue 2193

\n\n

Also, for debugging purposes you might want to set timeout on your session so that your IPython notebook doesn't hang on empty queue dequeues. I use this helper function for my sessions

\n\n
def create_session():\n  config = tf.ConfigProto(log_device_placement=True)\n  config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM\n  config.operation_timeout_in_ms=60000   # terminate on long hangs\n  # create interactive session to register a default session\n  sess = tf.InteractiveSession(\"\", config=config)\n  return sess\n
\n\n

Scalability Notes:

\n\n
    \n
  1. tf.constant inlines a copy of your data into the Graph. There's a fundamental limit of 2GB on the size of the Graph definition, so that's an upper limit on the size of your data
  2. You could get around that limit by using v=tf.Variable and saving the data into there by running v.assign_op with a tf.placeholder on the right-hand side and feeding a numpy array to the placeholder (feed_dict)
  3. That still creates two copies of the data, so to save memory you could make your own version of slice_input_producer which operates on numpy arrays, and uploads rows one at a time using feed_dict
\n", "system": "" }, { "instruction": "In Keras, what exactly am I configuring when I create a stateful `LSTM` layer with N `units`?", "input": "", "output": "

You can check this question for further information, although it is based on Keras-1.x API.

\n\n

Basically, the unit means the dimension of the inner cells in LSTM. Because in LSTM, the dimension of the inner cell (C_t and C_{t-1} in the graph), the output mask (o_t in the graph) and the hidden/output state (h_t in the graph) should have the SAME dimension, your output's dimension should be unit-length as well.

\n\n

And the LSTM layer in Keras defines exactly one LSTM block, whose cells are of unit-length. If you set return_sequence=True, it will return something with shape: (batch_size, timespan, unit). If false, then it just returns the last output, in shape (batch_size, unit).

\n\n

As for the input, you should provide input for every timestamp. Basically, the shape is like (batch_size, timespan, input_dim), where input_dim can be different from the unit. If you just want to provide input at the first step, you can simply pad your data with zeros at other time steps.
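One concrete consequence of units is the layer's parameter count. Here is a plain-Python sketch of the standard LSTM parameter formula (4 gates, each with an input kernel, a recurrent kernel and a bias), which should match what Keras reports in model.summary() for an LSTM layer with default settings:

```python
def lstm_param_count(units, input_dim, use_bias=True):
    # 4 gates (input, forget, cell candidate, output), each with:
    #   input kernel:     input_dim x units
    #   recurrent kernel: units x units
    #   bias:             units (when use_bias=True, the default)
    per_gate = input_dim * units + units * units + (units if use_bias else 0)
    return 4 * per_gate

# e.g. LSTM(32) on inputs with 16 features per timestep
n = lstm_param_count(units=32, input_dim=16)  # 4 * (512 + 1024 + 32) = 6272
```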

\n", "system": "" }, { "instruction": "RuntimeError: tf.placeholder() is not compatible with eager execution", "input": "", "output": "

I found an easy solution here: disable Tensorflow eager execution

\n\n

Basically it is:

\n\n

tf.compat.v1.disable_eager_execution()

\n\n

With this, you disable the eager execution that is activated by default, and you don't need to touch the code much more.

\n", "system": "" }, { "instruction": "Making predictions with a TensorFlow model", "input": "", "output": "

In the \"Deep MNIST for Experts\" example, see this line:

\n\n
\n

We can now implement our regression model. It only takes one line! We\n multiply the vectorized input images x by the weight matrix W, add the\n bias b, and compute the softmax probabilities that are assigned to\n each class.

\n\n
y = tf.nn.softmax(tf.matmul(x,W) + b)\n
\n
\n\n

Just pull on node y and you'll have what you want.

\n\n
feed_dict = {x: [your_image]}\nclassification = sess.run(y, feed_dict)\nprint(classification)\n
\n\n

This applies to just about any model you create - you'll have computed the prediction probabilities as one of the last steps before computing the loss.

\n", "system": "" }, { "instruction": "How to inspect a Tensorflow .tfrecord file?", "input": "", "output": "

Found it!

\n\n
import tensorflow as tf\n\nfor example in tf.python_io.tf_record_iterator(\"data/foobar.tfrecord\"):\n    print(tf.train.Example.FromString(example))\n
\n\n

You can also add:

\n\n
from google.protobuf.json_format import MessageToJson\n...\njsonMessage = MessageToJson(tf.train.Example.FromString(example))\n
\n", "system": "" }, { "instruction": "Clearing Tensorflow GPU memory after model execution", "input": "", "output": "

You can use the numba library to release all the GPU memory:

\n
pip install numba \n
\n
from numba import cuda \ndevice = cuda.get_current_device()\ndevice.reset()\n
\n

This will release all the memory

\n", "system": "" }, { "instruction": "How do I disable TensorFlow's eager execution?", "input": "", "output": "

Assuming you are using the Tensorflow 2.0 preview release, which has eager execution enabled by default: there is a disable_eager_execution() in the v1 API, which you can put at the front of your code like:

\n
import tensorflow as tf\n    \ntf.compat.v1.disable_eager_execution()\n
\n

On the other hand, if you are not using 2.0 preview, please check if you accidentally enabled eager execution somewhere.

\n", "system": "" }, { "instruction": "How to import keras from tf.keras in Tensorflow?", "input": "", "output": "

Use the keras module from tensorflow like this:

\n\n

import tensorflow as tf

\n\n

Import classes

\n\n

from tensorflow.python.keras.layers import Input, Dense

\n\n

or use directly

\n\n

dense = tf.keras.layers.Dense(...)

\n\n

EDIT Tensorflow 2

\n\n

from tensorflow.keras.layers import Input, Dense

\n\n

and the rest stays the same.

\n", "system": "" }, { "instruction": "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize,", "input": "", "output": "

I've seen this error message for three different reasons, with different solutions:

\n

1. You have cache issues

\n

I regularly work around this error by shutting down my python process, removing the ~/.nv directory (on linux, rm -rf ~/.nv), and restarting the Python process. I don't exactly know why this works. It's probably at least partly related to the second option:

\n

2. You're out of memory

\n

The error can also show up if you run out of graphics card RAM. With an nvidia GPU you can check graphics card memory usage with nvidia-smi. This will give you a readout of how much GPU RAM you have in use (something like 6025MiB / 6086MiB if you're almost at the limit) as well as a list of what processes are using GPU RAM.

\n

If you've run out of RAM, you'll need to restart the process (which should free up the RAM) and then take a less memory-intensive approach. A few options are:

\n\n
import keras\nimport tensorflow as tf\n\nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.9  # 0.6 sometimes works better for folks\nkeras.backend.tensorflow_backend.set_session(tf.Session(config=config))\n
\n

This can slow down your model evaluation if not used together with the items above, presumably since the large data set will have to be swapped in and out to fit into the small amount of memory you've allocated.

\n

A second option is to have TensorFlow start out using only a minimum amount of memory and then allocate more as needed (documented here):

\n
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'\n
\n

3. You have incompatible versions of CUDA, TensorFlow, NVIDIA drivers, etc.

\n

If you've never had similar models working, you're not running out of VRAM and your cache is clean, I'd go back and set up CUDA + TensorFlow using the best available installation guide - I have had the most success with following the instructions at https://www.tensorflow.org/install/gpu rather than those on the NVIDIA / CUDA site. Lambda Stack is also a good way to go.

\n", "system": "" }, { "instruction": "NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array", "input": "", "output": "

For me, the issue occurred when upgrading from numpy 1.19 to 1.20 and using ray's RLlib, which uses tensorflow 2.2 internally.\nSimply downgrading with

\n
pip install numpy==1.19.5\n
\n

solved the problem; the error did not occur anymore.

\n

Update (comment by @codeananda): You can also update to a newer TensorFlow (2.6+) version now that resolves the problem (pip install -U tensorflow).

\n", "system": "" }, { "instruction": "Get the value of some weights in a model trained by TensorFlow", "input": "", "output": "

In TensorFlow, trained weights are represented by tf.Variable objects. If you created a tf.Variable—e.g. called v—yourself, you can get its value as a NumPy array by calling sess.run(v) (where sess is a tf.Session).

\n\n

If you do not currently have a pointer to the tf.Variable, you can get a list of the trainable variables in the current graph by calling tf.trainable_variables(). This function returns a list of all trainable tf.Variable objects in the current graph, and you can select the one that you want by matching the v.name property. For example:

\n\n
# Desired variable is called \"tower_2/filter:0\".\nvar = [v for v in tf.trainable_variables() if v.name == \"tower_2/filter:0\"][0]\n
\n", "system": "" }, { "instruction": "How could I use batch normalization in TensorFlow?", "input": "", "output": "

Update July 2016 The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either contrib/layers, tflearn, or slim.

\n\n

Previous answer if you want to DIY:\nThe documentation string for this has improved since the release - see the docs comment in the master branch instead of the one you found. It clarifies, in particular, that it's the output from tf.nn.moments.

\n\n

You can see a very simple example of its use in the batch_norm test code. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!):

\n\n
\"\"\"A helper class for managing batch normalization state.                   \n\nThis class is designed to simplify adding batch normalization               \n(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by                    \nmanaging the state variables associated with it.                            \n\nImportant use note:  The function get_assigner() returns                    \nan op that must be executed to save the updated state.                      \nA suggested way to do this is to make execution of the                      \nmodel optimizer force it, e.g., by:                                         \n\n  update_assignments = tf.group(bn1.get_assigner(),                         \n                                bn2.get_assigner())                         \n  with tf.control_dependencies([optimizer]):                                \n    optimizer = tf.group(update_assignments)                                \n\n\"\"\"\n\nimport tensorflow as tf\n\n\nclass ConvolutionalBatchNormalizer(object):\n  \"\"\"Helper class that groups the normalization logic and variables.        \n\n  Use:                                                                      \n      ewma = tf.train.ExponentialMovingAverage(decay=0.99)                  \n      bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)           \n      update_assignments = bn.get_assigner()                                \n      x = bn.normalize(y, train=training?)                                  \n      (the output x will be batch-normalized).                              
\n  \"\"\"\n\n  def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):\n    self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),\n                            trainable=False)\n    self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),\n                                trainable=False)\n    self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))\n    self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))\n    self.ewma_trainer = ewma_trainer\n    self.epsilon = epsilon\n    self.scale_after_norm = scale_after_norm\n\n  def get_assigner(self):\n    \"\"\"Returns an EWMA apply op that must be invoked after optimization.\"\"\"\n    return self.ewma_trainer.apply([self.mean, self.variance])\n\n  def normalize(self, x, train=True):\n    \"\"\"Returns a batch-normalized version of x.\"\"\"\n    if train:\n      mean, variance = tf.nn.moments(x, [0, 1, 2])\n      assign_mean = self.mean.assign(mean)\n      assign_variance = self.variance.assign(variance)\n      with tf.control_dependencies([assign_mean, assign_variance]):\n        return tf.nn.batch_norm_with_global_normalization(\n            x, mean, variance, self.beta, self.gamma,\n            self.epsilon, self.scale_after_norm)\n    else:\n      mean = self.ewma_trainer.average(self.mean)\n      variance = self.ewma_trainer.average(self.variance)\n      local_beta = tf.identity(self.beta)\n      local_gamma = tf.identity(self.gamma)\n      return tf.nn.batch_norm_with_global_normalization(\n          x, mean, variance, local_beta, local_gamma,\n          self.epsilon, self.scale_after_norm)\n
\n\n

Note that I called it a ConvolutionalBatchNormalizer because it pins the use of tf.nn.moments to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.

\n\n

Feedback appreciated if you use it.

\n", "system": "" }, { "instruction": "tensorflow:AttributeError: 'module' object has no attribute 'mul'", "input": "", "output": "

According to the tensorflow 1.0.0 release notes,

\n\n
\n

tf.mul, tf.sub and tf.neg are deprecated in favor of tf.multiply, tf.subtract and tf.negative.

\n
\n\n

You'll need to replace tf.mul with tf.multiply.

\n", "system": "" }, { "instruction": "What is the default kernel initializer in tf.layers.conv2d and tf.layers.dense?", "input": "", "output": "\n\n

Great question! It is quite a trick to find out!

\n\n\n\n

In code:

\n\n
self.kernel = vs.get_variable('kernel',\n                                  shape=kernel_shape,\n                                  initializer=self.kernel_initializer,\n                                  regularizer=self.kernel_regularizer,\n                                  trainable=True,\n                                  dtype=self.dtype)\n
\n\n

Next step: what does the variable scope do when the initializer is None?

\n\n

Here it says:

\n\n
\n

If initializer is None (the default), the default initializer passed in\n the constructor is used. If that one is None too, we use a new\n glorot_uniform_initializer.

\n
\n\n

So the answer is: it uses the glorot_uniform_initializer

\n\n

For completeness the definition of this initializer:

\n\n
\n

The Glorot uniform initializer, also called Xavier uniform initializer.\n It draws samples from a uniform distribution within [-limit, limit]\n where limit is sqrt(6 / (fan_in + fan_out))\n where fan_in is the number of input units in the weight tensor\n and fan_out is the number of output units in the weight tensor.\n Reference: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

\n
\n\n
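As a quick sanity check of that formula in plain Python (the layer sizes here are arbitrary examples):

```python
import math
import random

def glorot_uniform_limit(fan_in, fan_out):
    # limit = sqrt(6 / (fan_in + fan_out))
    return math.sqrt(6.0 / (fan_in + fan_out))

# e.g. a dense layer mapping 784 inputs to 256 outputs
limit = glorot_uniform_limit(784, 256)
sample = [random.uniform(-limit, limit) for _ in range(1000)]
# every sample lies inside [-limit, limit]
```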

Edit: this is what I found in the code and documentation. Perhaps you could verify that the initialization looks like this by running eval on the weights!

\n", "system": "" }, { "instruction": "What is the proper way to install TensorFlow on Apple M1 in 2022", "input": "", "output": "

Conda Environment YAMLs

\n

TensorFlow 2.13+

\n

Distilling the official directions from Apple (as of 24 November 2024), one would create an environment using the following YAML:

\n

tf-metal-arm64.yaml

\n
name: tf-metal\nchannels:\n  - conda-forge\n  - nodefaults\ndependencies:\n  - python=3.11  ## specify desired version\n  - pip\n\n  ## uncomment for use with Jupyter\n  ## - ipykernel\n\n  ## PyPI packages\n  - pip:\n    - tensorflow\n    - tensorflow-metal\n
\n

TensorFlow <= 2.12 (original directions)

\n

Distilling the official directions from Apple (as of 13 July 2022), one would create an environment using the following YAML:

\n

tf-metal-arm64.yaml

\n
name: tf-metal\nchannels:\n  - apple\n  - conda-forge\ndependencies:\n  - python=3.9  ## specify desired version\n  - pip\n  - tensorflow-deps\n\n  ## uncomment for use with Jupyter\n  ## - ipykernel\n\n  ## PyPI packages\n  - pip:\n    - tensorflow-macos\n    - tensorflow-metal  ## optional, but recommended\n
\n

Edit to include additional packages.

\n

Creating environment

\n

Before creating the environment we need to know what the base architecture is. Check this with conda config --show subdir.

\n

Native (osx-arm64) base

\n

If you have installed a native osx-arm64 Miniforge variant, then you can create with:

\n
conda env create -n my_tf_env -f tf-metal-arm64.yaml\n
\n

Emulated (osx-64) base

\n

If you do not have a native base, then you will need to override the subdir setting:

\n
## create env\nCONDA_SUBDIR=osx-arm64 conda env create -n my_tf_env -f tf-metal-arm64.yaml\n\n## activate\nconda activate my_tf_env\n\n## permanently set the subdir\nconda config --env --set subdir osx-arm64\n
\n

Be sure to always activate the environment before installing or updating packages.

\n", "system": "" }, { "instruction": "Can I measure the execution time of individual operations with TensorFlow?", "input": "", "output": "

I have used the Timeline object to get the time of execution for each node in the graph:

\n\n\n\n
\n\n

Here is an example program that measures the performance of a matrix multiplication:

\n\n
import tensorflow as tf\nfrom tensorflow.python.client import timeline\n\nx = tf.random_normal([1000, 1000])\ny = tf.random_normal([1000, 1000])\nres = tf.matmul(x, y)\n\n# Run the graph with full trace option\nwith tf.Session() as sess:\n    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n    run_metadata = tf.RunMetadata()\n    sess.run(res, options=run_options, run_metadata=run_metadata)\n\n    # Create the Timeline object, and write it to a json\n    tl = timeline.Timeline(run_metadata.step_stats)\n    ctf = tl.generate_chrome_trace_format()\n    with open('timeline.json', 'w') as f:\n        f.write(ctf)\n
\n\n
\n\n

You can then open Google Chrome, go to the page chrome://tracing and load the timeline.json file.\nYou should see something like:

\n\n

\"timeline\"

\n", "system": "" }, { "instruction": "What's the difference between tf.Session() and tf.InteractiveSession()?", "input": "", "output": "

Mainly taken from official documentation:

\n\n
\n

The only difference with a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops.

\n
\n\n

This allows using it in interactive contexts, like a shell, since it avoids having to pass an explicit Session object to run ops:

\n\n
sess = tf.InteractiveSession()\na = tf.constant(5.0)\nb = tf.constant(6.0)\nc = a * b\n# We can just use 'c.eval()' without passing 'sess'\nprint(c.eval())\nsess.close()\n
\n\n

In other words, InteractiveSession saves typing, as it allows you to run ops without needing to constantly refer to the session object.

\n", "system": "" }, { "instruction": "Gradient Descent vs Adagrad vs Momentum in TensorFlow", "input": "", "output": "

Here is a brief explanation based on my understanding:

\n\n\n\n

I would say that SGD, Momentum and Nesterov are inferior to the last 3.
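To make the update rules concrete, here is a plain-Python sketch of a single step of SGD, momentum, and Adagrad on a toy 1-D problem. The learning rates and decay values here are illustrative only, not TensorFlow's defaults:

```python
import math

def sgd_step(x, grad, lr=0.1):
    # plain gradient descent: move against the gradient
    return x - lr * grad

def momentum_step(x, v, grad, lr=0.1, mu=0.9):
    # accumulate a velocity that smooths successive gradients
    v = mu * v - lr * grad
    return x + v, v

def adagrad_step(x, g2, grad, lr=0.1, eps=1e-8):
    # scale the step by the accumulated squared gradients per parameter
    g2 = g2 + grad * grad
    return x - lr * grad / (math.sqrt(g2) + eps), g2

# minimize f(x) = x^2 (gradient is 2x) starting from x = 1.0
x = 1.0
for _ in range(50):
    x = sgd_step(x, 2 * x)
print(abs(x) < 1e-4)  # True: even plain SGD converges on this convex toy problem
```

The per-parameter accumulator g2 is what lets Adagrad take large steps for rarely updated parameters and small steps for frequent ones.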

\n", "system": "" }, { "instruction": "Tensorflow: None of the MLIR optimization passes are enabled (registered 1)", "input": "", "output": "

MLIR is being used as another solution to implementing and optimizing Tensorflow logic. This informative message is benign and is saying MLIR was not being used. This is expected as in TF 2.3, the MLIR based implementation is still being developed and proven, so end users are generally not expected to use the MLIR implementation and are instead expected to use the non-MLIR feature complete implementation.

\n

Update: still experimental on version 2.9.1. On the docs it is written:

\n
\n

DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.

\n
\n", "system": "" }, { "instruction": "AttributeError: module 'tensorflow' has no attribute 'ConfigProto'", "input": "", "output": "

ConfigProto disappeared in tf 2.0, so an elegant solution is:

\n\n
import tensorflow as tf\n
\n\n

and then replace:

\n\n

tf.ConfigProto with tf.compat.v1.ConfigProto

\n\n

In fact, the compatibility module built into TF 2.0 for accessing the TF 1.x API, tf.compat.v1, is really helpful.

\n\n

Useful link: \nMigrate your TensorFlow 1.x code to TensorFlow 2.x:\nhttps://www.tensorflow.org/guide/migrate

\n", "system": "" }, { "instruction": "Installing Python3.6 alongside Python3.7 on Mac", "input": "", "output": "

Try using brew. For example, if you are already using Python 3:

\n\n
$ brew unlink python\n
\n\n

Then install python 3.6.5:

\n\n
$ brew install --ignore-dependencies https://raw.githubusercontent.com/Homebrew/homebrew-core/f2a764ef944b1080be64bd88dca9a1d80130c558/Formula/python.rb\n
\n\n

To get back to python 3.7.4_1 use:

\n\n
$ brew switch python 3.7.4_1\n
\n\n

And if need 3.6 again switch with:

\n\n
$ brew switch python 3.6.5_1\n
\n", "system": "" }, { "instruction": "Tensorflow: How do I convert a EagerTensor into a numpy array?", "input": "", "output": "

There is a .numpy() function which you can use, alternatively you could also do numpy.array(y). For example:

\n
import tensorflow as tf\nimport numpy as np\n\ntf.enable_eager_execution()\n\nx = tf.constant([1., 2.])\nprint(type(x))            # <type 'EagerTensor'>\nprint(type(x.numpy()))    # <type 'numpy.ndarray'>\nprint(type(np.array(x)))  # <type 'numpy.ndarray'>\n
\n

See the section in the eager execution guide.

\n", "system": "" }, { "instruction": "How to count total number of trainable parameters in a tensorflow model?", "input": "", "output": "

Loop over the shape of every variable in tf.trainable_variables().

\n\n
total_parameters = 0\nfor variable in tf.trainable_variables():\n    # shape is an array of tf.Dimension\n    shape = variable.get_shape()\n    print(shape)\n    print(len(shape))\n    variable_parameters = 1\n    for dim in shape:\n        print(dim)\n        variable_parameters *= dim.value\n    print(variable_parameters)\n    total_parameters += variable_parameters\nprint(total_parameters)\n
\n\n
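The arithmetic that loop performs is just the product over each variable's dimensions, summed across variables. A plain-Python sketch with made-up shapes:

```python
from functools import reduce
from operator import mul

def count_params(shapes):
    # total parameters = sum over variables of the product of their dimensions
    return sum(reduce(mul, shape, 1) for shape in shapes)

# e.g. a 3x3 conv kernel over RGB with 64 filters, its bias,
# and a dense layer mapping 1024 features to 10 classes, plus its bias
shapes = [(3, 3, 3, 64), (64,), (1024, 10), (10,)]
print(count_params(shapes))  # 1728 + 64 + 10240 + 10 = 12042
```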

Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/

\n", "system": "" }, { "instruction": "TensorFlow: InternalError: Blas SGEMM launch failed", "input": "", "output": "

Old question, but may help others.
\nTry closing interactive sessions active in other processes (if using an IPython Notebook, just restart the kernels). This helped me!

\nAdditionally, I use this code to close local sessions in this kernel during experiments:

\n\n
if 'session' in locals() and session is not None:\n    print('Close interactive session')\n    session.close()\n
\n", "system": "" }, { "instruction": "Tensorflow Tensorboard default port", "input": "", "output": "

In fact there is an option to change the default port ...

\n\n
tensorboard --logdir=/tmp  --port=8008\n
\n", "system": "" }, { "instruction": "FailedPreconditionError: Attempting to use uninitialized in Tensorflow", "input": "", "output": "

The FailedPreconditionError arises because the program is attempting to read a variable (named \"Variable_1\") before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their \"initializer\" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:

\n\n
tf.initialize_all_variables().run()\n
\n\n

Note that this answer assumes that, as in the question, you are using tf.InteractiveSession, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use tf.Session, and initialize as follows:

\n\n
init_op = tf.initialize_all_variables()\n\nsess = tf.Session()\nsess.run(init_op)\n
\n", "system": "" }, { "instruction": "How to export Keras .h5 to tensorflow .pb?", "input": "", "output": "

Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. Here is a blog post explaining how to do it using the utility script freeze_graph.py included in TensorFlow, which is the \"typical\" way it is done.

\n\n

However, I personally find it a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this:

\n\n
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):\n    \"\"\"\n    Freezes the state of a session into a pruned computation graph.\n\n    Creates a new computation graph where variable nodes are replaced by\n    constants taking their current value in the session. The new graph will be\n    pruned so subgraphs that are not necessary to compute the requested\n    outputs are removed.\n    @param session The TensorFlow session to be frozen.\n    @param keep_var_names A list of variable names that should not be frozen,\n                          or None to freeze all the variables in the graph.\n    @param output_names Names of the relevant graph outputs.\n    @param clear_devices Remove the device directives from the graph for better portability.\n    @return The frozen graph definition.\n    \"\"\"\n    graph = session.graph\n    with graph.as_default():\n        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))\n        output_names = output_names or []\n        output_names += [v.op.name for v in tf.global_variables()]\n        input_graph_def = graph.as_graph_def()\n        if clear_devices:\n            for node in input_graph_def.node:\n                node.device = \"\"\n        frozen_graph = tf.graph_util.convert_variables_to_constants(\n            session, input_graph_def, output_names, freeze_var_names)\n        return frozen_graph\n
\n\n

This is inspired by the implementation of freeze_graph.py. The parameters are similar to the script's, too. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variables not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs that you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like:

\n\n
from keras import backend as K\n\n# Create, compile and train model...\n\nfrozen_graph = freeze_session(K.get_session(),\n                              output_names=[out.op.name for out in model.outputs])\n
\n\n

Then you can write the graph to a file as usual with tf.train.write_graph:

\n\n
tf.train.write_graph(frozen_graph, \"some_directory\", \"my_model.pb\", as_text=False)\n
\n", "system": "" }, { "instruction": ""synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'." problem in TensorFlow", "input": "", "output": "

If you're using TF 2.0 a quick solution would be to downgrade your numpy to 1.16.4. (I used 1.17 and received the same warning messages).

\n\n
1. pip uninstall numpy \n2. pip install numpy==1.16.4\n
\n\n

See here (thanks to ymodak)

\n", "system": "" }, { "instruction": "pip3: command not found", "input": "", "output": "

You would need to install pip3.

\n

On Linux, run first sudo apt update. Then the command would be: sudo apt install python3-pip

\nOn Mac, using brew, first brew install python3
\nThen brew postinstall python3

\n

Try calling pip3 -V to see if it worked.

\n", "system": "" }, { "instruction": "Keras model.summary() object to string", "input": "", "output": "

With my version of Keras (2.0.6) and Python (3.5.0), this works for me:

\n\n
# Create an empty model\nfrom keras.models import Sequential\nmodel = Sequential()\n\n# Open the file\nwith open(filename + 'report.txt','w') as fh:\n    # Pass the file handle in as a lambda function to make it callable\n    model.summary(print_fn=lambda x: fh.write(x + '\\n'))\n
\n\n

This outputs the following lines to the file:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nTotal params: 0\nTrainable params: 0\nNon-trainable params: 0\n_________________________________________________________________\n
\n", "system": "" }, { "instruction": "How to "reset" tensorboard data after killing tensorflow instance", "input": "", "output": "

Note: The solution you've posted (erase TensorBoard's log files and kill the process) will work, but it isn't preferred, because it destroys historical information about your training.

\n\n

Instead, you can have each new training job write to a new subdirectory (of your top-level log directory). Then, TensorBoard will consider each job a new \"run\" and will create a nice comparison view so you can see how the training differed between iterations of your model.

\n\n

The following is an example from https://www.tensorflow.org/tensorboard/get_started:

\n\n
model = create_model()\n...\nmodel.compile(...)\n\nlog_dir = \"logs/fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)\n\nmodel.fit(..., callbacks=[tensorboard_callback])\n
\n", "system": "" }, { "instruction": "What is the difference between variable_scope and name_scope?", "input": "", "output": "

I had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:

\n\n
import tensorflow as tf\ndef scoping(fn, scope1, scope2, vals):\n    with fn(scope1):\n        a = tf.Variable(vals[0], name='a')\n        b = tf.get_variable('b', initializer=vals[1])\n        c = tf.constant(vals[2], name='c')\n        with fn(scope2):\n            d = tf.add(a * b, c, name='res')\n\n        print('\n  '.join([scope1, a.name, b.name, c.name, d.name]), '\n')\n    return d\n\nd1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])\nd2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])\n\nwith tf.Session() as sess:\n    writer = tf.summary.FileWriter('logs', sess.graph)\n    sess.run(tf.global_variables_initializer())\n    print(sess.run([d1, d2]))\n    writer.close()\n
\n\n

Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function I also print the names of all the variables. After that I execute the graph to get the resulting values and save event files to investigate them in TensorBoard. If you run this, you will get the following:

\n\n
scope_vars\n  scope_vars/a:0\n  scope_vars/b:0\n  scope_vars/c:0\n  scope_vars/res/res:0 \n\nscope_name\n  scope_name/a:0\n  b:0\n  scope_name/c:0\n  scope_name/res/res:0 \n
\n\n

You see a similar pattern if you open TensorBoard (as you see, b is outside the scope_name rectangle):\n\"enter

\n\n
\n\n

This gives you the answer:

\n\n

Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops, and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable(), because it assumes that you know exactly which variable, and in which scope, you want to use.

\n\n

The documentation on Sharing variables tells you that

\n\n
\n

tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().

\n
\n\n

The same documentation provides more details on how variable scope works and when it is useful.

\n", "system": "" }, { "instruction": "Tensorflow doesn't seem to see my gpu", "input": "", "output": "

I came across this same issue in jupyter notebooks. This could be an easy fix.

\n
$ pip uninstall tensorflow\n$ pip install tensorflow-gpu\n
\n

You can check if it worked with:

\n
tf.test.gpu_device_name()\n
\n

Update 2020

\n

It seems that TensorFlow 2.0+ comes with GPU capabilities, therefore\npip install tensorflow should be enough

\n", "system": "" }, { "instruction": "Tensorflow One Hot Encoder?", "input": "", "output": "

As of TensorFlow 0.8, there is now a native one-hot op, tf.one_hot that can convert a set of sparse labels to a dense one-hot representation. This is in addition to tf.nn.sparse_softmax_cross_entropy_with_logits, which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot.

\n\n

Previous answer, in case you want to do it the old way:\n@Salvador's answer is correct - there used to be no native op to do it. Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators:

\n\n
num_labels = 10\n\n# label_batch is a tensor of numeric labels to process\n# 0 <= label < num_labels\n\nsparse_labels = tf.reshape(label_batch, [-1, 1])\nderived_size = tf.shape(label_batch)[0]\nindices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1])\nconcated = tf.concat(1, [indices, sparse_labels])\noutshape = tf.pack([derived_size, num_labels])\nlabels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0)\n
\n\n

The output, labels, is a one-hot matrix of batch_size x num_labels.

\n\n
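Conceptually, the sparse-to-dense snippet above builds the same matrix as this plain-Python sketch (shown TF-free for clarity):

```python
def one_hot(labels, num_labels):
    # build a batch_size x num_labels matrix with a 1.0 at each label's index
    return [[1.0 if j == label else 0.0 for j in range(num_labels)]
            for label in labels]

print(one_hot([2, 0, 1], 3))
# [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
```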

Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the tf.nn.sparse_softmax_cross_entropy_with_logits op, which in some cases can let you do training without needing to convert to a one-hot encoding.

\n\n

Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num_labels component. If you don't need a dynamic batch size with derived_size, this can be simplified.

\n\n

Edited 2016-02-12 to change the assignment of outshape per comment below.

\n", "system": "" }, { "instruction": "Split a dataset created by Tensorflow dataset API in to Train and Test?", "input": "", "output": "

Assuming you have all_dataset variable of tf.data.Dataset type:

\n\n
test_dataset = all_dataset.take(1000) \ntrain_dataset = all_dataset.skip(1000)\n
\n\n

The test dataset now has the first 1000 elements and the rest go to training.
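The semantics of take/skip mirror slicing an iterable; a plain-Python sketch using itertools:

```python
from itertools import islice

def take(iterable, n):
    # first n elements, like dataset.take(n)
    return list(islice(iterable, n))

def skip(iterable, n):
    # everything after the first n elements, like dataset.skip(n)
    it = iter(iterable)
    next(islice(it, n, n), None)  # advance the iterator past n elements
    return list(it)

data = range(10)
print(take(data, 3))  # [0, 1, 2]
print(skip(data, 3))  # [3, 4, 5, 6, 7, 8, 9]
```

One caveat worth knowing: if all_dataset is shuffled with reshuffling on each iteration, the two splits can overlap across epochs, so shuffle before splitting with reshuffle_each_iteration=False, or split first and shuffle only the training set.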

\n", "system": "" }, { "instruction": "Keras - Difference between categorical_accuracy and sparse_categorical_accuracy", "input": "", "output": "

So in categorical_accuracy you need to specify your target (y) as a one-hot encoded vector (e.g. in case of 3 classes, when the true class is the second class, y should be (0, 1, 0)). In sparse_categorical_accuracy you should only provide an integer of the true class (in the previous example it would be 1, as class indexing is 0-based).
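The difference is only in how the target is encoded; here is a plain-Python sketch of the two metrics (simplified, ignoring ties and batching):

```python
def argmax(xs):
    return xs.index(max(xs))

def categorical_accuracy(y_true_onehot, y_pred):
    # targets are one-hot rows: compare argmax of target with argmax of prediction
    hits = sum(argmax(t) == argmax(p) for t, p in zip(y_true_onehot, y_pred))
    return hits / len(y_pred)

def sparse_categorical_accuracy(y_true_int, y_pred):
    # targets are plain integer class indices
    hits = sum(t == argmax(p) for t, p in zip(y_true_int, y_pred))
    return hits / len(y_pred)

preds = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]]
print(categorical_accuracy([[0, 1, 0], [0, 0, 1]], preds))  # 0.5
print(sparse_categorical_accuracy([1, 2], preds))           # 0.5
```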

\n", "system": "" }, { "instruction": "In TensorFlow, what is tf.identity used for?", "input": "", "output": "

After some stumbling I think I've noticed a single use case that fits all the examples I've seen. If there are other use cases, please elaborate with an example.

\n\n

Use case:

\n\n

Suppose you'd like to run an operator every time a particular Variable is evaluated. For example, say you'd like to add one to x every time the variable y is evaluated. It might seem like this will work:

\n\n
x = tf.Variable(0.0)\nx_plus_1 = tf.assign_add(x, 1)\n\nwith tf.control_dependencies([x_plus_1]):\n    y = x\ninit = tf.initialize_all_variables()\n\nwith tf.Session() as session:\n    init.run()\n    for i in xrange(5):\n        print(y.eval())\n
\n\n

It doesn't: it'll print 0, 0, 0, 0, 0. Instead, it seems that we need to add a new node to the graph within the control_dependencies block. So we use this trick:

\n\n
x = tf.Variable(0.0)\nx_plus_1 = tf.assign_add(x, 1)\n\nwith tf.control_dependencies([x_plus_1]):\n    y = tf.identity(x)\ninit = tf.initialize_all_variables()\n\nwith tf.Session() as session:\n    init.run()\n    for i in xrange(5):\n        print(y.eval())\n
\n\n

This works: it prints 1, 2, 3, 4, 5.

\n\n

If in the CIFAR-10 tutorial we dropped tf.identity, then loss_averages_op would never run.

\n", "system": "" }, { "instruction": "Module 'tensorflow' has no attribute 'contrib'", "input": "", "output": "

tf.contrib has moved out of TF starting TF 2.0 alpha.
\nTake a look at these tf 2.0 release notes https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0
\nYou can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script\nhttps://www.tensorflow.org/alpha/guide/upgrade

\n", "system": "" }, { "instruction": "tf.data.Dataset: how to get the dataset size (number of elements in an epoch)?", "input": "", "output": "

len(list(dataset)) works in eager mode, although that's obviously not a good general solution.

\n", "system": "" }, { "instruction": "Why the 6 in relu6?", "input": "", "output": "

From this reddit thread:

\n\n
\n

This is useful in making the networks ready for fixed-point inference.\n If you unbound the upper limit, you lose too many bits to the Q part\n of a Q.f number. Keeping the ReLUs bounded by 6 will let them take a\n max of 3 bits (upto 8) leaving 4/5 bits for .f

\n
\n\n

It seems, then, that 6 is just an arbitrary value chosen according to the number of bits you want to be able to compress your network's trained parameters into. As for why only the version with value 6 is implemented, I assume it's because that's the value that fits best in 8 bits, which is probably the most common use case.
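The operation itself is just a clipped ReLU; a plain-Python sketch:

```python
def relu6(x):
    # ReLU with the output clipped at 6: min(max(x, 0), 6)
    return min(max(x, 0.0), 6.0)

print([relu6(v) for v in [-3.0, 0.5, 6.0, 9.0]])  # [0.0, 0.5, 6.0, 6.0]
```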

\n", "system": "" }, { "instruction": "Unbalanced data and weighted cross entropy", "input": "", "output": "

Note that weighted_cross_entropy_with_logits is the weighted variant of sigmoid_cross_entropy_with_logits. Sigmoid cross entropy is typically used for binary classification. Yes, it can handle multiple labels, but sigmoid cross entropy basically makes a (binary) decision on each of them -- for example, for a face recognition net, those (not mutually exclusive) labels could be \"Does the subject wear glasses?\", \"Is the subject female?\", etc.

\n\n

In binary classification(s), each output channel corresponds to a binary (soft) decision. Therefore, the weighting needs to happen within the computation of the loss. This is what weighted_cross_entropy_with_logits does, by weighting one term of the cross-entropy over the other.

\n\n

In mutually exclusive multilabel classification, we use softmax_cross_entropy_with_logits, which behaves differently: each output channel corresponds to the score of a class candidate. The decision comes after, by comparing the respective outputs of each channel.

\n\n

Weighting in before the final decision is therefore a simple matter of modifying the scores before comparing them, typically by multiplication with weights. For example, for a ternary classification task,

\n\n
# your class weights\nclass_weights = tf.constant([[1.0, 2.0, 3.0]])\n# deduce weights for batch samples based on their true label\nweights = tf.reduce_sum(class_weights * onehot_labels, axis=1)\n# compute your (unweighted) softmax cross entropy loss\nunweighted_losses = tf.nn.softmax_cross_entropy_with_logits(onehot_labels, logits)\n# apply the weights, relying on broadcasting of the multiplication\nweighted_losses = unweighted_losses * weights\n# reduce the result to get your final loss\nloss = tf.reduce_mean(weighted_losses)\n
\n\n

You could also rely on tf.losses.softmax_cross_entropy to handle the last three steps.

\n\n

In your case, where you need to tackle data imbalance, the class weights could indeed be inversely proportional to their frequency in your train data. Normalizing them so that they sum up to one or to the number of classes also makes sense.

\n\n
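As a numeric sanity check, here is a plain-Python sketch of the weighted softmax cross entropy computed above (the logits and weights are made up for illustration). Note that scaling all class weights by a constant scales the loss by the same constant, which is why normalization is only a matter of convention:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_softmax_xent(onehot_labels, logits, class_weights):
    # per-sample cross entropy scaled by the weight of the sample's true class
    losses = []
    for y, z in zip(onehot_labels, logits):
        p = softmax(z)
        xent = -sum(yi * math.log(pi) for yi, pi in zip(y, p))
        w = sum(wi * yi for wi, yi in zip(class_weights, y))
        losses.append(w * xent)
    return sum(losses) / len(losses)

labels = [[1, 0, 0], [0, 0, 1]]   # made-up one-hot targets
logits = [[2.0, 1.0, 0.1], [0.5, 0.5, 2.0]]
loss = weighted_softmax_xent(labels, logits, [1.0, 2.0, 3.0])
print(loss > 0)  # True; tripling every class weight triples the loss
```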

Note that in the above, we penalized the loss based on the true label of the samples. We could also have penalized the loss based on the estimated labels by simply defining

\n\n
weights = class_weights\n
\n\n

and the rest of the code need not change thanks to broadcasting magic.

\n\n

In the general case, you would want weights that depend on the kind of error you make. In other words, for each pair of labels X and Y, you could choose how to penalize choosing label X when the true label is Y. You end up with a whole prior weight matrix, which results in weights above being a full (num_samples, num_classes) tensor. This goes a bit beyond what you want, but it might be useful to know nonetheless that only your definition of the weight tensor need to change in the code above.

\n", "system": "" }, { "instruction": "TensorFlow - Importing data from a TensorBoard TFEvent file?", "input": "", "output": "

As Fabrizio says, TensorBoard is a great tool for visualizing the contents of your summary logs. However, if you want to perform a custom analysis, you can use tf.train.summary_iterator() function to loop over all of the tf.Event and tf.Summary protocol buffers in the log:

\n\n
for summary in tf.train.summary_iterator(\"/path/to/log/file\"):\n    # Perform custom processing in here.\n
\n\n

UPDATE for tf2:

\n\n
from tensorflow.python.summary.summary_iterator import summary_iterator\n
\n\n

You need to import it explicitly; that module is not currently imported by default (as of 2.0.0-rc2).

\n", "system": "" }, { "instruction": "How to add if condition in a TensorFlow graph?", "input": "", "output": "

You're correct that the if statement doesn't work here, because the condition is evaluated at graph construction time, whereas presumably you want the condition to depend on the value fed to the placeholder at runtime. (In fact, it will always take the first branch, because condition > 0 evaluates to a Tensor, which is \"truthy\" in Python.)

\n\n

To support conditional control flow, TensorFlow provides the tf.cond() operator, which evaluates one of two branches, depending on a boolean condition. To show you how to use it, I'll rewrite your program so that condition is a scalar tf.int32 value for simplicity:

\n\n
x = tf.placeholder(tf.float32, shape=[None, ins_size**2*3], name=\"x_input\")\ncondition = tf.placeholder(tf.int32, shape=[], name=\"condition\")\nW = tf.Variable(tf.zeros([ins_size**2 * 3, label_option]), name=\"weights\")\nb = tf.Variable(tf.zeros([label_option]), name=\"bias\")\n\ny = tf.cond(condition > 0, lambda: tf.matmul(x, W) + b, lambda: tf.matmul(x, W) - b)\n
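The key point is that tf.cond takes callables, so only the selected branch's ops run. As a loose plain-Python analogy (ignoring graph construction entirely; the branch values are made up):

```python
def cond(pred, true_fn, false_fn):
    # like tf.cond: branches are callables, and only the selected one executes
    return true_fn() if pred else false_fn()

calls = []
def make_branch(name, value):
    def fn():
        calls.append(name)  # record which branch actually ran
        return value
    return fn

result = cond(3 > 0, make_branch("then", "xW + b"), make_branch("else", "xW - b"))
print(result, calls)  # xW + b ['then'] -- the other branch never ran
```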
\n", "system": "" }, { "instruction": "How do I use TensorFlow GPU?", "input": "", "output": "

Follow this tutorial: Tensorflow GPU. I did it and it works perfectly.

\n

Attention! Install CUDA version 9.0! Newer versions are not supported by tensorflow-gpu.

\n

Steps:

\n
    \n
  1. Uninstall your old tensorflow
  2. Install tensorflow-gpu: pip install tensorflow-gpu
  3. Install your Nvidia graphics card & drivers (you probably already have them)
  4. Download & install CUDA
  5. Download & install cuDNN
  6. Verify with a simple program
\n
from tensorflow.python.client import device_lib \nprint(device_lib.list_local_devices())\n
\n", "system": "" }, { "instruction": "Nvidia Cudatoolkit vs Conda Cudatoolkit", "input": "", "output": "

If using anaconda to install tensorflow-gpu, yes it will install cuda and cudnn for you in same conda environment as tensorflow-gpu. All you need to install yourself is the latest nvidia-driver (so that it works with the latest CUDA level and all older CUDA levels you use.)

\n\n

This has many advantages over the pip install tensorflow-gpu method:

\n\n
    \n
  1. Anaconda will always install the CUDA and CuDNN version that the TensorFlow code was compiled to use.
  2. You can have multiple conda environments with different levels of TensorFlow, CUDA, and CuDNN and just use conda activate to switch between them.
  3. You don't have to deal with installing CUDA and cuDNN manually at the system-wide level.
\n\n

The disadvantage when compared to pip install tensorflow-gpu, is the latest version of tensorflow is added to pypi weeks before Anaconda is able to update the conda recipe and publish their builds of the latest TensorFlow version.

\n", "system": "" }, { "instruction": ""Could not interpret optimizer identifier" error in Keras", "input": "", "output": "

The reason is that you are using the tensorflow.python.keras API for the model and layers, and keras.optimizers for SGD. These are two different Keras implementations: the version bundled with TensorFlow, and standalone Keras. They cannot work together. You have to change everything to one version. Then it should work.

\n", "system": "" }, { "instruction": "How do I check if keras is using gpu version of tensorflow?", "input": "", "output": "

You are using the GPU version. You can list the available tensorflow devices with (also check this question):

\n\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices()) # list of DeviceAttributes\n
\n\n

EDIT:

\n\n

With tensorflow >= 1.4 you can run the following function:

\n\n
import tensorflow as tf\ntf.test.is_gpu_available() # True/False\n\n# Or only check for gpu's with cuda support\ntf.test.is_gpu_available(cuda_only=True) \n
\n\n

EDIT 2:

\n\n

The above function is deprecated in tensorflow > 2.1. Instead you should use the following function:

\n\n
import tensorflow as tf\ntf.config.list_physical_devices('GPU')\n
\n\n
\n\n

NOTE:

\n\n

In your case both the cpu and gpu are available, if you use the cpu version of tensorflow the gpu will not be listed. In your case, without setting your tensorflow device (with tf.device(\"..\")), tensorflow will automatically pick your gpu!

\n\n

In addition, your sudo pip3 list clearly shows you are using tensorflow-gpu. If you had the tensorflow CPU version, the name would be something like tensorflow(1.1.0).

\n\n

Check this issue for information about the warnings.

\n", "system": "" }, { "instruction": "AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'", "input": "", "output": "

This function is deprecated. \nUse tf.compat.v1.reset_default_graph() instead.

\n\n

Update\nThis is not the only function to be out of date. Check out this answer for release notes and a conversion script.

\n", "system": "" }, { "instruction": "Convert Keras model to C++", "input": "", "output": "

To answer my own question and have a solution - I wrote a plain c++ solution called keras2cpp (its code available on github).

\n\n

In this solution you store the network architecture (in JSON) and weights (in HDF5). Then you can dump the network to a plain text file with the provided script. You can use the obtained text file with the network in pure C++ code. There are no dependencies on Python libraries or HDF5. It should work for the Theano and TensorFlow backends.

\n", "system": "" }, { "instruction": "Remove nodes from graph or reset entire default graph", "input": "", "output": "

Update 11/2/2016

\n\n

tf.reset_default_graph()

\n\n

Old stuff

\n\n

There's reset_default_graph, but it's not part of the public API (I think it should be; does someone want to file an issue on GitHub?)

\n\n

My work-around to reset things is this:

\n\n
from tensorflow.python.framework import ops\nops.reset_default_graph()\nsess = tf.InteractiveSession()\n
\n", "system": "" }, { "instruction": "What's the difference between scikit-learn and tensorflow? Is it possible to use them together?", "input": "", "output": "

TensorFlow is a library for constructing neural networks. scikit-learn contains ready-to-use algorithms. TF can work with a variety of data types: tabular, text, images, audio. scikit-learn is intended to work with tabular data.

\n

Yes, you can use both packages. But if you need only classic Multi-Layer implementation then the MLPClassifier and MLPRegressor available in scikit-learn is a very good choice. I have run a comparison of MLP implemented in TF vs Scikit-learn and there weren't significant differences and scikit-learn MLP works about 2 times faster than TF on CPU. You can read the details of the comparison in my blog post.

\n

Below the scatter plots of performance comparison:

\n

\"Tensorflow

\n

\"Tensorflow

\n", "system": "" }, { "instruction": "How to define max_queue_size, workers and use_multiprocessing in keras fit_generator()?", "input": "", "output": "

Q_0:

\n\n
\n

Question: Does this refer to how many batches are prepared on CPU? How is it related to workers? How to define it optimally?

\n
\n\n

From the link you posted, you can learn that your CPU keeps creating batches until the queue is full (at max_queue_size) or training stops. You want to have batches ready for your GPU to \"take\" so that the GPU doesn't have to wait for the CPU.\nAn ideal queue size is one large enough that your GPU always runs near its maximum and never has to wait for the CPU to prepare new batches.

\n\n
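The max_queue_size / workers interplay is the classic bounded producer-consumer pattern. A plain-Python toy (standard library only, not Keras internals) that mimics it:

```python
import queue
import threading

def producer(q, n_batches):
    # "CPU" side: keeps preparing batches; q.put blocks once the
    # queue already holds max_queue_size items.
    for i in range(n_batches):
        q.put("batch_%d" % i)

max_queue_size = 4
q = queue.Queue(maxsize=max_queue_size)
t = threading.Thread(target=producer, args=(q, 10))
t.start()

# "GPU" side: consumes batches as it needs them; ideally it never waits
consumed = [q.get() for _ in range(10)]
t.join()
print(len(consumed), consumed[0])
```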

Q_1:

\n\n
\n

Question: How do I find out how many batches my CPU can/should generate in parallel?

\n
\n\n

If you see that your GPU is idling and waiting for batches, try to increase the amount of workers and perhaps also the queue size.

\n\n

Q_2:

\n\n
\n

Do I have to set this parameter to true if I change workers? Does it relate to CPU usage?

\n
\n\n

Here is a practical analysis of what happens when you set it to True or False. Here is a recommendation to set it to False to prevent freezing (in my setup True works fine without freezing). Perhaps someone else can increase our understanding of the topic.

\n\n

In summary:

\n\n

Try not to have a sequential setup; try to enable the CPU to provide enough data for the GPU.

\n\n

Also: You could (should?) create several questions the next time, so that it is easier to answer them.

\n", "system": "" }, { "instruction": "Dimension of shape in conv1D", "input": "", "output": "

tl;dr: you need to reshape your data to have a spatial dimension for Conv1D to make sense:

\n
X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1) \n# now input can be set as \nmodel.add(Conv1D(2,2,activation='relu',input_shape=(30, 1)))\n
\n

Essentially reshaping a dataset that looks like this:

\n
features    \n.8, .1, .3  \n.2, .4, .6  \n.7, .2, .1  \n
\n

To:

\n
[[.8\n.1\n.3],\n\n[.2,\n .4,\n .6\n ],\n\n[.7,\n .2,\n .1]]\n \n
\n
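That reshape is a single NumPy call; a quick runnable check of the shapes:

```python
import numpy as np

# The 3-example, 3-feature table from above
X = np.array([[.8, .1, .3],
              [.2, .4, .6],
              [.7, .2, .1]])

X3 = np.expand_dims(X, axis=2)  # add a trailing "channel" axis of length 1
print(X.shape, X3.shape)  # (3, 3) (3, 3, 1)
```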

Explanation and examples

\n

Normally convolution works over spatial dimensions. The kernel is "convolved" over the dimension producing a tensor. In the case of Conv1D, the kernel is passed over the 'steps' dimension of every example.

\n
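Mechanically, a Conv1D filter with kernel_size=2 takes dot products while sliding along the steps dimension. A NumPy sketch of a single filter (illustrative numbers, no bias or activation):

```python
import numpy as np

# One example with 4 steps and 1 feature; one filter with kernel_size=2
steps = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, -0.5])

# Slide the kernel over the steps dimension, one dot product per position
out = np.array([np.dot(steps[i:i + 2], kernel) for i in range(len(steps) - 1)])
print(out)  # [-0.5 -0.5 -0.5]
```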

You will see Conv1D used in NLP, where steps is the number of words in the sentence (padded to some fixed maximum length). The words would be encoded as vectors of some fixed length (3 in the example below).

\n

Here is an example sentence:

\n
jack   .1   .3   -.52   |\nis     .05  .8,  -.7    |<--- kernel is `convolving` along this dimension.\na      .5   .31  -.2    |\nboy    .5   .8   -.4   \\|/\n
\n

And the way we would set the input to the conv in this case:

\n
maxlen = 4\ninput_dim = 3\nmodel.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim)))\n
\n

In your case, you will treat the features as the spatial dimensions with each feature having length 1. (see below)

\n

Here would be an example from your dataset

\n
att1   .04    |\natt2   .05    |  < -- kernel convolving along this dimension\natt3   .1     |       notice the features have length 1. each\natt4   .5    \\|/      example has these 4 features.\n
\n

And we would set the Conv1D example as:

\n
maxlen = num_features = 4 # this would be 30 in your case\ninput_dim = 1 # since this is the length of _each_ feature (as shown above)\n\nmodel.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim)))\n
\n

As you see, your dataset has to be reshaped into (569, 30, 1).\nUse:

\n
X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1) \n# now input can be set as \nmodel.add(Conv1D(2,2,activation='relu',input_shape=(30, 1)))\n
\n

Here is a full-fledged example that you can run (I'll use the Functional API)

\n
from keras.models import Model\nfrom keras.layers import Conv1D, Dense, MaxPool1D, Flatten, Input\nimport numpy as np\n\ninp =  Input(shape=(5, 1))\nconv = Conv1D(filters=2, kernel_size=2)(inp)\npool = MaxPool1D(pool_size=2)(conv)\nflat = Flatten()(pool)\ndense = Dense(1)(flat)\nmodel = Model(inp, dense)\nmodel.compile(loss='mse', optimizer='adam')\n\nprint(model.summary())\n\n# get some data\nX = np.expand_dims(np.random.randn(10, 5), axis=2)\ny = np.random.randn(10, 1)\n\n# fit model\nmodel.fit(X, y)\n
\n", "system": "" }, { "instruction": "tf.nn.conv2d vs tf.layers.conv2d", "input": "", "output": "

As GBY mentioned, they use the same implementation.

\n\n

There is a slight difference in the parameters.

\n\n

For tf.nn.conv2d:

\n\n
filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]\n
\n\n

For tf.layers.conv2d:

\n\n
filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).\n
\n\n

I would use tf.nn.conv2d when loading a pretrained model (example code: https://github.com/ry/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.

\n", "system": "" }, { "instruction": "How to set specific gpu in tensorflow?", "input": "", "output": "

There are 3 ways to achieve this:

\n\n
    \n
  1. Using the CUDA_VISIBLE_DEVICES environment variable.\nSetting CUDA_VISIBLE_DEVICES=\"1\" makes only device 1 visible, and setting CUDA_VISIBLE_DEVICES=\"0,1\" makes devices 0 and 1 visible. You can do this in Python by adding the line os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0,1\" after importing the os package.

  2. \n
  3. Using with tf.device('/gpu:2') when creating the graph. It will then run on GPU device 2.

  4. \n
  5. Using config = tf.ConfigProto(device_count = {'GPU': 1}) and then sess = tf.Session(config=config). Note that device_count limits the number of GPUs used, so this restricts TensorFlow to a single GPU; it does not select device 1.

  6. \n
\n", "system": "" }, { "instruction": "TensorFlow - regularization with L2 loss, how to apply to all weights, not just last one?", "input": "", "output": "

A shorter and scalable way of doing this would be:

\n\n
vars   = tf.trainable_variables() \nlossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars ]) * 0.001\n
\n\n

This basically sums the l2_loss of all your trainable variables. You could also make a dictionary where you specify only the variables you want to add to your cost and use the second line above. Then you can add lossL2 with your softmax cross entropy value in order to calculate your total loss.

\n\n

Edit: As mentioned by Piotr Dabkowski, the code above will also regularise biases. This can be avoided by adding an if statement in the second line:

\n\n
lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars\n                    if 'bias' not in v.name ]) * 0.001\n
\n\n

This can be used to exclude other variables.

\n", "system": "" }, { "instruction": "Simple way to visualize a TensorFlow graph in Jupyter?", "input": "", "output": "

Here's a recipe I copied from one of Alex Mordvintsev's deep dream notebooks at some point:

\n\n
from IPython.display import clear_output, Image, display, HTML\nimport numpy as np    \n\ndef strip_consts(graph_def, max_const_size=32):\n    \"\"\"Strip large constant values from graph_def.\"\"\"\n    strip_def = tf.GraphDef()\n    for n0 in graph_def.node:\n        n = strip_def.node.add() \n        n.MergeFrom(n0)\n        if n.op == 'Const':\n            tensor = n.attr['value'].tensor\n            size = len(tensor.tensor_content)\n            if size > max_const_size:\n                tensor.tensor_content = \"<stripped %d bytes>\"%size\n    return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n    \"\"\"Visualize TensorFlow graph.\"\"\"\n    if hasattr(graph_def, 'as_graph_def'):\n        graph_def = graph_def.as_graph_def()\n    strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n    code = \"\"\"\n        <script>\n          function load() {{\n            document.getElementById(\"{id}\").pbtxt = {data};\n          }}\n        </script>\n        <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n        <div style=\"height:600px\">\n          <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n        </div>\n    \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n    iframe = \"\"\"\n        <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n    \"\"\".format(code.replace('\"', '&quot;'))\n    display(HTML(iframe))\n
\n\n

Then to visualize current graph

\n\n
show_graph(tf.get_default_graph().as_graph_def())\n
\n\n

If your graph is saved as pbtxt, you could do

\n\n
gdef = tf.GraphDef()\nfrom google.protobuf import text_format\ntext_format.Merge(open(\"tf_persistent.pbtxt\").read(), gdef)\nshow_graph(gdef)\n
\n\n

You'll see something like this

\n\n

\"enter

\n", "system": "" }, { "instruction": "Tensorflow NaN bug?", "input": "", "output": "

Actually, it turned out to be something stupid. I'm posting this in case anyone else would run into a similar error.

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))\n
\n\n

is actually a horrible way of computing the cross-entropy. In some samples, certain classes could be excluded with certainty after a while, resulting in y_conv=0 for that sample. That's normally not a problem since you're not interested in those, but in the way cross_entropy is written there, it yields 0*log(0) for that particular sample/class. Hence the NaN.

\n\n

Replacing it with

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0)))\n
\n\n

solved all my problems.
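You can reproduce the failure mode with plain NumPy: 0 * log(0) evaluates to NaN, while the clipped version stays finite:

```python
import numpy as np

y_true = np.array([1.0, 0.0])   # one-hot label
y_conv = np.array([1.0, 0.0])   # a class excluded "with certainty"

with np.errstate(divide='ignore', invalid='ignore'):
    naive = -np.sum(y_true * np.log(y_conv))   # 0 * log(0) -> nan

clipped = -np.sum(y_true * np.log(np.clip(y_conv, 1e-10, 1.0)))
print(np.isnan(naive), np.isfinite(clipped))  # True True
```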

\n", "system": "" }, { "instruction": "Tensorflow vs OpenCV", "input": "", "output": "

The main difference is that TensorFlow is a framework for machine learning, and OpenCV is a library for computer vision. It can be a good start to check the link below to get a grasp for the difference between framework and library: What is the difference between a framework and a library?

\n\n

You can do image recognition with TensorFlow. Though it is suited for more general problems as well, such as: classification, clustering and regression.

\n\n

I guess people downvoted because this question might be more relevant to: https://datascience.stackexchange.com/

\n", "system": "" }, { "instruction": "No module named 'tqdm'", "input": "", "output": "

You need to install the tqdm module; you can do it using pip:

\n\n
pip install tqdm\n
\n\n

For more info, see tqdm.

\n", "system": "" }, { "instruction": "How to understand static shape and dynamic shape in TensorFlow?", "input": "", "output": "

Sometimes the shape of a tensor depends on a value that is computed at runtime. Let's take the following example, where x is defined as a tf.placeholder() vector with four elements:

\n
x = tf.placeholder(tf.int32, shape=[4])\nprint x.get_shape()\n# ==> '(4,)'\n
\n

The value of x.get_shape() is the static shape of x, and the (4,) means that it is a vector of length 4. Now let's apply the tf.unique() op to x

\n
y, _ = tf.unique(x)\nprint y.get_shape()\n# ==> '(?,)'\n
\n

The (?,) means that y is a vector of unknown length. Why is it unknown? tf.unique(x) returns the unique values from x, and the values of x are unknown because it is a tf.placeholder(), so it doesn't have a value until you feed it. Let's see what happens if you feed two different values:

\n
sess = tf.Session()\nprint sess.run(y, feed_dict={x: [0, 1, 2, 3]}).shape\n# ==> '(4,)'\nprint sess.run(y, feed_dict={x: [0, 0, 0, 0]}).shape\n# ==> '(1,)'\n
\n

Hopefully this makes it clear that a tensor can have a different static and dynamic shape. The dynamic shape is always fully defined\u2014it has no ? dimensions\u2014but the static shape can be less specific. This is what allows TensorFlow to support operations like tf.unique() and tf.dynamic_partition(), which can have variable-sized outputs, and are used in advanced applications.

\n

Finally, the tf.shape() op can be used to get the dynamic shape of a tensor and use it in a TensorFlow computation:

\n
z = tf.shape(y)\nprint sess.run(z, feed_dict={x: [0, 1, 2, 3]})\n# ==> [4]\nprint sess.run(z, feed_dict={x: [0, 0, 0, 0]})\n# ==> [1]\n
\n

Here's a schematic image showing both:\n\"enter

\n", "system": "" }, { "instruction": "Error running basic tensorflow example", "input": "", "output": "

From the path in your stack trace (/git/tensorflow/tensorflow/\u2026), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) pywrap_tensorflow library, which is installed in a different directory.

\n\n

A common solution is to cd out of the /git/tensorflow directory before starting python or ipython.

\n", "system": "" }, { "instruction": "ValueError: Shapes (None, 1) and (None, 2) are incompatible", "input": "", "output": "

I was facing the same problem.\nMy shapes were:

\n
shape of X (271, 64, 64, 3)\nshape of y (271,)\nshape of trainX (203, 64, 64, 3)\nshape of trainY (203, 1)\nshape of testX (68, 64, 64, 3)\nshape of testY (68, 1)\n
\n

and

\n
loss="categorical_crossentropy"\n
\n

I changed it to

\n
loss="sparse_categorical_crossentropy"\n
\n

and it worked like a charm for me
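The underlying difference: categorical_crossentropy expects one-hot targets of shape (N, num_classes), while sparse_categorical_crossentropy takes integer class indices. A NumPy sketch of the two label formats (illustrative values):

```python
import numpy as np

# Integer labels of shape (N, 1) -- what trainY/testY above look like
y_sparse = np.array([[0], [1], [1]])

# One-hot labels of shape (N, num_classes) -- what
# categorical_crossentropy would expect instead
num_classes = 2
y_onehot = np.eye(num_classes)[y_sparse.reshape(-1)]
print(y_onehot.shape)  # (3, 2)
```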

\n", "system": "" }, { "instruction": "tf.data with multiple inputs / outputs in Keras", "input": "", "output": "

I'm not using Keras, but I would go with tf.data.Dataset.from_generator(), like:

\n\n
def _input_fn():\n  sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64)\n  sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64)\n  sent1 = np.reshape(sent1, (8, 1, 1))\n  sent2 = np.reshape(sent2, (8, 1, 1))\n\n  labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64)\n  labels = np.reshape(labels, (8, 1))\n\n  def generator():\n    for s1, s2, l in zip(sent1, sent2, labels):\n      yield {\"input_1\": s1, \"input_2\": s2}, l\n\n  dataset = tf.data.Dataset.from_generator(generator, output_types=({\"input_1\": tf.int64, \"input_2\": tf.int64}, tf.int64))\n  dataset = dataset.batch(2)\n  return dataset\n\n...\n\nmodel.fit(_input_fn(), epochs=10, steps_per_epoch=4)\n
\n\n

This generator can iterate over your data, e.g. text files or NumPy arrays, and yield one example on every call. In this example, I assume that the words of the sentences have already been converted to indices in the vocabulary.

\n\n

Edit:\nSince OP asked, it should also be possible with Dataset.from_tensor_slices():

\n\n
def _input_fn():\n  sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64)\n  sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64)\n  sent1 = np.reshape(sent1, (8, 1))\n  sent2 = np.reshape(sent2, (8, 1))\n\n  labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64)\n  labels = np.reshape(labels, (8))\n\n  dataset = tf.data.Dataset.from_tensor_slices(({\"input_1\": sent1, \"input_2\": sent2}, labels))\n  dataset = dataset.batch(2, drop_remainder=True)\n  return dataset\n
\n", "system": "" }, { "instruction": "How to check if keras tensorflow backend is GPU or CPU version?", "input": "", "output": "

Also you can check using Keras backend function:

\n\n
from keras import backend as K\nK.tensorflow_backend._get_available_gpus()\n
\n\n

I tested this on Keras 2.1.1.

\n", "system": "" }, { "instruction": "TensorFlow: Blas GEMM launch failed", "input": "", "output": "

This worked for me on TensorFlow 2.1.0 (per: https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth)

\n
import tensorflow as tf\nphysical_devices = tf.config.list_physical_devices('GPU') \nfor device in physical_devices:\n    tf.config.experimental.set_memory_growth(device, True)\n
\n", "system": "" }, { "instruction": "Negative dimension size caused by subtracting 3 from 1 for 'Conv2D'", "input": "", "output": "

Your issue comes from the image_ordering_dim in keras.json.

\n\n

From Keras Image Processing doc:

\n\n
\n

dim_ordering: One of {\"th\", \"tf\"}. \"tf\" mode means that the images should have shape (samples, height, width, channels), \"th\" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be \"tf\".

\n
\n\n

Keras maps the convolution operation to the chosen backend (theano or tensorflow). However, both backends have made different choices for the ordering of the dimensions. If your image batch is of N images of HxW size with C channels, theano uses the NCHW ordering while tensorflow uses the NHWC ordering.

\n\n
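Converting a batch between the two orderings is just a transpose; a NumPy sketch:

```python
import numpy as np

batch_nchw = np.zeros((8, 3, 28, 28))          # (N, C, H, W) -- Theano-style
batch_nhwc = batch_nchw.transpose(0, 2, 3, 1)  # (N, H, W, C) -- TensorFlow-style
print(batch_nhwc.shape)  # (8, 28, 28, 3)
```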

Keras allows you to choose which ordering you prefer and will do the conversion to map to the backends behind. But if you choose image_ordering_dim=\"th\" it expects Theano-style ordering (NCHW, the one you have in your code) and if image_ordering_dim=\"tf\" it expects tensorflow-style ordering (NHWC).

\n\n

Since your image_ordering_dim is set to \"tf\", if you reshape your data to the tensorflow style it should work:

\n\n
X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)\nX_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)\n
\n\n

and

\n\n
input_shape=(img_cols, img_rows, 1)\n
\n", "system": "" }, { "instruction": "Is there a way to suppress the messages TensorFlow prints?", "input": "", "output": "

UPDATE (beyond 1.14): see my more thorough answer here (this is a dupe question anyway): https://stackoverflow.com/a/38645250/6557588

\n

In addition to Wintro's answer, you can also disable/suppress TensorFlow logs from the C side (i.e. the uglier ones starting with single characters: I, E, etc.); the open issue regarding logging has been updated to state that you can now control logging via an environment variable. You can now change the level by setting the environment variable called TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown), but can be set to 1 to filter out INFO logs, 2 to additionally filter out WARNING logs, and 3 to additionally filter out ERROR logs. It appears to be in master now, and will likely be a part of future versions (i.e. versions after r0.11). See this page for more information. Here is an example of changing the verbosity using Python:

\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}\nimport tensorflow as tf\n
\n

You can set this environmental variable in the environment that you run your script in. For example, with bash this can be in the file ~/.bashrc, /etc/environment, /etc/profile, or in the actual shell as:

\n
TF_CPP_MIN_LOG_LEVEL=2 python my_tf_script.py\n
\n", "system": "" }, { "instruction": "tf.shape() get wrong shape in tensorflow", "input": "", "output": "

tf.shape(input, name=None) returns a 1-D integer tensor representing the shape of input.

\n\n

You're looking for: x.get_shape() that returns the TensorShape of the x variable.

\n\n

Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/

\n", "system": "" }, { "instruction": "How to interpret Poolallocator messages in tensorflow?", "input": "", "output": "

TensorFlow has multiple memory allocators, for memory that will be used in different ways. Their behavior has some adaptive aspects.

\n\n

In your particular case, since you're using a GPU, there is a PoolAllocator for CPU memory that is pre-registered with the GPU for fast DMA. A tensor that is expected to be transferred from CPU to GPU, e.g., will be allocated from this pool.

\n\n

The PoolAllocators attempt to amortize the cost of calling a more expensive underlying allocator by keeping around a pool of allocated then freed chunks that are eligible for immediate reuse. Their default behavior is to grow slowly until the eviction rate drops below some constant. (The eviction rate is the proportion of free calls where we return an unused chunk from the pool to the underlying pool in order not to exceed the size limit.) In the log messages above, you see \"Raising pool_size_limit_\" lines that show the pool size growing. Assuming that your program actually has a steady state behavior with a maximum size collection of chunks it needs, the pool will grow to accommodate it, and then grow no more. It behaves this way rather than simply retaining all chunks ever allocated so that sizes needed only rarely, or only during program startup, are less likely to be retained in the pool.

\n\n

These messages should only be a cause for concern if you run out of memory. In such a case the log messages may help diagnose the problem. Note also that peak execution speed may only be attained after the memory pools have grown to the proper size.

\n", "system": "" }, { "instruction": "WARNING:tensorflow:sample_weight modes were coerced from ... to ['...']", "input": "", "output": "

This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this:

\n\n
return inputs, targets\n
\n\n

And now I just changed it to the following to make the warning go away:

\n\n
return inputs, targets, [None]\n
\n\n

I don't know if this is relevant, but my model uses 3 inputs, so my inputs variable is actually a list of 3 numpy arrays. targets is just a single numpy array.

\n\n

In any case, it's just a warning. The training works fine either way.

\n\n

Edit for TensorFlow 2.2:

\n\n

This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with AttributeError: 'NoneType' object has no attribute 'shape'. So undo the above fix when upgrading to 2.2.

\n", "system": "" }, { "instruction": "Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input", "input": "", "output": "

Have you checked whether your training/testing data and training/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists.
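A minimal sketch of the fix, converting plain lists to arrays before calling fit (illustrative data):

```python
import numpy as np

X_train = [[0.1, 0.2], [0.3, 0.4]]  # plain Python lists...
y_train = [0, 1]

X_train = np.asarray(X_train)       # ...converted before calling model.fit
y_train = np.asarray(y_train)
print(type(X_train).__name__, X_train.shape)  # ndarray (2, 2)
```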

\n", "system": "" }, { "instruction": "This model has not yet been built error on model.summary()", "input": "", "output": "

The error says what to do:

\n\n
\n

This model has not yet been built. Build the model first by calling build()

\n
\n\n
model.build(input_shape) # `input_shape` is the shape of the input data\n                         # e.g. input_shape = (None, 32, 32, 3)\nmodel.summary()\n
\n", "system": "" }, { "instruction": "Why can I not import Tensorflow.contrib I get an error of No module named 'tensorflow.python.saved", "input": "", "output": "

For anyone trying old code from GitHub written for TensorFlow 1.x while running TensorFlow 2.0.x, please note that tf.contrib no longer exists in TensorFlow 2.0.x and its modules were moved.
\nPlease google the name of the module without the tf.contrib part to find its new location, and migrate your code accordingly by correcting the import statement.

\n\n

Hope this helped!

\n", "system": "" }, { "instruction": "Keras ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5", "input": "", "output": "

The problem is input_shape.

\n\n

It should actually contain 3 dimensions only. And internally keras will add the batch dimension making it 4.

\n\n

Since you probably used input_shape with 4 dimensions (batch included), keras is adding the 5th.

\n\n

You should use input_shape=(32,32,1).
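In other words, Keras feeds the network 4-D batches while input_shape names only the per-sample part; a NumPy sketch:

```python
import numpy as np

# A batch of 16 grayscale 32x32 images, as Keras sees it internally
batch = np.zeros((16, 32, 32, 1))  # (batch, height, width, channels)

# input_shape should describe a single sample, i.e. everything but the batch axis
print(batch.shape[1:])  # (32, 32, 1)
```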

\n", "system": "" }, { "instruction": "Tensorflow dense gradient explanation?", "input": "", "output": "

This warning is printed when a sparse tf.IndexedSlices object is implicitly converted to a dense tf.Tensor. This typically happens when one op (usually tf.gather()) backpropagates a sparse gradient, but the op that receives it does not have a specialized gradient function that can handle sparse gradients. As a result, TensorFlow automatically densifies the tf.IndexedSlices, which can have a devastating effect on performance if the tensor is large.

\n\n

To fix this problem, you should try to ensure that the params input to tf.gather() (or the params inputs to tf.nn.embedding_lookup()) is a tf.Variable. Variables can receive the sparse updates directly, so no conversion is needed. Although tf.gather() (and tf.nn.embedding_lookup()) accept arbitrary tensors as inputs, this may lead to a more complicated backpropagation graph, resulting in implicit conversion.

\n", "system": "" }, { "instruction": "List of tensor names in graph in Tensorflow", "input": "", "output": "

The paper is not accurately reflecting the model. If you download the source from arxiv it has an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model.

\n\n

To answer your first question, sess.graph.get_operations() gives you a list of operations. For an op, op.name gives you the name and op.values() gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a \":0\" appended to it, so pool_3:0 is the tensor produced by the final pooling op.)

\n", "system": "" }, { "instruction": "How do I convert a directory of jpeg images to TFRecords file in tensorflow?", "input": "", "output": "

I hope this helps:

\n\n
import numpy as np\nimport tensorflow as tf\nfrom PIL import Image\n\nfilename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) #  list of files to read\n\nreader = tf.WholeFileReader()\nkey, value = reader.read(filename_queue)\n\nmy_img = tf.image.decode_png(value) # use decode_png or decode_jpeg decoder based on your files.\n\ninit_op = tf.initialize_all_variables()\nwith tf.Session() as sess:\n  sess.run(init_op)\n\n  # Start populating the filename queue.\n  coord = tf.train.Coordinator()\n  threads = tf.train.start_queue_runners(coord=coord)\n\n  for i in range(1): # length of your filename list\n    image = my_img.eval() # here is your image Tensor :)\n\n  print(image.shape)\n  Image.fromarray(np.asarray(image)).show()\n\n  coord.request_stop()\n  coord.join(threads)\n
\n\n

For getting all images as an array of tensors use the following code example.

\n\n

Github repo of ImageFlow

\n\n
\n\n

Update:

\n\n

In the previous answer I just told how to read an image in TF format, but not saving it in TFRecords. For that you should use:

\n\n
def _int64_feature(value):\n  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))\n\n\ndef _bytes_feature(value):\n  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n# images and labels array as input\ndef convert_to(images, labels, name):\n  num_examples = labels.shape[0]\n  if images.shape[0] != num_examples:\n    raise ValueError(\"Images size %d does not match label size %d.\" %\n                     (images.shape[0], num_examples))\n  rows = images.shape[1]\n  cols = images.shape[2]\n  depth = images.shape[3]\n\n  filename = os.path.join(FLAGS.directory, name + '.tfrecords')\n  print('Writing', filename)\n  writer = tf.python_io.TFRecordWriter(filename)\n  for index in range(num_examples):\n    image_raw = images[index].tostring()\n    example = tf.train.Example(features=tf.train.Features(feature={\n        'height': _int64_feature(rows),\n        'width': _int64_feature(cols),\n        'depth': _int64_feature(depth),\n        'label': _int64_feature(int(labels[index])),\n        'image_raw': _bytes_feature(image_raw)}))\n    writer.write(example.SerializeToString())\n
\n\n

More info here

\n\n

And you read the data like this:

\n\n
# Remember to generate a file name queue of you 'train.TFRecord' file path\ndef read_and_decode(filename_queue):\n  reader = tf.TFRecordReader()\n  _, serialized_example = reader.read(filename_queue)\n  features = tf.parse_single_example(\n    serialized_example,\n    dense_keys=['image_raw', 'label'],\n    # Defaults are not specified since both keys are required.\n    dense_types=[tf.string, tf.int64])\n\n  # Convert from a scalar string tensor (whose single string has\n  image = tf.decode_raw(features['image_raw'], tf.uint8)\n\n  image = tf.reshape(image, [my_cifar.n_input])\n  image.set_shape([my_cifar.n_input])\n\n  # OPTIONAL: Could reshape into a 28x28 image and apply distortions\n  # here.  Since we are not applying any distortions in this\n  # example, and the next step expects the image to be flattened\n  # into a vector, we don't bother.\n\n  # Convert from [0, 255] -> [-0.5, 0.5] floats.\n  image = tf.cast(image, tf.float32)\n  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5\n\n  # Convert label from a scalar uint8 tensor to an int32 scalar.\n  label = tf.cast(features['label'], tf.int32)\n\n  return image, label\n
\n", "system": "" }, { "instruction": "How do I install TensorFlow's tensorboard?", "input": "", "output": "

The steps to install Tensorflow are here: https://www.tensorflow.org/install/

\n\n

For example, on Linux for CPU-only (no GPU), you would type this command:

\n\n
pip install -U pip\npip install tensorflow\n
\n\n

Since TensorFlow depends on TensorBoard, running the following command should not be necessary:

\n\n
pip install tensorboard\n
\n", "system": "" }, { "instruction": "What is the use of a *.pb file in TensorFlow and how does it work?", "input": "", "output": "

pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.

\n

Given a pb file, you can load it as follows:

\n
def load_pb(path_to_pb):\n    with tf.gfile.GFile(path_to_pb, "rb") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(graph_def, name='')\n        return graph\n
\n

Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with

\n
input = graph.get_tensor_by_name('input:0')\noutput = graph.get_tensor_by_name('output:0')\n
\n

and use regular TensorFlow routine like:

\n
sess.run(output, feed_dict={input: some_data})\n
\n", "system": "" }, { "instruction": "Why is the accuracy for my Keras model always 0 when training?", "input": "", "output": "

Your model seems to correspond to a regression model for the following reasons:

\n\n\n\n

However, the metric that you use, metrics=['accuracy'], corresponds to a classification problem. If you want to do regression, remove metrics=['accuracy']. That is, use

\n\n
model.compile(optimizer='adam',loss='mean_squared_error')\n
\n\n

Here is a list of keras metrics for regression and classification (taken from this blog post):

\n\n
\n

Keras Regression Metrics

\n \n

\u2022Mean Squared Error: mean_squared_error, MSE or mse

\n \n

\u2022Mean Absolute Error: mean_absolute_error, MAE, mae

\n \n

\u2022Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE,\n mape

\n \n

\u2022Cosine Proximity: cosine_proximity, cosine

\n \n

Keras Classification Metrics

\n \n

\u2022Binary Accuracy: binary_accuracy, acc

\n \n

\u2022Categorical Accuracy: categorical_accuracy, acc

\n \n

\u2022Sparse Categorical Accuracy: sparse_categorical_accuracy

\n \n

\u2022Top k Categorical Accuracy: top_k_categorical_accuracy (requires you\n specify a k parameter)

\n \n

\u2022Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy\n (requires you specify a k parameter)

\n
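For intuition, the regression metrics reduce to simple averages over the errors; a NumPy sketch of what mean_squared_error and mean_absolute_error compute:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

mse = np.mean((y_true - y_pred) ** 2)   # mean_squared_error
mae = np.mean(np.abs(y_true - y_pred))  # mean_absolute_error
print(round(float(mse), 4), round(float(mae), 4))  # 0.4167 0.5
```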
\n", "system": "" }, { "instruction": "How to approach a number guessing game (with a twist) algorithm?", "input": "", "output": "

We'll combine graph theory and probability:

\n\n

On the 1st day, build the set of all feasible solutions. Let's denote the solution set as A1={a1(1), a1(2),...,a1(n)}.

\n\n

On the second day you can again build the solutions set A2.

\n\n

Now, for each element in A2, you'll need to check whether it can be reached from each element of A1 (given the x% tolerance). If so, connect A2(n) to A1(m). If a node can't be reached from any node in A1, you can delete it.

\n\n

Basically we are building a connected directed acyclic graph.

\n\n

All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).

\n\n

Sure, some nodes appear in more paths than other nodes. The probability for each node can be directly deduced based on the number of paths that contain this node.

\n\n

By assigning a weight to each node, equal to the number of paths that lead to this node, there is no need to keep all history, but only the previous day.
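That bookkeeping can be sketched in pure Python: keep, for each day, a map from feasible state to the number of paths reaching it, and fold the next day's candidates into it. The state encoding and the reachability test below are placeholders for illustration, not the actual game rules:

```python
def advance(path_counts, next_states, can_reach):
    # path_counts: {state: number of paths reaching it on day m}.
    # next_states: feasible states on day m+1.
    # can_reach(a, b): True if state b is reachable from state a
    # within the tolerance.
    # Returns day m+1 path counts, deleting unreachable nodes.
    new_counts = {}
    for b in next_states:
        total = sum(c for a, c in path_counts.items() if can_reach(a, b))
        if total:  # nodes reachable from no previous node are dropped
            new_counts[b] = total
    return new_counts

# Toy example: states are ints, "reachable" means within +/-1.
day1 = {3: 1, 5: 1}
day2 = advance(day1, [2, 4, 9], lambda a, b: abs(a - b) <= 1)
print(day2)  # {2: 1, 4: 2} -- 9 was unreachable and got deleted
```

The per-node probability is then each node's count divided by the sum of counts for that day.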

\n\n

Also, have a look at non-negative-values linear diophantine equations - a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.

\n", "system": "" }, { "instruction": ""Could not load dynamic library 'libcudnn.so.8'" when running tensorflow on ubuntu 20.04", "input": "", "output": "

So I had the same issue. As the comments say, it's because you need to install CUDNN. For that, there is a guide here.

\n

But as I know already your distro (Ubuntu 20.04) I can give you the command lines already:

\n
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin\nsudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600\nexport last_public_key=3bf863cc # SEE NOTE BELOW\nsudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/${last_public_key}.pub\nsudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"\nsudo apt-get update\nsudo apt-get install libcudnn8\nsudo apt-get install libcudnn8-dev\n
\n

where ${last_public_key} is the last public key (file with .pub extension) published on https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/. (As of March 8th 2023, when this post was edited, it was 3bf863cc).

\n

And if you want to install a specific version, the last 2 commands would be replaced with

\n
sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}\nsudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}\n
\n

where\n${cudnn_version} is for example 8.2.4.* and ${cuda_version} is for example cuda11.0 (I see you have 11.0 in the nvidia-smi output; I have not tested that exact version, as mine was 11.4, but I guess it should work OK)

\n", "system": "" }, { "instruction": "tensorflow on GPU: no known devices, despite cuda's deviceQuery returning a "PASS" result", "input": "", "output": "

From the log output, it looks like you are running the CPU version of TensorFlow (PyPI: tensorflow), and not the GPU version (PyPI: tensorflow-gpu). Running the GPU version would either log information about the CUDA libraries, or an error if it failed to load them or open the driver.

\n\n

If you run the following commands, you should be able to use the GPU in subsequent runs:

\n\n
$ pip uninstall tensorflow\n$ pip install tensorflow-gpu\n
\n", "system": "" }, { "instruction": "ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory", "input": "", "output": "

I downloaded cuda 10.0 from the following link\nCUDA 10.0

\n\n

Then I installed it using the following commands:

\n\n
sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb\nsudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub\nsudo apt-get update\nsudo apt-get install cuda-10-0\n
\n\n

I then installed cudnn v7.5.0 for CUDA 10.0 by going to the \nCUDNN download link; you need to log on with an account.

\n\n

After choosing the correct version, I downloaded it via the CUDNN power link.\nAfter that, I added the include and lib files for cudnn as follows:

\n\n
sudo cp -P cuda/targets/ppc64le-linux/include/cudnn.h /usr/local/cuda-10.0/include/\nsudo cp -P cuda/targets/ppc64le-linux/lib/libcudnn* /usr/local/cuda-10.0/lib64/\nsudo chmod a+r /usr/local/cuda-10.0/lib64/libcudnn*\n
\n\n

I then modified .bashrc with the lib and bin paths for cuda 10.0; if they are not there already, you need to add them to .bashrc:

\n\n
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
\n\n

And after all these steps, I managed to import tensorflow in python3 successfully.

\n", "system": "" }, { "instruction": "Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory", "input": "", "output": "

Try reducing the batch_size attribute to a small number (like 1, 2 or 3).\nExample:

\n\n
train_generator = data_generator.flow_from_directory(\n    'path_to_the_training_set',\n    target_size = (IMG_SIZE,IMG_SIZE),\n    batch_size = 2,\n    class_mode = 'categorical'\n    )\n
\n", "system": "" }, { "instruction": "What is a batch in TensorFlow?", "input": "", "output": "

Let's say you want to do digit recognition (MNIST) and you have defined your architecture of the network (CNNs). Now, you can start feeding the images from the training data one by one to the network, get the prediction (up to this step it's called doing inference), compute the loss, compute the gradient, and then update the parameters of your network (i.e. weights and biases) and then proceed with the next image ... This way of training the model is sometimes called online learning.

\n\n

But, you want the training to be faster, the gradients to be less noisy, and also to take advantage of the power of GPUs, which are efficient at doing array operations (nD-arrays to be specific). So, what you do instead is feed in, say, 100 images at a time (the choice of this size is up to you, i.e. it's a hyperparameter, and depends on your problem too). For instance, take a look at the picture below (Author: Martin Gorner):

\n\n

\"Batch

\n\n

Here, since you're feeding in 100 images(28x28) at a time (instead of 1 as in the online training case), the batch size is 100. Oftentimes this is called the mini-batch size or simply mini-batch.
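The shape bookkeeping is the whole trick: stacking inputs row-wise means one matrix multiply produces one prediction row per example. A tiny pure-Python sketch (a 3-example batch with 2 features each, standing in for the 100 images of 784 pixels):

```python
def matmul(a, b):
    # Plain-Python matrix multiply: (n, k) x (k, m) -> (n, m).
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# A "batch" of 3 inputs with 2 features each, and a (2, 2) weight matrix.
batch = [[1, 0], [0, 1], [1, 1]]
weights = [[2, 3], [4, 5]]

out = matmul(batch, weights)
print(out)       # one output row per input in the batch
print(len(out))  # 3 -- batch size in, batch size out
```

A GPU does the same thing, just over (100, 784) x (784, ...) arrays with highly optimized kernels.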

\n\n
\n\n

Also see the picture below (Author: Martin Gorner):

\n\n

\"batch

\n\n

Now, the matrix multiplication will all just work out perfectly fine and you will also be taking advantage of the highly optimized array operations and hence achieve faster training time.

\n\n

If you observe the above picture, it doesn't matter that much whether you give 100 or 256 or 2048 or 10000 (batch size) images as long as it fits in the memory of your (GPU) hardware. You'll simply get that many predictions.

\n\n

But, please keep in mind that this batch size influences the training time, the error that you achieve, the gradient shifts, etc. There is no general rule of thumb as to which batch size works out best. Just try a few sizes and pick the one which works best for you. But try not to use large batch sizes since it will overfit the data. People commonly use mini-batch sizes of 32, 64, 128, 256, 512, 1024, 2048.

\n\n
\n\n

Bonus: To get a good grasp of how crazy you can go with this batch size, please give this paper a read: weird trick for parallelizing CNNs

\n", "system": "" }, { "instruction": "Adjust Single Value within Tensor -- TensorFlow", "input": "", "output": "

UPDATE: TensorFlow 1.0 includes a tf.scatter_nd() operator, which can be used to create delta below without creating a tf.SparseTensor.

\n\n
\n\n

This is actually surprisingly tricky with the existing ops! Perhaps somebody can suggest a nicer way to wrap up the following, but here's one way to do it.

\n\n

Let's say you have a tf.constant() tensor:

\n\n
c = tf.constant([[0.0, 0.0, 0.0],\n                 [0.0, 0.0, 0.0],\n                 [0.0, 0.0, 0.0]])\n
\n\n

...and you want to add 1.0 at location [1, 1]. One way you could do this is to define a tf.SparseTensor, delta, representing the change:

\n\n
indices = [[1, 1]]  # A list of coordinates to update.\n\nvalues = [1.0]  # A list of values corresponding to the respective\n                # coordinate in indices.\n\nshape = [3, 3]  # The shape of the corresponding dense tensor, same as `c`.\n\ndelta = tf.SparseTensor(indices, values, shape)\n
\n\n

Then you can use the tf.sparse_tensor_to_dense() op to make a dense tensor from delta and add it to c:

\n\n
result = c + tf.sparse_tensor_to_dense(delta)\n\nsess = tf.Session()\nsess.run(result)\n# ==> array([[ 0.,  0.,  0.],\n#            [ 0.,  1.,  0.],\n#            [ 0.,  0.,  0.]], dtype=float32)\n
\n", "system": "" }, { "instruction": "What do I need K.clear_session() and del model for (Keras with Tensorflow-gpu)?", "input": "", "output": "

K.clear_session() is useful when you're creating multiple models in succession, such as during hyperparameter search or cross-validation. Each model you train adds nodes (potentially numbering in the thousands) to the graph. TensorFlow executes the entire graph whenever you (or Keras) call tf.Session.run() or tf.Tensor.eval(), so your models will become slower and slower to train, and you may also run out of memory. Clearing the session removes all the nodes left over from previous models, freeing memory and preventing slowdown.

\n\n
\n\n

Edit 21/06/19:

\n\n

TensorFlow is lazy-evaluated by default. TensorFlow operations aren't evaluated immediately: creating a tensor or doing some operations to it creates nodes in a dataflow graph. The results are calculated by evaluating the relevant parts of the graph in one go when you call tf.Session.run() or tf.Tensor.eval(). This is so TensorFlow can build an execution plan that allocates operations that can be performed in parallel to different devices. It can also fold adjacent nodes together or remove redundant ones (e.g. if you concatenated two tensors and later split them apart again unchanged). For more details, see https://www.tensorflow.org/guide/graphs

\n\n

All of your TensorFlow models are stored in the graph as a series of tensors and tensor operations. The basic operation of machine learning is tensor dot product - the output of a neural network is the dot product of the input matrix and the network weights. If you have a single-layer perceptron and 1,000 training samples, then each epoch creates at least 1,000 tensor operations. If you have 1,000 epochs, then your graph contains at least 1,000,000 nodes at the end, before taking into account preprocessing, postprocessing, and more complex models such as recurrent nets, encoder-decoder, attentional models, etc.

\n\n

The problem is that eventually the graph would be too large to fit into video memory (6 GB in my case), so TF would shuttle parts of the graph from video to main memory and back. Eventually it would even get too large for main memory (12 GB) and start moving between main memory and the hard disk. Needless to say, this made things incredibly, and increasingly, slow as training went on. Before developing this save-model/clear-session/reload-model flow, I calculated that, at the per-epoch rate of slowdown I experienced, my model would have taken longer than the age of the universe to finish training.

\n\n
\n

Disclaimer: I haven't used TensorFlow in almost a year, so this might have changed. I remember there being quite a few GitHub issues around this so hopefully it has since been fixed.

\n
\n", "system": "" }, { "instruction": "What are possible values for data_augmentation_options in the TensorFlow Object Detection pipeline configuration?", "input": "", "output": "

The list of options is provided in preprocessor.proto:

\n\n
NormalizeImage normalize_image = 1;\nRandomHorizontalFlip random_horizontal_flip = 2;\nRandomPixelValueScale random_pixel_value_scale = 3;\nRandomImageScale random_image_scale = 4;\nRandomRGBtoGray random_rgb_to_gray = 5;\nRandomAdjustBrightness random_adjust_brightness = 6;\nRandomAdjustContrast random_adjust_contrast = 7;\nRandomAdjustHue random_adjust_hue = 8;\nRandomAdjustSaturation random_adjust_saturation = 9;\nRandomDistortColor random_distort_color = 10;\nRandomJitterBoxes random_jitter_boxes = 11;\nRandomCropImage random_crop_image = 12;\nRandomPadImage random_pad_image = 13;\nRandomCropPadImage random_crop_pad_image = 14;\nRandomCropToAspectRatio random_crop_to_aspect_ratio = 15;\nRandomBlackPatches random_black_patches = 16;\nRandomResizeMethod random_resize_method = 17;\nScaleBoxesToPixelCoordinates scale_boxes_to_pixel_coordinates = 18;\nResizeImage resize_image = 19;\nSubtractChannelMean subtract_channel_mean = 20;\nSSDRandomCrop ssd_random_crop = 21;\nSSDRandomCropPad ssd_random_crop_pad = 22;\nSSDRandomCropFixedAspectRatio ssd_random_crop_fixed_aspect_ratio = 23;\n
\n\n

You can see the details about each option in preprocessor.py. Arguments can be provided as key-value pairs.

\n\n
  data_augmentation_options {\n    ssd_random_crop {\n    }\n  }\n  data_augmentation_options {\n    random_pixel_value_scale {\n      minval: 0.6\n    }\n  }\n
\n", "system": "" }, { "instruction": "How to set layer-wise learning rate in Tensorflow?", "input": "", "output": "

It can be achieved quite easily with 2 optimizers:

\n\n
var_list1 = [variables from first 5 layers]\nvar_list2 = [the rest of variables]\ntrain_op1 = GradientDescentOptimizer(0.00001).minimize(loss, var_list=var_list1)\ntrain_op2 = GradientDescentOptimizer(0.0001).minimize(loss, var_list=var_list2)\ntrain_op = tf.group(train_op1, train_op2)\n
\n\n

One disadvantage of this implementation is that it computes tf.gradients(.) twice inside the optimizers and thus it might not be optimal in terms of execution speed. This can be mitigated by explicitly calling tf.gradients(.), splitting the list into 2 and passing corresponding gradients to both optimizers.

\n\n

Related question: Holding variables constant during optimizer

\n\n

EDIT: Added a more efficient but longer implementation:

\n\n
var_list1 = [variables from first 5 layers]\nvar_list2 = [the rest of variables]\nopt1 = tf.train.GradientDescentOptimizer(0.00001)\nopt2 = tf.train.GradientDescentOptimizer(0.0001)\ngrads = tf.gradients(loss, var_list1 + var_list2)\ngrads1 = grads[:len(var_list1)]\ngrads2 = grads[len(var_list1):]\ntrain_op1 = opt1.apply_gradients(zip(grads1, var_list1))\ntrain_op2 = opt2.apply_gradients(zip(grads2, var_list2))\ntrain_op = tf.group(train_op1, train_op2)\n
\n\n

You can use tf.trainable_variables() to get all training variables and decide to select from them.\nThe difference is that in the first implementation tf.gradients(.) is called twice inside the optimizers. This may cause some redundant operations to be executed (e.g. gradients on the first layer can reuse some computations for the gradients of the following layers).

\n", "system": "" }, { "instruction": "Tensorflow installation error: not a supported wheel on this platform", "input": "", "output": "

I too got the same problem.

\n

I downloaded get-pip.py from https://bootstrap.pypa.io/get-pip.py and then ran python2.7 get-pip.py for installing pip2.7.

\n

And then ran the pip install command with python2.7 as follows.

\n

For Ubuntu/Linux:

\n
python2.7 -m pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n
\n

For Mac OS X:

\n
python2.7 -m pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n
\n

This should work just fine as it did for me :)

\n

I followed these instructions from here.

\n", "system": "" }, { "instruction": "Is Tensorflow compatible with a Windows workflow?", "input": "", "output": "

Updated 11/28/2016: Today we released the first release candidate of TensorFlow 0.12, which includes support for Windows. You can install the Python bindings using the following command in a Python shell:

\n\n
C:\\> pip install tensorflow\n
\n\n

...or, if you want GPU support:

\n\n
C:\\> pip install tensorflow-gpu\n
\n\n

You can also build TensorFlow yourself using Microsoft Visual C++ and NVCC (for the CUDA parts). The easiest way to build on Windows is currently to use the CMake build, and we will soon provide support for Bazel on Windows.

\n\n
\n\n

Previous answer: We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms.

\n\n

For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: http://tensorflow.org/get_started/os_setup.md#docker-based_installation

\n\n

It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is on the roadmap for Bazel 0.3. You can see the full Bazel roadmap here.

\n\n

In the meantime, you can follow issue 17 on the TensorFlow GitHub page.

\n", "system": "" }, { "instruction": "What's the difference between a Tensorflow Keras Model and Estimator?", "input": "", "output": "

As @jaromir pointed out - estimators are deprecated and unavailable from Tensorflow 2.16. Use the Keras APIs instead. From the documentation:

\n
\n

Warning: TensorFlow 2.15 included the final release of the tf-estimator package. Estimators will not be available in TensorFlow\n2.16 or after. See the migration guide for more information about how to convert off of Estimators.

\n
\n

Below is the original answer from 2018.

\n
\n

Background

\n

The Estimators API was added to Tensorflow in Release 1.1, and provides a high-level abstraction over lower-level Tensorflow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation of a complete model.

\n

\"\"

\n

Keras is similar to the Estimators API in that it abstracts deep learning model components such as layers, activation functions and optimizers, to make it easier for developers. It is a model-level library, and does not handle low-level operations, which is the job of tensor manipulation libraries, or backends. Keras supports three backends - Tensorflow, Theano and CNTK.

\n

Keras was not part of Tensorflow until Release 1.4.0 (2 Nov 2017). Now, when you use tf.keras (or talk about 'Tensorflow Keras'), you are simply using the Keras interface with the Tensorflow backend to build and train your model.

\n

\"\"

\n

So both the Estimator API and the Keras API provide a high-level API over the low-level core Tensorflow API, and you can use either to train your model. But in most cases, if you are working with Tensorflow, you'd want to use the Estimators API for the reasons listed below.

\n

Distribution

\n

You can conduct distributed training across multiple servers with the Estimators API, but not with Keras API.

\n

From the Tensorflow Keras Guide, it says that:

\n
\n

The Estimators API is used for training models for distributed environments.

\n
\n

And from the Tensorflow Estimators Guide, it says that:

\n
\n

You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.

\n
\n

Pre-made Estimator

\n

Whilst Keras provides abstractions that make building your models easier, you still have to write code to build your model. With Estimators, Tensorflow provides Pre-made Estimators, which are models that you can use straight away, simply by plugging in the hyperparameters.

\n

Pre-made Estimators are similar to how you'd work with scikit-learn. For example, the tf.estimator.LinearRegressor from Tensorflow is similar to the sklearn.linear_model.LinearRegression from scikit-learn.

\n

Integration with Other Tensorflow Tools

\n

Tensorflow provides a visualization tool called TensorBoard that helps you visualize your graph and statistics. By using an Estimator, you can easily save summaries to be visualized with TensorBoard.

\n

Converting Keras Model to Estimator

\n

To migrate a Keras model to an Estimator, use the tf.keras.estimator.model_to_estimator method.

\n", "system": "" }, { "instruction": "MemoryError in TensorFlow; and "successful NUMA node read from SysFS had negative value (-1)" with xen", "input": "", "output": "

Here is the code that prints the message \"successful NUMA node read from SysFS had negative value (-1)\"; it is not a fatal error, just a warning. The real error is the MemoryError in your File \"model_new.py\", line 85, in <module>. We would need more of the source to check this error. Try to make your model smaller or run it on a server with more RAM.

\n\n
\n\n

About NUMA node warning:

\n\n

https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L855

\n\n
// Attempts to read the NUMA node corresponding to the GPU device's PCI bus out\n// of SysFS. Returns -1 if it cannot...\nstatic int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal) \n{...\n  string filename =\n      port::Printf(\"/sys/bus/pci/devices/%s/numa_node\", pci_bus_id.c_str());\n  FILE *file = fopen(filename.c_str(), \"r\");\n  if (file == nullptr) {\n    LOG(ERROR) << \"could not open file to read NUMA node: \" << filename\n               << \"\\nYour kernel may have been built without NUMA support.\";\n    return kUnknownNumaNode;\n  } ...\n  if (port::safe_strto32(content, &value)) {\n    if (value < 0) {  // See http://b/18228951 for details on this path.\n      LOG(INFO) << \"successful NUMA node read from SysFS had negative value (\"\n                << value << \"), but there must be at least one NUMA node\"\n                            \", so returning NUMA node zero\";\n      fclose(file);\n      return 0;\n    }\n
\n\n

TensorFlow was able to open the /sys/bus/pci/devices/%s/numa_node file, where %s is the id of the GPU PCI card (string pci_bus_id = CUDADriver::GetPCIBusID(device_)). Your PC is not multisocket: there is only a single CPU socket with an 8-core Xeon E5-2670 installed, so this id should be '0' (a single NUMA node is numbered 0 in Linux), but the error message says there was a -1 value in this file!

\n\n

So, we know that sysfs is mounted into /sys, there is a numa_node special file, and CONFIG_NUMA is enabled in your Linux kernel config (zgrep NUMA /boot/config* /proc/config*). Actually it is enabled: CONFIG_NUMA=y - in the deb of your x86_64 4.4.0-78-generic kernel

\n\n

The special file numa_node is documented in https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci (is the ACPI of your PC wrong?)

\n\n
What:       /sys/bus/pci/devices/.../numa_node\nDate:       Oct 2014\nContact:    Prarit Bhargava <prarit@redhat.com>\nDescription:\n        This file contains the NUMA node to which the PCI device is\n        attached, or -1 if the node is unknown.  The initial value\n        comes from an ACPI _PXM method or a similar firmware\n        source.  If that is missing or incorrect, this file can be\n        written to override the node.  In that case, please report\n        a firmware bug to the system vendor.  Writing to this file\n        taints the kernel with TAINT_FIRMWARE_WORKAROUND, which\n        reduces the supportability of your system.\n
\n\n

There is a quick (kludge) workaround for this error: find the numa_node of your GPU, and with the root account run this command after every boot, where NNNNN is the PCI id of your card (search in the lspci output and in the /sys/bus/pci/devices/ directory):

\n\n
echo 0 | sudo tee -a /sys/bus/pci/devices/NNNNN/numa_node\n
\n\n

Or just echo it into every such file, it should be rather safe:

\n\n
for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done\n
\n\n

Also, your lshw shows that it is not a PC, but a Xen virtual guest. There is something wrong between the Xen platform (ACPI) emulation and the Linux PCI bus NUMA-support code.

\n", "system": "" }, { "instruction": "What is num_units in tensorflow BasicLSTMCell?", "input": "", "output": "

From this brilliant article

\n
\n

num_units can be interpreted as the analogy of hidden layer from the feed forward neural network. The number of nodes in hidden layer of a feed forward neural network is equivalent to num_units number of LSTM units in a LSTM cell at every time step of the network.

\n
\n

See the image there too!

\n

\"enter

\n", "system": "" }, { "instruction": "How to fix "AttributeError: module 'tensorflow' has no attribute 'get_default_graph'"?", "input": "", "output": "

Please try:

\n\n

from tensorflow.keras.models import Sequential

\n\n

instead of

\n\n

from keras.models import Sequential

\n", "system": "" }, { "instruction": "Tensorflow r1.0 : could not a find a version that satisfies the requirement tensorflow", "input": "", "output": "

I had the same problem.

\n\n

The command below solved my problem:

\n\n
pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py3-none-any.whl\n
\n\n

To find the list of all the URLs based on the Python version and CPU-only or GPU builds, refer to:\nhttps://www.tensorflow.org/install/pip

\n", "system": "" }, { "instruction": "How does one debug NaN values in TensorFlow?", "input": "", "output": "

There are a couple of reasons WHY you can get a NaN result. Often it is because of too high a learning rate, but plenty of other reasons are possible, for example corrupt data in your input queue or a log-of-0 calculation.

\n\n

Anyhow, debugging with a print as you describe cannot be done with a simple print (as this would only print the tensor information inside the graph, not any actual values).

\n\n

However, if you use tf.print as an op in building the graph (tf.print), then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).

\n\n

However, you are not using the print statement in quite the correct manner. This is an op, so you need to pass it a tensor and use the result tensor later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:

\n\n
Z = tf.sqrt(Delta_tilde)\nZ = tf.Print(Z,[Z], message=\"my Z-values:\") # <-------- TF PRINT STATEMENT\nZ = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)\nZ = tf.pow(Z, 2.0)\n
\n", "system": "" }, { "instruction": "TensorBoard - Plot training and validation losses on the same graph?", "input": "", "output": "

The work-around I have been doing is to use two SummaryWriters with different log dirs for the training set and the cross-validation set respectively. And you will see something like this:

\n\n

\"enter

\n", "system": "" }, { "instruction": "Loss function for class imbalanced binary classifier in Tensor flow", "input": "", "output": "

You can add class weights to the loss function by multiplying logits.\nRegular cross entropy loss is this:

\n\n
loss(x, class) = -log(exp(x[class]) / (\\sum_j exp(x[j])))\n               = -x[class] + log(\\sum_j exp(x[j]))\n
\n\n

in weighted case:

\n\n
loss(x, class) = weights[class] * -x[class] + log(\\sum_j exp(weights[class] * x[j]))\n
\n\n

So by multiplying logits, you are re-scaling predictions of each class by its class weight.

\n\n

For example:

\n\n
ratio = 31.0 / (500.0 + 31.0)\nclass_weight = tf.constant([ratio, 1.0 - ratio])\nlogits = ... # shape [batch_size, 2]\nweighted_logits = tf.mul(logits, class_weight) # shape [batch_size, 2]\nxent = tf.nn.softmax_cross_entropy_with_logits(\n  weighted_logits, labels, name=\"xent_raw\")\n
\n\n

There is a standard losses function now that supports weights per batch:

\n\n
tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)\n
\n\n

Where weights should be transformed from class weights to a weight per example (with shape [batch_size]). See documentation here.
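That conversion from class weights to per-example weights of shape [batch_size] is just an indexing by label; in TensorFlow this would typically be a gather, but a pure-Python sketch makes the shape change clear (the label batch here is hypothetical):

```python
# Class weights as in the example above.
ratio = 31.0 / (500.0 + 31.0)
class_weight = [ratio, 1.0 - ratio]   # one weight per class

labels = [0, 1, 1, 0, 1]              # hypothetical batch of sparse labels

# One weight per example: this list has "shape" [batch_size], which is
# the form the weights argument of the loss expects.
weights = [class_weight[label] for label in labels]
print(weights)
```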

\n", "system": "" }, { "instruction": "Tensorflow._api.v2.train has no attribute 'AdamOptimizer'", "input": "", "output": "
tf.train.AdamOptimizer() => tf.optimizers.Adam()\n
\n\n

From https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers

\n", "system": "" }, { "instruction": "Tensorflow Compile Runs For A Long Time", "input": "", "output": "

Unfortunately, some programs can take a long time to compile. A couple of hours of compilation is not strange for tensorflow on your setup.

\n

There are reports of it taking 50 minutes on a considerably faster machine.

\n

A solution to this problem is to use pre-compiled binaries that are available with pip, instructions can be found here: https://www.tensorflow.org/install/pip.html

\n

Basically you can do this:

\n
pip install tensorflow\n
\n

If you require a specific older version, like 1.15, you can do this:

\n
pip install tensorflow==1.15\n
\n

For gpu support you add [and-cuda] to the package name, like this:

\n
pip install tensorflow[and-cuda]\n
\n

And:

\n
pip install tensorflow[and-cuda]==1.15\n
\n", "system": "" }, { "instruction": "What is the difference between a sigmoid followed by the cross entropy and sigmoid_cross_entropy_with_logits in TensorFlow?", "input": "", "output": "

You're confusing the cross-entropy for binary and multi-class problems.

\n\n

Multi-class cross-entropy

\n\n

The formula that you use is correct and it directly corresponds to tf.nn.softmax_cross_entropy_with_logits:

\n\n\n\n
-tf.reduce_sum(p * tf.log(q), axis=1)\n
\n\n

p and q are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:

\n\n
p = tf.placeholder(tf.float32, shape=[None, 2])\nlogit_q = tf.placeholder(tf.float32, shape=[None, 2])\nq = tf.nn.softmax(logit_q)\n\nfeed_dict = {\n  p: [[0, 1],\n      [1, 0],\n      [1, 0]],\n  logit_q: [[0.2, 0.8],\n            [0.7, 0.3],\n            [0.5, 0.5]]\n}\n\nprob1 = -tf.reduce_sum(p * tf.log(q), axis=1)\nprob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)\nprint(prob1.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]\nprint(prob2.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]\n
\n\n

Note that q is computing tf.nn.softmax, i.e. outputs a probability distribution. So it's still multi-class cross-entropy formula, only for N = 2.
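You can verify the first of those numbers by hand; a pure-Python recomputation of the softmax cross-entropy for the first row (p = [0, 1], logit_q = [0.2, 0.8]):

```python
import math

def softmax_xent(p, logits):
    # Cross-entropy between the distribution p and softmax(logits).
    exps = [math.exp(x) for x in logits]
    q = [e / sum(exps) for e in exps]
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# First row of the feed_dict above.
print(softmax_xent([0, 1], [0.2, 0.8]))  # ~0.437488, matching prob1/prob2
```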

\n\n

Binary cross-entropy

\n\n

This time the correct formula is

\n\n
p * -tf.log(q) + (1 - p) * -tf.log(1 - q)\n
\n\n

Though mathematically it's a partial case of the multi-class case, the meaning of p and q is different. In the simplest case, each p and q is a number, corresponding to a probability of the class A.

\n\n

Important: Don't get confused by the common p * -tf.log(q) part and the sum. The previous p was a one-hot vector; now it's a number, zero or one. Same for q - it was a probability distribution, now it's a number (probability).

\n\n

If p is a vector, each individual component is considered an independent binary classification. See this answer that outlines the difference between softmax and sigmoid functions in tensorflow. So the definition p = [0, 0, 0, 1, 0] doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 is on. The definition q = [0.2, 0.2, 0.2, 0.2, 0.2] means that each of 5 features is on with 20% probability.

\n\n

This explains the use of the sigmoid function before the cross-entropy: its goal is to squash the logit to the [0, 1] interval.

\n\n

The formula above still holds for multiple independent features, and that's exactly what tf.nn.sigmoid_cross_entropy_with_logits computes:

\n\n
p = tf.placeholder(tf.float32, shape=[None, 5])\nlogit_q = tf.placeholder(tf.float32, shape=[None, 5])\nq = tf.nn.sigmoid(logit_q)\n\nfeed_dict = {\n  p: [[0, 0, 0, 1, 0],\n      [1, 0, 0, 0, 0]],\n  logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2],\n            [0.3, 0.3, 0.2, 0.1, 0.1]]\n}\n\nprob1 = -p * tf.log(q)\nprob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q)\nprob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1-p) * -tf.log(1-tf.sigmoid(logit_q))\nprob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)\nprint(prob1.eval(feed_dict))\nprint(prob2.eval(feed_dict))\nprint(prob3.eval(feed_dict))\nprint(prob4.eval(feed_dict))\n
\n\n

You should see that the last three tensors are equal, while prob1 is only a part of the cross-entropy, so it contains the correct value only when p is 1:

\n\n
[[ 0.          0.          0.          0.59813893  0.        ]\n [ 0.55435514  0.          0.          0.          0.        ]]\n[[ 0.79813886  0.79813886  0.79813886  0.59813887  0.79813886]\n [ 0.5543552   0.85435522  0.79813886  0.74439669  0.74439669]]\n[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]\n [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]\n[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]\n [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]\n
\n\n

Now it should be clear that taking a sum of -p * tf.log(q) along axis=1 doesn't make sense in this setting, though it'd be a valid formula in the multi-class case.

\n", "system": "" }, { "instruction": "Reset weights in Keras layer", "input": "", "output": "
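A side note on numerics: the actual tf.nn.sigmoid_cross_entropy_with_logits implementation doesn't literally compute sigmoid followed by log; it uses the algebraically equivalent, numerically stable form max(z, 0) - z * p + log(1 + exp(-|z|)) on the logits z. A pure-Python sketch of that identity (the sample values are arbitrary):

```python
import math

def naive_bce(p, z):
    """Binary cross-entropy computed literally from sigmoid(z)."""
    q = 1.0 / (1.0 + math.exp(-z))
    return -p * math.log(q) - (1 - p) * math.log(1 - q)

def stable_bce(p, z):
    """The numerically stable reformulation applied directly to logits."""
    return max(z, 0.0) - z * p + math.log(1.0 + math.exp(-abs(z)))

# the two agree for labels 0/1 over a range of logits
for p in (0.0, 1.0):
    for z in (-4.0, -0.1, 0.2, 3.0):
        assert abs(naive_bce(p, z) - stable_bce(p, z)) < 1e-9
```

For z = 0.2, p = 1 this gives ~0.5981, matching the values printed by the TensorFlow example above.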

Save the initial weights right after compiling the model but before training it:

\n\n
model.save_weights('model.h5')\n
\n\n

and then after training, \"reset\" the model by reloading the initial weights:

\n\n
model.load_weights('model.h5')\n
\n\n

This gives you an apples-to-apples comparison across different data sets and should be quicker than rebuilding and recompiling the entire model.

\n", "system": "" }, { "instruction": "How to make a custom activation function with only Python in Tensorflow?", "input": "", "output": "
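If you'd rather avoid the disk round-trip, the same reset can be done in memory with get_weights/set_weights. A minimal sketch, assuming a tf.keras model (the model and data here are illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(loss='mse', optimizer='adam')

initial_weights = model.get_weights()     # list of numpy arrays

model.fit(np.random.randn(32, 10), np.random.randn(32), epochs=1, verbose=0)

model.set_weights(initial_weights)        # "reset" without touching disk
assert all(np.array_equal(w0, w1)
           for w0, w1 in zip(initial_weights, model.get_weights()))
```

Note that this restores only the layer weights; the optimizer state (e.g. Adam moments) is not reset, whereas recompiling does reset it.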

Yes, there is!

\n

Credit:\nIt was hard to find the information and get it working, but here is an example built on the principles and code found here and here.

\n

Requirements:\nBefore we start, there are two requirements for this to succeed. First, you need to be able to write your activation as a function on numpy arrays. Second, you have to be able to write the derivative of that function, either as a function in Tensorflow (easier) or, in the worst-case scenario, as a function on numpy arrays.

\n

Writing Activation function:

\n

So let's take for example this function which we would want to use as an activation function:

\n
def spiky(x):\n    r = x % 1\n    if r <= 0.5:\n        return r\n    else:\n        return 0\n
\n

Which looks as follows:\n\"Spiky

\n

The first step is turning it into a numpy function, which is easy:

\n
import numpy as np\nnp_spiky = np.vectorize(spiky)\n
\n

Now we should write its derivative.

\n

Gradient of Activation:\nIn our case it is easy: it is 1 if x mod 1 <= 0.5 and 0 otherwise. So:

\n
def d_spiky(x):\n    r = x % 1\n    if r <= 0.5:\n        return 1\n    else:\n        return 0\nnp_d_spiky = np.vectorize(d_spiky)\n
\n

Now for the hard part of making a TensorFlow function out of it.

\n

Turning a numpy function into a tensorflow function:\nWe will start by turning np_d_spiky into a tensorflow function. There is a function in tensorflow, tf.py_func(func, inp, Tout, stateful=stateful, name=name) [doc], which transforms any numpy function into a tensorflow function, so we can use it:

\n
import tensorflow as tf\nfrom tensorflow.python.framework import ops\n\nnp_d_spiky_32 = lambda x: np_d_spiky(x).astype(np.float32)\n\n\ndef tf_d_spiky(x,name=None):\n    with tf.name_scope(name, "d_spiky", [x]) as name:\n        y = tf.py_func(np_d_spiky_32,\n                        [x],\n                        [tf.float32],\n                        name=name,\n                        stateful=False)\n        return y[0]\n
\n

tf.py_func acts on lists of tensors (and returns a list of tensors), which is why we have [x] (and return y[0]). The stateful option tells tensorflow whether the function always gives the same output for the same input (stateful = False), in which case tensorflow can simplify the graph; this is our case and will probably be the case in most situations. One thing to be careful of at this point is that numpy uses float64 but tensorflow uses float32, so you need to convert your function to use float32 before you can convert it to a tensorflow function, otherwise tensorflow will complain. This is why we need to make np_d_spiky_32 first.

\n

What about the Gradients? The problem with only doing the above is that even though we now have tf_d_spiky which is the tensorflow version of np_d_spiky, we couldn't use it as an activation function if we wanted to because tensorflow doesn't know how to calculate the gradients of that function.

\n

Hack to get Gradients: As explained in the sources mentioned above, there is a hack to define gradients of a function using tf.RegisterGradient [doc] and tf.Graph.gradient_override_map [doc]. Copying the code from harpone we can modify the tf.py_func function to make it define the gradient at the same time:

\n
def py_func(func, inp, Tout, stateful=True, name=None, grad=None):\n    \n    # Need to generate a unique name to avoid duplicates:\n    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))\n    \n    tf.RegisterGradient(rnd_name)(grad)  # see _MySquareGrad for grad example\n    g = tf.get_default_graph()\n    with g.gradient_override_map({"PyFunc": rnd_name}):\n        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)\n
\n

Now we are almost done; the only thing left is that the grad function we need to pass to the above py_func function needs to take a special form: it takes in an operation and the incoming gradients from after the operation, and propagates the gradients backward through the operation.

\n

Gradient Function: So for our spiky activation function that is how we would do it:

\n
def spikygrad(op, grad):\n    x = op.inputs[0]\n\n    n_gr = tf_d_spiky(x)\n    return grad * n_gr  \n
\n

The activation function has only one input, that is why x = op.inputs[0]. If the operation had many inputs, we would need to return a tuple, one gradient for each input. For example, if the operation was a - b, the gradient with respect to a is +1 and with respect to b is -1, so we would have return +1*grad, -1*grad. Notice that we need to return tensorflow functions of the input, which is why we need tf_d_spiky: np_d_spiky would not have worked because it cannot act on tensorflow tensors. Alternatively, we could have written the derivative using tensorflow functions:

\n
def spikygrad2(op, grad):\n    x = op.inputs[0]\n    r = tf.mod(x,1)\n    n_gr = tf.to_float(tf.less_equal(r, 0.5))\n    return grad * n_gr  \n
\n

Combining it all together: Now that we have all the pieces, we can combine them all together:

\n
np_spiky_32 = lambda x: np_spiky(x).astype(np.float32)\n\ndef tf_spiky(x, name=None):\n    \n    with tf.name_scope(name, "spiky", [x]) as name:\n        y = py_func(np_spiky_32,\n                        [x],\n                        [tf.float32],\n                        name=name,\n                        grad=spikygrad)  # <-- here's the call to the gradient\n        return y[0]\n
\n

And now we are done, and we can test it.

\n

Test:

\n
with tf.Session() as sess:\n\n    x = tf.constant([0.2,0.7,1.2,1.7])\n    y = tf_spiky(x)\n    tf.initialize_all_variables().run()\n    \n    print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval())\n
\n
\n

[ 0.2 0.69999999 1.20000005 1.70000005] [ 0.2 0. 0.20000005 0.] [ 1. 0. 1. 0.]

\n
\n

Success!

\n", "system": "" }, { "instruction": "Cuda 12 + tf-nightly 2.12: Could not find cuda drivers on your machine, GPU will not be used, while every checking is fine and in torch it works", "input": "", "output": "
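For readers on modern TensorFlow: tf.py_func and gradient_override_map are gone in TF 2.x, and the whole recipe above collapses into tf.custom_gradient. A sketch of the same spiky activation (same test values as above; TF 2.x eager mode assumed):

```python
import tensorflow as tf

@tf.custom_gradient
def tf2_spiky(x):
    r = tf.math.floormod(x, 1.0)
    y = tf.where(r <= 0.5, r, tf.zeros_like(x))

    def grad(dy):
        # derivative is 1 where x mod 1 <= 0.5, else 0
        return dy * tf.cast(r <= 0.5, x.dtype)

    return y, grad

x = tf.constant([0.2, 0.7, 1.2, 1.7])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf2_spiky(x)
g = tape.gradient(y, x)
```

No py_func, no float32 conversion dance, and the gradient definition lives right next to the forward pass.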

I think that, as of March 2023, the only tensorflow distribution for cuda 12 is the docker package from NVIDIA.

\n

A tf package for cuda 12 should show the following info

\n
>>> tf.sysconfig.get_build_info() \nOrderedDict([('cpu_compiler', '/usr/bin/x86_64-linux-gnu-gcc-11'), \n('cuda_compute_capabilities', ['compute_86']), \n('cuda_version', '12.0'), ('cudnn_version', '8'), \n('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)])\n
\n

But if we run tf.sysconfig.get_build_info() on any tensorflow package installed via pip, it still reports that cuda_version is 11.x

\n

So your alternatives are:

\n\n", "system": "" }, { "instruction": "Keras: change learning rate", "input": "", "output": "

You can change the learning rate as follows:

\n\n
from keras import backend as K\nK.set_value(model.optimizer.learning_rate, 0.001)\n
\n\n

Included into your complete example it looks as follows:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras import backend as K\nimport keras\nimport numpy as np\n\nmodel = Sequential()\n\nmodel.add(Dense(1, input_shape=(10,)))\n\noptimizer = keras.optimizers.Adam(lr=0.01)\nmodel.compile(loss='mse', optimizer=optimizer)\n\nprint(\"Learning rate before first fit:\", model.optimizer.learning_rate.numpy())\n\nmodel.fit(np.random.randn(50,10), np.random.randn(50), epochs=50, verbose=0)\n\n# Change learning rate to 0.001 and train for 50 more epochs\nK.set_value(model.optimizer.learning_rate, 0.001)\nprint(\"Learning rate before second fit:\", model.optimizer.learning_rate.numpy())\n\nmodel.fit(np.random.randn(50,10), \n          np.random.randn(50), \n          initial_epoch=50, \n          epochs=50,\n          verbose=0)\n
\n\n

I've just tested this with keras 2.3.1. Not sure why the approach didn't seem to work for you.

\n", "system": "" }, { "instruction": "How to extract data/labels back from TensorFlow dataset", "input": "", "output": "
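An alternative to poking the optimizer variable directly is to let a callback adjust it per epoch. A sketch with tf.keras (the model and schedule here are illustrative): the LearningRateScheduler callback sets the optimizer's learning rate at the start of each epoch.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))

def schedule(epoch, lr):
    # keep 0.01 for the first two epochs, then drop to 0.001
    return 0.01 if epoch < 2 else 0.001

model.fit(np.random.randn(8, 10), np.random.randn(8),
          epochs=3, verbose=0,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(schedule)])

# after fitting, the optimizer holds the last scheduled rate
assert abs(float(model.optimizer.learning_rate.numpy()) - 0.001) < 1e-8
```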

In case your tf.data.Dataset is batched, the following code will retrieve all the y labels:

\n
y = np.concatenate([y for x, y in ds], axis=0)\n
\n

Quick explanation: [y for x, y in ds] is known as "list comprehension" in python. If the dataset is batched, this expression will loop through each batch and put each batch's y (a TF 1D tensor) in the list, and return it. Then, np.concatenate will take this list of 1-D tensors (implicitly casting them to numpy) and stack them along axis 0 to produce a single long vector. In summary, it is just converting a bunch of little 1-D vectors into one long vector.

\n
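The same comprehension works for the features, too. A tiny self-contained sketch (toy arrays, batched dataset) that recovers both x and y:

```python
import numpy as np
import tensorflow as tf

x_data = np.arange(10, dtype=np.float32).reshape(5, 2)
y_data = np.arange(5, dtype=np.int64)

ds = tf.data.Dataset.from_tensor_slices((x_data, y_data)).batch(2)

# concatenate the per-batch tensors back into full arrays
x = np.concatenate([x for x, y in ds], axis=0)
y = np.concatenate([y for x, y in ds], axis=0)

assert np.array_equal(x, x_data) and np.array_equal(y, y_data)
```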

Note: if your y is more complex, this answer will need some minor modification.

\n", "system": "" }, { "instruction": "Tensorflow estimator ValueError: logits and labels must have the same shape ((?, 1) vs (?,))", "input": "", "output": "

You should reshape your labels as a 2-D tensor (the first dimension is the batch dimension and the second is the scalar label):

\n\n
# Our vectorized labels\ny_train = np.asarray(train_labels).astype('float32').reshape((-1,1))\ny_test = np.asarray(test_labels).astype('float32').reshape((-1,1))\n
\n", "system": "" }, { "instruction": "How to Properly Combine TensorFlow's Dataset API and Keras?", "input": "", "output": "

Update June 09, 2018

\n\n\n\n\n\n
# Load mnist training data\n(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()\ntraining_set = tfdata_generator(x_train, y_train,is_training=True)\n\nmodel = # your keras model here              \nmodel.fit(\n    training_set.make_one_shot_iterator(),\n    steps_per_epoch=len(x_train) // 128,\n    epochs=5,\n    verbose = 1)\n
\n\n\n\n\n\n
def tfdata_generator(images, labels, is_training, batch_size=128):\n  '''Construct a data generator using `tf.Dataset`. '''\n\n  def map_fn(image, label):\n    '''Preprocess raw data to trainable input. '''\n    x = tf.reshape(tf.cast(image, tf.float32), (28, 28, 1))\n    y = tf.one_hot(tf.cast(label, tf.uint8), _NUM_CLASSES)\n    return x, y\n\n  dataset = tf.data.Dataset.from_tensor_slices((images, labels))\n\n  if is_training:\n    dataset = dataset.shuffle(1000)  # depends on sample size\n  dataset = dataset.map(map_fn)\n  dataset = dataset.batch(batch_size)\n  dataset = dataset.repeat()\n  dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)\n\n  return dataset\n
\n\n

Old Solution:

\n\n

In addition to @Yu-Yang's answer, you can also modify tf.data.Dataset to become a generator for fit_generator as follows:

\n\n
from tensorflow.contrib.learn.python.learn.datasets import mnist\n\ndata   = mnist.load_mnist()\nmodel  = # your Keras model\nmodel.fit_generator(generator = tfdata_generator(data.train.images, data.train.labels),\n                    steps_per_epoch=200,\n                    workers = 0 , # This is important\n                    verbose = 1)\n\n\ndef tfdata_generator(images, labels, batch_size=128, shuffle=True,):\n    def map_func(image, label):\n        '''A transformation function'''\n        x_train = tf.reshape(tf.cast(image, tf.float32), image_shape)\n        y_train = tf.one_hot(tf.cast(label, tf.uint8), num_classes)\n        return [x_train, y_train]\n\n    dataset  = tf.data.Dataset.from_tensor_slices((images, labels))\n    dataset  = dataset.map(map_func)\n    dataset  = dataset.shuffle(buffer_size=1000).batch(batch_size).repeat()\n    iterator = dataset.make_one_shot_iterator()\n\n    next_batch = iterator.get_next()\n    while True:\n        yield K.get_session().run(next_batch)\n
\n", "system": "" }, { "instruction": "Understanding the ResourceExhaustedError: OOM when allocating tensor with shape", "input": "", "output": "

Let's divide the issues one by one:

\n\n

Regarding tensorflow allocating all GPU memory in advance, you can use the following code snippet to let tensorflow allocate memory only when it is needed, so that you can understand how things are going.

\n\n
gpu_options = tf.GPUOptions(allow_growth=True)\nsession = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))\n
\n\n

This works equally with tf.Session() instead of tf.InteractiveSession() if you prefer.

\n\n

Second, about the sizes: as there is no information about your network size, we cannot estimate what is going wrong. However, you can debug the whole network step by step. For example, create a network with only one layer, get its output, create a session, feed values once, and see how much memory you consume. Iterate this debugging session until you find the point where you run out of memory.

\n\n

Please be aware that a 3840 x 155229 output is really, REALLY big. It means ~600M values, i.e. ~2.22 GiB for that single layer's output alone (at 4 bytes per float32). If you have any similar-sized layers, all of them will add up to fill your GPU memory pretty fast.

\n\n
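The ~2.22 GiB figure is just shape arithmetic at 4 bytes per float32 value, and the same back-of-the-envelope estimate is worth doing for any suspicious layer:

```python
# Activation-memory estimate for one layer's output (float32 = 4 bytes)
batch, width = 3840, 155229
n_values = batch * width          # number of float32 values in the output
gib = n_values * 4 / 2**30        # bytes -> GiB
print(n_values, round(gib, 2))    # ~5.96e8 values, ~2.22 GiB
```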

Also, this is only the forward direction; if you are using this layer for training, back-propagation and the buffers added by the optimizer will multiply this size by 2. So, for training, you consume ~5 GB just for the output layer.

\n\n

I suggest revising your network and trying to reduce the batch size / parameter counts to fit the model on your GPU

\n", "system": "" }, { "instruction": "What is the purpose of tf.global_variables_initializer?", "input": "", "output": "

A more complete description is given here.

\n\n

Only after running tf.global_variables_initializer() in a session will your variables hold the values you told them to hold when you declare them (tf.Variable(tf.zeros(...)), tf.Variable(tf.random_normal(...)),...).

\n\n

From the TF doc :

\n\n
\n

Calling tf.Variable() adds several ops to the graph:

\n \n \n
\n\n

And also:

\n\n
\n

Variable initializers must be run explicitly before other ops in your\n model can be run. The easiest way to do that is to add an op that runs\n all the variable initializers, and run that op before using the model.

\n
\n", "system": "" }, { "instruction": ""freeze" some variables/scopes in tensorflow: stop_gradient vs passing variables to minimize", "input": "", "output": "
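A minimal sketch of the failure mode and the fix (TF1-style graph mode; on TF 2.x this is runnable through the compat.v1 shim, which is an assumption of this example):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(tf.zeros([2]))
init = tf.global_variables_initializer()

sess = tf.Session()
try:
    sess.run(w)          # fails: the variable exists but was never initialized
    raised = False
except tf.errors.FailedPreconditionError:
    raised = True

sess.run(init)           # runs the assignment ops added by tf.Variable()
value = sess.run(w)      # now holds the declared value: [0. 0.]
```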

The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to opt.minimize(cost, ...). By default, the optimizer will use all of the variables in tf.trainable_variables(). If you want to filter the variables to a particular scope, you can use the optional scope argument to tf.get_collection() as follows:

\n\n
optimizer = tf.train.AdagradOptimizer(0.01)\n\nfirst_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n                                     \"scope/prefix/for/first/vars\")\nfirst_train_op = optimizer.minimize(cost, var_list=first_train_vars)\n\nsecond_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n                                      \"scope/prefix/for/second/vars\")                     \nsecond_train_op = optimizer.minimize(cost, var_list=second_train_vars)\n
\n", "system": "" }, { "instruction": "Tensorflow python : Accessing individual elements in a tensor", "input": "", "output": "
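For completeness, the tf.stop_gradient approach from the question blocks gradient flow through a tensor rather than filtering which variables get updated. In current (eager, TF 2.x) TensorFlow the idea looks like this sketch:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    h = x * x                           # this path will be blocked
    y = tf.stop_gradient(h) + 2.0 * x   # only the 2.0 * x path is differentiated

g = tape.gradient(y, x)  # 2.0: the x*x path contributes nothing
```

So stop_gradient is the right tool when you want to freeze part of a computation path, while var_list is the right tool when you want to freeze a set of variables.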

There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.

\n\n
    \n
  1. Use the indexing operator (based on tf.slice()) to extract a contiguous slice from the tensor.

    \n\n
    input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n\noutput = input[0, :]\nprint(sess.run(output))  # ==> [1 2 3]\n
    \n\n

    The indexing operator supports many of the same slice specifications as NumPy does.

  \n
  2. Use the tf.gather() op to select a non-contiguous slice from the tensor.

    \n\n
    input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n\noutput = tf.gather(input, 0)\nprint(sess.run(output))  # ==> [1 2 3]\n\noutput = tf.gather(input, [0, 2])\nprint(sess.run(output))  # ==> [[1 2 3] [7 8 9]]\n
    \n\n

    Note that tf.gather() only allows you to select whole slices in the 0th dimension (whole rows in the example of a matrix), so you may need to tf.reshape() or tf.transpose() your input to obtain the appropriate elements.

  \n
\n", "system": "" }, { "instruction": "Difference between Keras model.save() and model.save_weights()?", "input": "", "output": "

save() saves the weights and the model structure to a single HDF5 file. I believe it also includes things like the optimizer state. Then you can use that HDF5 file with keras.models.load_model() to reconstruct the whole model, including weights.

\n\n

save_weights() only saves the weights to HDF5 and nothing else. You need extra code to reconstruct the model architecture (e.g. from a JSON file) before you can load the weights into it.

\n", "system": "" }, { "instruction": "NotImplementedError: Layers with arguments in `__init__` must override `get_config`", "input": "", "output": "
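A quick sketch of the save_weights() half of the story (toy tf.keras model; the file name is arbitrary): the architecture must be rebuilt by your own code before the weights can be loaded back.

```python
import numpy as np
import tensorflow as tf

def build_model():
    # you are responsible for recreating this architecture yourself
    return tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

model = build_model()
model.save_weights('demo.weights.h5')   # weights only, no structure

restored = build_model()                # rebuild, then load
restored.load_weights('demo.weights.h5')

assert all(np.array_equal(a, b)
           for a, b in zip(model.get_weights(), restored.get_weights()))
```

With save() / load_model() the build_model() step would not be needed, since the structure travels inside the file.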

It's not a bug, it's a feature.

\n\n

This error lets you know that TF can't save your model, because it won't be able to load it.
\nSpecifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder.

\n\n

To solve this, just override their get_config method according to the new arguments you've added.

\n\n
\n

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

\n
\n\n
\n\n

For example, if your encoder class looks something like this:

\n\n
class encoder(tf.keras.layers.Layer):\n\n    def __init__(\n        self,\n        vocab_size, num_layers, units, d_model, num_heads, dropout,\n        **kwargs,\n    ):\n        super().__init__(**kwargs)\n        self.vocab_size = vocab_size\n        self.num_layers = num_layers\n        self.units = units\n        self.d_model = d_model\n        self.num_heads = num_heads\n        self.dropout = dropout\n\n    # Other methods etc.\n
\n\n

then you only need to override this method:

\n\n
    def get_config(self):\n\n        config = super().get_config().copy()\n        config.update({\n            'vocab_size': self.vocab_size,\n            'num_layers': self.num_layers,\n            'units': self.units,\n            'd_model': self.d_model,\n            'num_heads': self.num_heads,\n            'dropout': self.dropout,\n        })\n        return config\n
\n\n

When TF sees this (for both classes), you will be able to save the model.

\n\n

Because now when the model is loaded, TF will be able to reinstantiate the same layer from config.

\n\n
\n\n

Layer.from_config's source code may give a better sense of how it works:

\n\n
@classmethod\ndef from_config(cls, config):\n  return cls(**config)\n
\n", "system": "" }, { "instruction": "AttributeError: module 'tensorflow' has no attribute 'app'", "input": "", "output": "

try using import tensorflow.compat.v1 as tf

\n", "system": "" }, { "instruction": "UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually", "input": "", "output": "

Intro

\n\n

I'd like to add to olejorgenb's answer - for a specific scenario, where you don't want to train the model, just use it (e.g. in production).

\n\n

\"Compile\" means \"prepare for training\", which includes mainly setting up the optimizer. It could also have been saved before, and then you can continue the \"same\" training after loading the saved model.

\n\n

The fix

\n\n

But, what about the scenario - I want to just run the model? Well, use the compile=False argument to load_model like that:

\n\n
trained_model = load_model('model.h5', compile=False)\n
\n\n

You won't be able to .fit() this model without using trained_model.compile(...) first, but most importantly - the warning will go away.

\n\n

Misc Notes

\n\n

Btw, in my Keras version, the argument include_optimizer has a default of True. This should also work for training callbacks like Checkpoint. This means, when loading a model saved by Keras, you can usually count on the optimizer being included (except for the situation described in Hull Gasper's answer).

\n\n

But, when you have a model which was not trained by Keras (e.g. when converting a model trained by Darknet), the model is saved un-compiled. This produces the warning, and you can get rid of it in the way described above.

\n", "system": "" }, { "instruction": "ImportError: Could not import the Python Imaging Library (PIL) required to load image files on tensorflow", "input": "", "output": "

pip install pillow

\n\n

Then replace \nfrom IPython.display import display, Image\nwith\nfrom IPython.display import display\nfrom PIL import Image

\n", "system": "" }, { "instruction": "gensim Doc2Vec vs tensorflow Doc2Vec", "input": "", "output": "

Old question, but an answer would be useful for future visitors. So here are some of my thoughts.

\n\n

There are some problems in the tensorflow implementation:

\n\n\n\n

So for the OP's listed questions:

\n\n
    \n
  1. This implementation of doc2vec in tensorflow is working and correct in its own way, but it is different from both the gensim implementation and the paper.
  \n
  2. window is a 1-sided size, as said above. If the document size is less than the context size, then the smaller one would be used.
  \n
  3. There are many reasons why the gensim implementation is faster. First, gensim was optimized heavily; all operations are faster than naive python operations, especially data I/O. Second, some preprocessing steps such as min_count filtering in gensim would reduce the dataset size. More importantly, gensim uses negative_sampling_loss, which is much faster than sampled_softmax_loss; I guess this is the main reason.
  \n
  4. Is it easier to find something when there are many of them? Just kidding ;-)
    \nIt's true that there are many solutions in this non-convex optimization problem, so the model would just find a local optimum. Interestingly, in neural networks, most local optima are "good enough". It has been observed that stochastic gradient descent seems to find better local optima than larger-batch gradient descent, although this is still a riddle in current research.
  \n
\n", "system": "" }, { "instruction": "How can I solve 'ran out of gpu memory' in TensorFlow", "input": "", "output": "

I was encountering out of memory errors when training a small CNN on a GTX 970. Through somewhat of a fluke, I discovered that telling TensorFlow to allocate memory on the GPU as needed (instead of up front) resolved all my issues. This can be accomplished using the following Python code:

\n\n
    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    sess = tf.Session(config=config)\n
\n\n

Previously, TensorFlow would pre-allocate ~90% of GPU memory. For some unknown reason, this would later result in out-of-memory errors even though the model could fit entirely in GPU memory. By using the above code, I no longer have OOM errors.

\n\n

Note: If the model is too big to fit in GPU memory, this probably won't help!

\n", "system": "" }, { "instruction": "Tensorflow read images with labels", "input": "", "output": "

Using slice_input_producer provides a solution which is much cleaner. Slice Input Producer allows us to create an Input Queue containing arbitrarily many separable values. This snippet of the question would look like this:

\n\n
def read_labeled_image_list(image_list_file):\n    \"\"\"Reads a .txt file containing paths and labels\n    Args:\n       image_list_file: a .txt file with one /path/to/image per line\n       label: optionally, if set label will be pasted after each line\n    Returns:\n       List with all filenames in file image_list_file\n    \"\"\"\n    f = open(image_list_file, 'r')\n    filenames = []\n    labels = []\n    for line in f:\n        filename, label = line[:-1].split(' ')\n        filenames.append(filename)\n        labels.append(int(label))\n    return filenames, labels\n\ndef read_images_from_disk(input_queue):\n    \"\"\"Consumes a single filename and label as a ' '-delimited string.\n    Args:\n      filename_and_label_tensor: A scalar string tensor.\n    Returns:\n      Two tensors: the decoded image, and the string label.\n    \"\"\"\n    label = input_queue[1]\n    file_contents = tf.read_file(input_queue[0])\n    example = tf.image.decode_png(file_contents, channels=3)\n    return example, label\n\n# Reads paths of images together with their labels\nimage_list, label_list = read_labeled_image_list(filename)\n\nimages = ops.convert_to_tensor(image_list, dtype=dtypes.string)\nlabels = ops.convert_to_tensor(label_list, dtype=dtypes.int32)\n\n# Makes an input queue\ninput_queue = tf.train.slice_input_producer([images, labels],\n                                            num_epochs=num_epochs,\n                                            shuffle=True)\n\nimage, label = read_images_from_disk(input_queue)\n\n# Optional Preprocessing or Data Augmentation\n# tf.image implements most of the standard image augmentation\nimage = preprocess_image(image)\nlabel = preprocess_label(label)\n\n# Optional Image and Label Batching\nimage_batch, label_batch = tf.train.batch([image, label],\n                                          batch_size=batch_size)\n
\n\n

See also the generic_input_producer from the TensorVision examples for full input-pipeline.

\n", "system": "" }, { "instruction": "How do I start tensorflow docker jupyter notebook", "input": "", "output": "

For a Linux host Robert Graves answer will work, but for Mac OS X or Windows there is more to be done because docker runs in a virtual machine.

\n\n

So to begin launch the docker shell (or any shell if you are using Linux) and run the following command to launch a new TensorFlow container:

\n\n
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh\n
\n\n

Then for Mac OS X and Windows you need to do the following only once:

\n\n
    \n
  1. Open VirtualBox
  \n
  2. Click on the docker vm (mine was automatically named \"default\")
  \n
  3. Open the settings by clicking settings
  \n
  4. In the network settings open the port forwarding dialog
  \n
  5. Click the + symbol to add another port and connect a port from your mac to the VM by filling in the dialog as shown below. In this example I chose port 8810 because I run other notebooks using port 8888.\n\"enter
  \n
  6. Then open a browser and connect to http://localhost:8810 (or whichever port you set in the host port section)
  \n
  7. Make your fancy pants machine learning app!
  \n
\n", "system": "" }, { "instruction": "ImportError: Failed to import any qt binding, Python - Tensorflow", "input": "", "output": "

make sure you have PyQt5 installed. you may open a python shell and try:

\n\n
import PyQt5\n
\n\n

if it fails then you can install it via:

\n\n
pip install PyQt5\n
\n\n

If you are on macOS or Linux be careful that you might need to run

\n\n
pip3 install PyQt5\n
\n", "system": "" }, { "instruction": "What is the difference between CuDNNLSTM and LSTM in Keras?", "input": "", "output": "

Why don't you try it out for yourself and see?\nIn my case, training a model with LSTM took 10 minutes 30 seconds.\nSimply switching the call from LSTM() to CuDNNLSTM() took less than a minute.

\n\n

I also noticed that switching to CuDNNLSTM() speeds up model.evaluate() and model.predict() substantially as well.

\n", "system": "" }, { "instruction": "Tensorflow: How to replace or modify gradient?", "input": "", "output": "

For TensorFlow 1.7 and TensorFlow 2.0, see the edit below.

\n\n
\n\n

First define your custom gradient:

\n\n
@tf.RegisterGradient(\"CustomGrad\")\ndef _const_mul_grad(unused_op, grad):\n  return 5.0 * grad\n
\n\n

Since you want nothing to happen in the forward pass, override the gradient of an identity operation with your new gradient:

\n\n
g = tf.get_default_graph()\nwith g.gradient_override_map({\"Identity\": \"CustomGrad\"}):\n  output = tf.identity(input, name=\"Identity\")\n
\n\n

Here is a working example with a layer that clips gradients in the backwards pass and does nothing in the forwards pass, using the same method:

\n\n
import tensorflow as tf\n\n@tf.RegisterGradient(\"CustomClipGrad\")\ndef _clip_grad(unused_op, grad):\n  return tf.clip_by_value(grad, -0.1, 0.1)\n\ninput = tf.Variable([3.0], dtype=tf.float32)\n\ng = tf.get_default_graph()\nwith g.gradient_override_map({\"Identity\": \"CustomClipGrad\"}):\n  output_clip = tf.identity(input, name=\"Identity\")\ngrad_clip = tf.gradients(output_clip, input)\n\n# output without gradient clipping in the backwards pass for comparison:\noutput = tf.identity(input)\ngrad = tf.gradients(output, input)\n\nwith tf.Session() as sess:\n  sess.run(tf.global_variables_initializer())\n  print(\"with clipping:\", sess.run(grad_clip)[0])\n  print(\"without clipping:\", sess.run(grad)[0])\n
\n\n
\n\n

Edit for TensorFlow 1.7 and TensorFlow 2.0

\n\n

Since 1.7 there is a new way to redefine the gradient with shorter syntax, which also works with Tensorflow 2.0. It also allows you to redefine the gradient of multiple operations at the same time. Here are the examples from above, rewritten for TensorFlow 1.7 and TensorFlow 2.0:

\n\n

Layer that scales gradients in the backward pass:

\n\n
@tf.custom_gradient\ndef scale_grad_layer(x):\n  def grad(dy):\n    return 5.0 * dy\n  return tf.identity(x), grad\n
\n\n

Example with a layer that clips gradients in the backward pass:

\n\n
@tf.custom_gradient\ndef clip_grad_layer(x):\n  def grad(dy):\n    return tf.clip_by_value(dy, -0.1, 0.1)\n  return tf.identity(x), grad\n
\n", "system": "" }, { "instruction": "Custom loss function in Keras", "input": "", "output": "

All you have to do is define a function for that, using keras backend functions for calculations. The function must take the true values and the model predicted values.

\n

Now, since I'm not sure about what g, q, x and y are in your function, I'll just create a basic example here without caring about what it means or whether it's an actually useful function:

\n
import keras.backend as K\n\ndef customLoss(yTrue,yPred):\n    return K.sum(K.log(yTrue) - K.log(yPred))\n    \n
\n

All backend functions can be seen here.

\n

After that, compile your model using that function instead of a regular one:

\n
model.compile(loss=customLoss, optimizer = .....)\n
\n", "system": "" }, { "instruction": "Tensorflow crashes with CUBLAS_STATUS_ALLOC_FAILED", "input": "", "output": "
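A quick way to trust a custom loss is to call it on constant tensors before training with it. A sketch using plain tf ops (equivalent to the K.* calls above; the sample values are chosen so the terms cancel):

```python
import math
import tensorflow as tf

def customLoss(yTrue, yPred):
    # same formula as above, written with tf ops instead of keras.backend
    return tf.reduce_sum(tf.math.log(yTrue) - tf.math.log(yPred))

# log(1) - log(2) + log(2) - log(1) cancels to 0
val = float(customLoss(tf.constant([1.0, 2.0]), tf.constant([2.0, 1.0])))
assert abs(val) < 1e-6
```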

For TensorFlow 2.2 none of the other answers worked when the CUBLAS_STATUS_ALLOC_FAILED problem was encountered. Found a solution on https://www.tensorflow.org/guide/gpu:

\n
import tensorflow as tf\ngpus = tf.config.experimental.list_physical_devices('GPU')\nif gpus:\n    try:\n        # Currently, memory growth needs to be the same across GPUs\n        for gpu in gpus:\n            tf.config.experimental.set_memory_growth(gpu, True)\n        logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")\n    except RuntimeError as e:\n        # Memory growth must be set before GPUs have been initialized\n        print(e)\n
\n

I ran this code before any further calculations were made and found that the same code that had produced the CUBLAS error before now worked in the same session. The sample code above is a specific example that sets memory growth across a number of physical GPUs, but it also solves the memory expansion problem.

\n", "system": "" }, { "instruction": "CBOW v.s. skip-gram: why invert context and target words?", "input": "", "output": "

Here is my oversimplified and rather naive understanding of the difference:

\n

As we know, CBOW learns to predict a word from its context, i.e. it maximizes the probability of the target word given the context. And this happens to be a problem for rare words. For example, given the context yesterday was a really [...] day, a CBOW model will tell you that the word is most probably beautiful or nice. Words like delightful will get much less attention from the model, because it is designed to predict the most probable word. Rare words are smoothed over by the many training examples containing more frequent words.

\n

On the other hand, the skip-gram model is designed to predict the context. Given the word delightful it must understand it and tell us that there is a huge probability that the context is yesterday was a really [...] day, or some other relevant context. With skip-gram the word delightful will not try to compete with the word beautiful but instead, delightful+context pairs will be treated as new observations.

\n

UPDATE

\n

Thanks to @0xF for sharing this article

\n
\n

According to Mikolov

\n

Skip-gram: works well with small amount of the training data, represents well even rare words or phrases.

\n

CBOW: several times faster to train than the skip-gram, slightly better accuracy for the frequent words

\n
\n

One more addition to the subject is found here:

\n
\n

In the "skip-gram" mode alternative to "CBOW", rather than averaging\nthe context words, each is used as a pairwise training example. That\nis, in place of one CBOW example such as [predict 'ate' from\naverage('The', 'cat', 'the', 'mouse')], the network is presented with\nfour skip-gram examples [predict 'ate' from 'The'], [predict 'ate'\nfrom 'cat'], [predict 'ate' from 'the'], [predict 'ate' from 'mouse'].\n(The same random window-reduction occurs, so half the time that would\njust be two examples, of the nearest words.)

\n
\n", "system": "" }, { "instruction": "TensorFlow: Dst tensor is not initialized", "input": "", "output": "
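The quoted example can be sketched in a few lines of plain Python (hypothetical sentence and window size), showing how one CBOW window becomes several skip-gram pairs:

```python
sentence = ['The', 'cat', 'ate', 'the', 'mouse']
target_index = 2          # 'ate'
window = 2                # context words within 2 positions of the target

context = [w for i, w in enumerate(sentence)
           if i != target_index and abs(i - target_index) <= window]

# CBOW: one example, predicting the target from the whole context
cbow_example = (context, sentence[target_index])

# skip-gram: one pairwise example per context word
skipgram_examples = [(w, sentence[target_index]) for w in context]
print(skipgram_examples)
```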

In short, this error message is generated when there is not enough memory to handle the batch size.

\n\n

Expanding on Steven's link (I cannot post comments yet), here are a few tricks to monitor/control memory usage in Tensorflow:

\n\n\n", "system": "" }, { "instruction": "Error when checking target: expected dense_3 to have shape (3,) but got array with shape (1,)", "input": "", "output": "

The problem is with your label-data shape. In a multiclass problem you are predicting the probability of every possible class, so you must provide label data in (N, m) shape, where N is the number of training examples and m is the number of possible classes (3 in your case).

\n\n

Keras expects y-data in (N, 3) shape, not (N,) as you've probably provided, which is why it raises an error.

\n\n

Use e.g. OneHotEncoder to convert your label data to one-hot encoded form.

\n", "system": "" }, { "instruction": "What is the meaning of the "None" in model.summary of KERAS?", "input": "", "output": "
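A minimal sketch of the conversion in plain Python (sklearn's OneHotEncoder or keras.utils.to_categorical do the same job):

```python
labels = [0, 2, 1, 0]   # shape (N,) integer class ids
num_classes = 3

# shape (N, 3): one row per sample, a 1.0 in the column of its class
one_hot = [[1.0 if c == y else 0.0 for c in range(num_classes)] for y in labels]
print(one_hot)
```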

None means this dimension is variable.

\n\n

The first dimension in a keras model is always the batch size. You don't need fixed batch sizes, unless in very specific cases (for instance, when working with stateful=True LSTM layers).

\n\n

That's why this dimension is often ignored when you define your model. For instance, when you define input_shape=(100,200), actually you're ignoring the batch size and defining the shape of \"each sample\". Internally the shape will be (None, 100, 200), allowing a variable batch size, each sample in the batch having the shape (100,200).

\n\n

The batch size will be then automatically defined in the fit or predict methods.

\n\n
\n\n

Other None dimensions:

\n\n

Not only the batch dimension can be None, but many others as well.

\n\n

For instance, in a 2D convolutional network, where the expected input is (batchSize, height, width, channels), you can have shapes like (None, None, None, 3), allowing variable image sizes.

\n\n

In recurrent networks and in 1D convolutions, you can also make the length/timesteps dimension variable, with shapes like (None, None, featuresOrChannels)

\n", "system": "" }, { "instruction": "How do I get the weights of a layer in Keras?", "input": "", "output": "

If you want to get weights and biases of all layers, you can simply use:

\n\n
for layer in model.layers: print(layer.get_config(), layer.get_weights())\n
\n\n

This will print all information that's relevant.

\n\n

If you want the weights directly returned as numpy arrays, you can use:

\n\n
first_layer_weights = model.layers[0].get_weights()[0]\nfirst_layer_biases  = model.layers[0].get_weights()[1]\nsecond_layer_weights = model.layers[1].get_weights()[0]\nsecond_layer_biases  = model.layers[1].get_weights()[1]\n
\n\n

etc.

\n", "system": "" }, { "instruction": "What does tf.gfile do in TensorFlow?", "input": "", "output": "

For anyone landing here, the following answer was provided (by a googler) on: Why use tensorflow gfile? (for file I/O)

\n\n
\n

The main roles of the tf.gfile module are:

\n \n
    \n
  1. To provide an API that is close to Python's file objects, and
  2. To provide an implementation based on TensorFlow's C++ FileSystem API.
\n \n

The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a gs:// prefix),\n and HDFS (using an hdfs:// prefix). TensorFlow exports these as\n tf.gfile, so that you can use these implementations for saving and\n loading checkpoints, writing TensorBoard logs, and accessing training\n data (among other uses). However, if all of your files are local, you\n can use the regular Python file API without any problem.

\n
\n", "system": "" }, { "instruction": "What is difference between tf.truncated_normal and tf.random_normal?", "input": "", "output": "

The documentation says it all:\nFor the truncated normal distribution:

\n
\n

The values are drawn from a normal distribution with specified mean and standard deviation, discarding and re-drawing any samples that are more than two standard deviations from the mean.

\n
\n

It is probably easiest to understand the difference by plotting it for yourself (the % magics are there because I use a jupyter notebook):

\n
import tensorflow as tf\nimport matplotlib.pyplot as plt\n\n%matplotlib inline  \n\nn = 500000\nA = tf.truncated_normal((n,))\nB = tf.random_normal((n,))\nwith tf.Session() as sess:\n    a, b = sess.run([A, B])\n
\n

And now

\n
plt.hist(a, 100, (-4.2, 4.2));\nplt.hist(b, 100, (-4.2, 4.2));\n
\n

\"enter

\n
\n

The point of using a truncated normal is to overcome saturation of some functions like sigmoid (where if the value is too big/small, the neuron stops learning).

\n", "system": "" }, { "instruction": "Why do we need TensorFlow tf.Graph?", "input": "", "output": "
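The redraw rule can be sketched in plain Python with rejection sampling (a simplification for illustration; TensorFlow implements this internally):

```python
import random

def truncated_normal(mean=0.0, stddev=1.0):
    # Draw from a normal distribution, redrawing any sample that falls
    # more than two standard deviations from the mean
    while True:
        v = random.gauss(mean, stddev)
        if abs(v - mean) <= 2.0 * stddev:
            return v

samples = [truncated_normal() for _ in range(10000)]
print(max(abs(s) for s in samples))  # never exceeds 2.0
```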

TL;DR: It's unnecessary, but it's a good practice to follow.

\n\n

Since a default graph is always registered, every op and variable is placed into the default graph.\nA with tf.Graph().as_default(): statement, however, creates a new graph and places everything declared inside its scope into that graph.\nIf that graph is the only graph, it's useless. But it's a good practice, because if you start to work with many graphs it's easier to understand where ops and vars are placed.\nSince this statement costs you nothing, it's better to write it anyway, just to be sure that if you refactor the code in the future, the operations you define still belong to the graph you chose initially.

\n", "system": "" }, { "instruction": "CUDA_ERROR_OUT_OF_MEMORY in tensorflow", "input": "", "output": "

In case it's still relevant for someone, I encountered this issue when trying to run Keras/Tensorflow for the second time, after a first run was aborted. It seems the GPU memory is still allocated, and therefore cannot be allocated again. It was solved by manually ending all python processes that use the GPU, or alternatively, closing the existing terminal and running again in a new terminal window.

\n", "system": "" }, { "instruction": "In tensorflow what is the difference between tf.add and operator (+)?", "input": "", "output": "

There's no difference in precision between a+b and tf.add(a, b). The former translates to a.__add__(b) which gets mapped to tf.add by means of following line in math_ops.py

\n\n

_OverrideBinaryOperatorHelper(gen_math_ops.add, \"add\")

\n\n

The only difference is that the node name in the underlying Graph is add instead of Add. You can generally compare things by looking at the underlying Graph representation like this

\n\n
tf.reset_default_graph()\ndtype = tf.int32\na = tf.placeholder(dtype)\nb = tf.placeholder(dtype)\nc = a+b\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

You could also see this directly by inspecting the __add__ method. There's an extra level of indirection because it's a closure, but you can get the underlying function as follows

\n\n
real_function = tf.Tensor.__add__.im_func.func_closure[0].cell_contents\nprint(real_function.__module__ + \".\" + real_function.__name__)\nprint(tf.add.__module__ + \".\" + tf.add.__name__)\n
\n\n

And you'll see output below which means that they call same underlying function

\n\n
tensorflow.python.ops.gen_math_ops.add\ntensorflow.python.ops.gen_math_ops.add\n
\n\n

You can see from tf.Tensor.OVERLOADABLE_OPERATORS that following Python special methods are potentially overloaded by appropriate TensorFlow versions

\n\n
{'__abs__',\n '__add__',\n '__and__',\n '__div__',\n '__floordiv__',\n '__ge__',\n '__getitem__',\n '__gt__',\n '__invert__',\n '__le__',\n '__lt__',\n '__mod__',\n '__mul__',\n '__neg__',\n '__or__',\n '__pow__',\n '__radd__',\n '__rand__',\n '__rdiv__',\n '__rfloordiv__',\n '__rmod__',\n '__rmul__',\n '__ror__',\n '__rpow__',\n '__rsub__',\n '__rtruediv__',\n '__rxor__',\n '__sub__',\n '__truediv__',\n '__xor__'}\n
\n\n

Those methods are described in Python reference 3.3.7: emulating numeric types. Note that Python data model does not provide a way to overload assignment operator = so assignment always uses native Python implementation.

\n", "system": "" }, { "instruction": "How does reduce_sum() work in tensorflow?", "input": "", "output": "
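The dispatch mechanism itself is plain Python operator overloading. A toy sketch (not TensorFlow code) of how __add__ and an explicit add function end up calling the same underlying implementation:

```python
class Tensor:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # a + b dispatches here and forwards to the same add() below,
        # mirroring how TensorFlow maps Tensor.__add__ onto tf.add
        return add(self, other)

def add(a, b):
    # the single underlying implementation (like gen_math_ops.add)
    return Tensor(a.value + b.value)

print((Tensor(1) + Tensor(2)).value)  # same result as add(Tensor(1), Tensor(2))
```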

x has a shape of (2, 3) (two rows and three columns):

\n\n
1 1 1\n1 1 1\n
\n\n

By doing tf.reduce_sum(x, 0) the tensor is reduced along the first dimension (rows), so the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2].

\n\n

By doing tf.reduce_sum(x, 1) the tensor is reduced along the second dimension (columns), so the result is [1, 1] + [1, 1] + [1, 1] = [3, 3].

\n\n

By doing tf.reduce_sum(x, [0, 1]) the tensor is reduced along BOTH dimensions (rows and columns), so the result is 1 + 1 + 1 + 1 + 1 + 1 = 6 or, equivalently, [1, 1, 1] + [1, 1, 1] = [2, 2, 2], and then 2 + 2 + 2 = 6 (reduce along rows, then reduce the resulting array).

\n", "system": "" }, { "instruction": "TypeError: 'Tensor' object does not support item assignment in TensorFlow", "input": "", "output": "
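The three reductions above can be reproduced in plain Python on a nested list of the same shape:

```python
x = [[1, 1, 1],
     [1, 1, 1]]                         # shape (2, 3)

axis0 = [sum(col) for col in zip(*x)]   # reduce rows:    [2, 2, 2]
axis1 = [sum(row) for row in x]         # reduce columns: [3, 3]
total = sum(sum(row) for row in x)      # reduce both:    6

print(axis0, axis1, total)
```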

In general, a TensorFlow tensor object is not assignable, so you cannot use it on the left-hand side of an assignment.

\n

The easiest way to do what you're trying to do is to build a Python list of tensors, and tf.stack() them together at the end of the loop:

\n
outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state,\n                          sequence_length=real_length)\n\noutput_list = []\n\ntensor_shape = outputs.get_shape()\nfor step_index in range(tensor_shape[0]):\n    word_index = self.x[:, step_index]\n    word_index = tf.reshape(word_index, [-1,1])\n    index_weight = tf.gather(word_weight, word_index)\n    output_list.append(tf.mul(outputs[step_index, :, :] , index_weight))\n\noutputs = tf.stack(output_list)\n
\n
\n

\u00a0* The exception is tf.Variable objects, via their Variable.assign() etc. methods. However, rnn.rnn() likely returns a tf.Tensor object that does not support these methods.

\n", "system": "" }, { "instruction": "TensorFlow: training on my own image", "input": "", "output": "

If you are interested in how to input your own data in TensorFlow, you can look at this tutorial.
\nI've also written a guide with best practices for CS230 at Stanford here.

\n\n
\n\n

New answer (with tf.data) and with labels

\n\n

With the introduction of tf.data in r1.4, we can create a batch of images without placeholders and without queues. The steps are the following:

\n\n
    \n
  1. Create a list containing the filenames of the images and a corresponding list of labels
  2. Create a tf.data.Dataset reading these filenames and labels
  3. Preprocess the data
  4. Create an iterator from the tf.data.Dataset which will yield the next batch
\n\n

The code is:

\n\n
# step 1\nfilenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])\nlabels = tf.constant([0, 1, 0, 1])\n\n# step 2: create a dataset returning slices of `filenames`\ndataset = tf.data.Dataset.from_tensor_slices((filenames, labels))\n\n# step 3: parse every image in the dataset using `map`\ndef _parse_function(filename, label):\n    image_string = tf.read_file(filename)\n    image_decoded = tf.image.decode_jpeg(image_string, channels=3)\n    image = tf.cast(image_decoded, tf.float32)\n    return image, label\n\ndataset = dataset.map(_parse_function)\ndataset = dataset.batch(2)\n\n# step 4: create iterator and final input tensor\niterator = dataset.make_one_shot_iterator()\nimages, labels = iterator.get_next()\n
\n\n

Now we can run sess.run([images, labels]) directly, without feeding any data through placeholders.

\n\n
\n\n

Old answer (with TensorFlow queues)

\n\n

To sum it up you have multiple steps:

\n\n
    \n
  1. Create a list of filenames (ex: the paths to your images)
  2. Create a TensorFlow filename queue
  3. Read and decode each image, resize them to a fixed size (necessary for batching)
  4. Output a batch of these images
\n\n
\n\n

The simplest code would be:\n

\n\n
# step 1\nfilenames = ['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg']\n\n# step 2\nfilename_queue = tf.train.string_input_producer(filenames)\n\n# step 3: read, decode and resize images\nreader = tf.WholeFileReader()\nfilename, content = reader.read(filename_queue)\nimage = tf.image.decode_jpeg(content, channels=3)\nimage = tf.cast(image, tf.float32)\nresized_image = tf.image.resize_images(image, [224, 224])\n\n# step 4: Batching\nimage_batch = tf.train.batch([resized_image], batch_size=8)\n
\n", "system": "" }, { "instruction": "Best way to flatten a 2D tensor containing a vector in TensorFlow?", "input": "", "output": "

Both tf.reshape(w, [-1]) and tf.squeeze(w) are \"cheap\" in that they operate only on the metadata (i.e. the shape) of the given tensor, and don't modify the data itself. Of the two, tf.reshape() has slightly simpler logic internally, but the performance of the two should be indistinguishable.

\n", "system": "" }, { "instruction": "Tensorflow: Using Adam optimizer", "input": "", "output": "

The AdamOptimizer class creates additional variables, called \"slots\", to hold values for the \"m\" and \"v\" accumulators.

\n\n

See the source here if you're curious, it's actually quite readable:\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/adam.py#L39 . Other optimizers, such as Momentum and Adagrad use slots too.

\n\n

These variables must be initialized before you can train a model.

\n\n

The normal way to initialize variables is to call tf.initialize_all_variables() which adds ops to initialize the variables present in the graph when it is called.

\n\n

(Aside: unlike its name suggests, initialize_all_variables() does not initialize anything; it only adds ops that will initialize the variables when run.)

\n\n

What you must do is call initialize_all_variables() after you have added the optimizer:

\n\n
...build your model...\n# Add the optimizer\ntrain_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\n# Add the ops to initialize variables.  These will include \n# the optimizer slots added by AdamOptimizer().\ninit_op = tf.initialize_all_variables()\n\n# launch the graph in a session\nsess = tf.Session()\n# Actually intialize the variables\nsess.run(init_op)\n# now train your model\nfor ...:\n  sess.run(train_op)\n
\n", "system": "" }, { "instruction": "How to use stop_gradient in Tensorflow", "input": "", "output": "

tf.stop_gradient provides a way to not compute gradient with respect to some variables during back-propagation.

\n

For example, in the code below, we have three variables, w1, w2, w3 and input x. The loss is square(x.dot(w1) - x.dot(w2 * w3)). We want to minimize this loss with respect to w1 but want to keep w2 and w3 fixed. To achieve this we can just put tf.stop_gradient(tf.matmul(x, w2*w3)).

\n

In the figure below, I plotted how w1, w2, and w3 evolve from their initial values as a function of training iterations. It can be seen that w2 and w3 remain fixed while w1 changes until it becomes equal to w2 * w3.

\n

An image showing that w1 only learns but not w2 and w3:

\n

\"An

\n
import tensorflow as tf\nimport numpy as np\n\nw1 = tf.get_variable("w1", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nw2 = tf.get_variable("w2", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nw3 = tf.get_variable("w3", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nx = tf.placeholder(tf.float32, shape=[None, 5], name="x")\n\n\na1 = tf.matmul(x, w1)\na2 = tf.matmul(x, w2*w3)\na2 = tf.stop_gradient(a2)\nloss = tf.reduce_mean(tf.square(a1 - a2))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\ngradients = optimizer.compute_gradients(loss)\ntrain_op = optimizer.apply_gradients(gradients)\n
\n", "system": "" }, { "instruction": "AttributeError: module 'numpy' has no attribute 'typeDict'", "input": "", "output": "

I was trying to use the package pyensembl and ran into this same issue. I was able to work around it for now with

\n
pip install numpy==1.21\n
\n

Which should suffice until some of these less active packages are able to update to the new API.

\n", "system": "" }, { "instruction": "cannot import name '_registerMatType' from 'cv2.cv2'", "input": "", "output": "

The same thing occurred to me yesterday when I used Colab. A possible reason may be that the version of opencv-python (4.1.2.30) does not match that of opencv-python-headless (4.5.5.62). Or the latest version 4.5.5 may have something wrong with it...

\n

I uninstalled opencv-python-headless==4.5.5.62, installed 4.1.2.30, and that fixed it.

\n", "system": "" }, { "instruction": "Could not load dynamic library 'libnvinfer.so.6'", "input": "", "output": "

This is a warning, not an error. You can still use TensorFlow. The shared libraries libnvinfer and libnvinfer_plugin are optional and required only if you are using NVIDIA's TensorRT capabilities.

\n

To suppress this and all other warnings, set the environment variable TF_CPP_MIN_LOG_LEVEL="2".

\n", "system": "" }, { "instruction": "How to get allocated GPU spec in Google Colab", "input": "", "output": "

Since you can run bash command in colab, just run !nvidia-smi:\n\"enter

\n", "system": "" }, { "instruction": "Tensorboard not found as magic function in jupyter", "input": "", "output": "

UPDATE

\n

For newer TF versions (tensorflow>=1.14.0 & tensorflow != 2.0.0a0 - newer than TF2.0-alpha) load the extension like this

\n
%load_ext tensorboard\n
\n

OLD ANSWER

\n

The extension needs to be loaded first:

\n
%load_ext tensorboard.notebook\n%tensorboard --logdir {logs_base_dir}\n
\n", "system": "" }, { "instruction": "Tensorflow Custom TFLite java.lang.NullPointerException: Cannot allocate memory for the interpreter", "input": "", "output": "

This is due to the computer you are using. Make sure you have enough RAM available for the program, or try increasing your swap file size.

\n", "system": "" }, { "instruction": "what is the difference between Flatten() and GlobalAveragePooling2D() in keras", "input": "", "output": "

The fact that both seem to work doesn't mean they do the same thing.

\n\n

Flatten will take a tensor of any shape and transform it into a one dimensional tensor (plus the samples dimension) but keeping all values in the tensor. For example a tensor (samples, 10, 20, 1) will be flattened to (samples, 10 * 20 * 1).

\n\n

GlobalAveragePooling2D does something different. It applies average pooling on the spatial dimensions until each spatial dimension is one, and leaves other dimensions unchanged. In this case values are not kept as they are averaged. For example a tensor (samples, 10, 20, 1) would be output as (samples, 1, 1, 1), assuming the 2nd and 3rd dimensions were spatial (channels last).

\n", "system": "" }, { "instruction": "TensorFlow: "Attempting to use uninitialized value" in variable initialization", "input": "", "output": "
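The difference is easy to see on a tiny channels-last example in plain Python (a (1, 2, 2, 3) nested list standing in for a tensor, one sample stripped off):

```python
# one sample: height 2, width 2, 3 channels
sample = [[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]

# Flatten: keep every value -> vector of length 2*2*3 = 12
flat = [v for row in sample for pixel in row for v in pixel]

# GlobalAveragePooling2D: average over H and W -> one value per channel
pixels = [pixel for row in sample for pixel in row]
gap = [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]

print(len(flat), gap)
```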

Run this:

\n\n
init = tf.global_variables_initializer()\nsess.run(init)\n
\n\n

Or (depending on the version of TF that you have):

\n\n
init = tf.initialize_all_variables()\nsess.run(init)\n
\n", "system": "" }, { "instruction": "Cannot dlopen some GPU libraries. Skipping registering GPU devices", "input": "", "output": "

Before you do anything more drastic, maybe you just need to set environment variables CUDNN_PATH and/or LD_LIBRARY_PATH.

\n

Check with:

\n
echo $CUDNN_PATH       # should exist and give a good path\nls $CUDNN_PATH         # should contain stuff like a lib subdir with libcudnn .so files\necho $LD_LIBRARY_PATH  # should exist and contain CUDNN_PATH\n
\n

If you need changes, I set them with:

\n
export CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))\nexport LD_LIBRARY_PATH=${CUDNN_PATH}/lib\n
\n

But there are other ways, like @User_Rebo's:

\n
export CUDNN_PATH="$HOME/.local/lib/python3.10/site-packages/nvidia/cudnn"\nexport LD_LIBRARY_PATH="$CUDNN_PATH/lib":"/usr/local/cuda/lib64"\n
\n

Some people set these in their .bashrc; that's an easy way to not forget, but I personally prefer to set them in each session I want to use them.

\n", "system": "" }, { "instruction": "How to replace (or insert) intermediate layer in Keras model?", "input": "", "output": "

The following function allows you to insert a new layer before, after or to replace each layer in the original model whose name matches a regular expression, including non-sequential models such as DenseNet or ResNet.

\n\n
import re\nfrom keras.models import Model\n\ndef insert_layer_nonseq(model, layer_regex, insert_layer_factory,\n                        insert_layer_name=None, position='after'):\n\n    # Auxiliary dictionary to describe the network graph\n    network_dict = {'input_layers_of': {}, 'new_output_tensor_of': {}}\n\n    # Set the input layers of each layer\n    for layer in model.layers:\n        for node in layer._outbound_nodes:\n            layer_name = node.outbound_layer.name\n            if layer_name not in network_dict['input_layers_of']:\n                network_dict['input_layers_of'].update(\n                        {layer_name: [layer.name]})\n            else:\n                network_dict['input_layers_of'][layer_name].append(layer.name)\n\n    # Set the output tensor of the input layer\n    network_dict['new_output_tensor_of'].update(\n            {model.layers[0].name: model.input})\n\n    # Iterate over all layers after the input\n    model_outputs = []\n    for layer in model.layers[1:]:\n\n        # Determine input tensors\n        layer_input = [network_dict['new_output_tensor_of'][layer_aux] \n                for layer_aux in network_dict['input_layers_of'][layer.name]]\n        if len(layer_input) == 1:\n            layer_input = layer_input[0]\n\n        # Insert layer if name matches the regular expression\n        if re.match(layer_regex, layer.name):\n            if position == 'replace':\n                x = layer_input\n            elif position == 'after':\n                x = layer(layer_input)\n            elif position == 'before':\n                pass\n            else:\n                raise ValueError('position must be: before, after or replace')\n\n            new_layer = insert_layer_factory()\n            if insert_layer_name:\n                new_layer.name = insert_layer_name\n            else:\n                new_layer.name = '{}_{}'.format(layer.name, \n                                                new_layer.name)\n         
   x = new_layer(x)\n            print('New layer: {} Old layer: {} Type: {}'.format(new_layer.name,\n                                                            layer.name, position))\n            if position == 'before':\n                x = layer(x)\n        else:\n            x = layer(layer_input)\n\n        # Set new output tensor (the original one, or the one of the inserted\n        # layer)\n        network_dict['new_output_tensor_of'].update({layer.name: x})\n\n        # Save tensor in output list if it is output in initial model\n        if layer.name in model.output_names:\n            model_outputs.append(x)\n\n    return Model(inputs=model.inputs, outputs=model_outputs)\n\n
\n\n

The difference with respect to the simpler case of a purely sequential model is that before iterating over the layers to find the key layer, you first parse the graph and store the input layers of each layer in an auxiliary dictionary. Then, as you iterate over the layers, you also store the new output tensor of each layer, which is used to determine the input layers of each layer, when building the new model.

\n\n

A use case would be the following, where a Dropout layer is inserted after each activation layer of ResNet50:

\n\n
from keras.applications.resnet50 import ResNet50\nfrom keras.models import load_model\n\nmodel = ResNet50()\ndef dropout_layer_factory():\n    return Dropout(rate=0.2, name='dropout')\nmodel = insert_layer_nonseq(model, '.*activation.*', dropout_layer_factory)\n\n# Fix possible problems with new model\nmodel.save('temp.h5')\nmodel = load_model('temp.h5')\n\nmodel.summary()\n
\n", "system": "" }, { "instruction": "ImportError: libcublas.so.9.0: cannot open shared object file", "input": "", "output": "

You will need to update your LD_LIBRARY_PATH so that it points to /usr/local/cuda-9.0/lib64.\nAdd the following line to your .bashrc file (or the startup file of whichever shell you use):

\n\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.0/lib64/\n
\n", "system": "" }, { "instruction": "Tensorflow not running on GPU", "input": "", "output": "

To check which devices are available to TensorFlow you can use this and see if the GPU cards are available:

\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n
\n

More info

\n

There are also C++ logs available controlled by the TF_CPP_MIN_VLOG_LEVEL env variable, e.g.:

\n
import os\nos.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"\n
\n

should allow them to be printed when running import tensorflow as tf.

\n

You should see this kind of logs if you use GPU-enabled tensorflow with proper access to the GPU machine:

\n
successfully opened CUDA library libcublas.so.*.* locally\nsuccessfully opened CUDA library libcudnn.so.*.*  locally\nsuccessfully opened CUDA library libcufft.so.*.*  locally\n
\n

On the other hand, if there are no CUDA libraries in the system / container, you will see:

\n
Could not find cuda drivers on your machine, GPU will not be used.\n
\n

and where CUDA is installed but no GPU is physically available, TF will import cleanly and only error later, when you run device_lib.list_local_devices(), with this:

\n
failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\n\n
\n", "system": "" }, { "instruction": "What is the mathematics behind the "smoothing" parameter in TensorBoard's scalar graphs?", "input": "", "output": "

ORIGINAL ANSWER

\n

It is called an exponential moving average; below is a code explanation of how it is computed.

\n

Assuming all the real scalar values are in a list called scalars the smoothing is applied as follows:

\n
def smooth(scalars: List[float], weight: float) -> List[float]:  # Weight between 0 and 1\n    last = scalars[0]  # First value in the plot (first timestep)\n    smoothed = list()\n    for point in scalars:\n        smoothed_val = last * weight + (1 - weight) * point  # Calculate smoothed value\n        smoothed.append(smoothed_val)                        # Save it\n        last = smoothed_val                                  # Anchor the last smoothed value\n        \n    return smoothed\n
\n

UPDATED ANSWER

\n

As @SaPropper correctly pointed out, TensorBoard now includes the debiasing factor.

\n", "system": "" }, { "instruction": "Convert between NHWC and NCHW in TensorFlow", "input": "", "output": "
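Running smooth on a toy series makes the effect concrete (a self-contained copy of the function above, for illustration):

```python
def smooth(scalars, weight):
    # exponential moving average, as in the answer above
    last = scalars[0]
    smoothed = []
    for point in scalars:
        smoothed_val = last * weight + (1 - weight) * point
        smoothed.append(smoothed_val)
        last = smoothed_val
    return smoothed

print(smooth([0.0, 10.0, 10.0], 0.5))  # [0.0, 5.0, 7.5]
```

A jump in the raw data (0 to 10) only approaches the new level gradually in the smoothed curve, which is exactly what TensorBoard's slider controls via weight.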

All you need to do is a permutation of the dimensions from NHWC to NCHW (or the contrary).

\n

The meaning of each letter might help understand:

\n\n
\n

From NHWC to NCHW

\n

The image shape is (N, H, W, C) and we want the output to have shape (N, C, H, W). Therefore we need to apply tf.transpose with a well chosen permutation perm.

\n
\n

The returned tensor's dimension i will correspond to the input dimension perm[i]

\n
\n
perm[0] = 0  # output dimension 0 will be 'N', which was dimension 0 in the input\nperm[1] = 3  # output dimension 1 will be 'C', which was dimension 3 in the input\nperm[2] = 1  # output dimension 2 will be 'H', which was dimension 1 in the input\nperm[3] = 2  # output dimension 3 will be 'W', which was dimension 2 in the input\n
\n

In practice:

\n
images_nhwc = tf.placeholder(tf.float32, [None, 200, 300, 3])  # input batch\nout = tf.transpose(images_nhwc, [0, 3, 1, 2])\nprint(out.get_shape())  # the shape of out is [None, 3, 200, 300]\n
\n
\n

From NCHW to NHWC

\n

The image shape is (N, C, H, W) and we want the output to have shape (N, H, W, C). Therefore we need to apply tf.transpose with a well chosen permutation perm.

\n
\n

The returned tensor's dimension i will correspond to the input dimension perm[i]

\n
\n
perm[0] = 0  # output dimension 0 will be 'N', which was dimension 0 in the input\nperm[1] = 2  # output dimension 1 will be 'H', which was dimension 2 in the input\nperm[2] = 3  # output dimension 2 will be 'W', which was dimension 3 in the input\nperm[3] = 1  # output dimension 3 will be 'C', which was dimension 1 in the input\n
\n

In practice:

\n
images_nchw = tf.placeholder(tf.float32, [None, 3, 200, 300])  # input batch\nout = tf.transpose(images_nchw, [0, 2, 3, 1])\nprint(out.get_shape())  # the shape of out is [None, 200, 300, 3]\n
\n", "system": "" }, { "instruction": "Tensorflow: How to get a tensor by name?", "input": "", "output": "
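The same permutation can be written out in plain Python on a tiny (1, 2, 2, 3) nested list, which makes the index shuffle explicit (tf.transpose does this far more efficiently):

```python
N, H, W, C = 1, 2, 2, 3
nhwc = [[[[0, 1, 2], [3, 4, 5]],
         [[6, 7, 8], [9, 10, 11]]]]          # indexed as nhwc[n][h][w][c]

# perm = [0, 3, 1, 2]: output[n][c][h][w] = input[n][h][w][c]
nchw = [[[[nhwc[n][h][w][c] for w in range(W)]
          for h in range(H)]
         for c in range(C)]
        for n in range(N)]

print(nchw[0][0])  # channel-0 plane: [[0, 3], [6, 9]]
```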

There is a function tf.Graph.get_tensor_by_name(). For instance:

\n\n
import tensorflow as tf\n\nc = tf.constant([[1.0, 2.0], [3.0, 4.0]])\nd = tf.constant([[1.0, 1.0], [0.0, 1.0]])\ne = tf.matmul(c, d, name='example')\n\nwith tf.Session() as sess:\n    test =  sess.run(e)\n    print e.name #example:0\n    test = tf.get_default_graph().get_tensor_by_name(\"example:0\")\n    print test #Tensor(\"example:0\", shape=(2, 2), dtype=float32)\n
\n", "system": "" }, { "instruction": "How to install TensorFlow on Windows?", "input": "", "output": "
\n

How to install TensorFlow and to use it under Windows?

\n
\n\n

Updated on 8/4/16

\n\n

Windows 10 now has a Ubuntu Bash environment, AKA Bash on Ubuntu on Windows, available as a standard option (as opposed to Insider Preview updates for developers). (StackOverflow tag wsl) This option came with the Windows 10 anniversary update (Version 1607) released on 8/2/2016. This allows the use of apt-get to install software packages such as Python and TensorFlow.

\n\n

Note: Bash on Ubuntu on Windows does not have access to the GPU, so all of the GPU options for installing TensorFlow will not work.

\n\n

The dated installation instructions for Bash on Ubuntu on Windows are basically correct, but only these steps are necessary:
\nPrerequisites
\nEnable the Windows Subsystem for Linux feature (GUI)
\nReboot when prompted
\nRun Bash on Windows

\n\n

Steps no longer needed:
\nTurn on Developer Mode
\nEnable the Windows Subsystem for Linux feature (command-line)

\n\n

Then install TensorFlow using apt-get

\n\n
sudo apt-get install python3-pip python3-dev\nsudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl \n
\n\n

and now test TensorFlow

\n\n
$ python3\n...\n>>> import tensorflow as tf\n>>> hello = tf.constant('Hello, TensorFlow!')\n>>> sess = tf.Session()\n>>> print(sess.run(hello))\nHello, TensorFlow!\n>>> a = tf.constant(10)\n>>> b = tf.constant(32)\n>>> print(sess.run(a + b))\n42\n>>> exit()\n
\n\n

and run an actual neural network

\n\n
python3 -m tensorflow.models.image.mnist.convolutional\n
\n\n

Earlier Answer

\n\n

After learning about the developer preview of Bash on Windows.

\n\n

See Playing with TensorFlow on Windows by Scott Hanselman which uses Bash on Windows 10

\n\n

Original Answer

\n\n

Bazel is the problem

\n\n

TensorFlow is not made with build automation tools such as make, but with Google's in-house build tool Bazel. Bazel only works on systems based on Unix such as Linux and OS X.

\n\n

Since the current published/known means to build TensorFlow uses Bazel and Bazel does not work on Windows, one can not install or run TensorFlow natively on Windows.

\n\n

From Bazel FAQ

\n\n
\n

What about Windows?

\n \n

Due to its UNIX heritage, porting Bazel to Windows is significant\n work. For example, Bazel uses symlinks extensively, which has varying\n levels of support across Windows versions.

\n \n

We are currently actively working on improving Windows support, but\n it's still ways from being usable.

\n
\n\n

Status

\n\n

See: TensorFlow issue #17
\nSee: Bazel issue #276

\n\n

Solutions

\n\n

The solutions are listed in order of complexity and work needed, ranging from about an hour to approaches that may not even work.

\n\n
    \n
  1. Docker
    \n~ 1 hour
  2. \n
\n\n

Docker installation

\n\n

Docker is a system to build self contained versions of a Linux operating system running on your machine. When you install and run TensorFlow via Docker it completely isolates the installation from pre-existing packages on your machine.

\n\n

Also look at TensorFlow - which Docker image to use?

\n\n
    \n
  1. OS X
    \n~ 1 hour
  2. \n
\n\n

If you have a current Mac running OS X then see: Installation for Mac OS X

\n\n
    \n
  1. Linux
  2. \n
\n\n

The recommended Linux system tends to be Ubuntu 14.04 LTS (Download page).

\n\n

a. Virtual Machine - Hardware Virtualization - Full Virtualization
\n ~ 3 hours

\n\n

Download and install a virtual machine such as the commercial VMware or the free Virtual Box, after which you can install Linux and then install TensorFlow.

\n\n

When you go to install TensorFlow you will be using Pip - Python's package management system. Visual Studio users should think NuGet. The packages are known as wheels.

\n\n

See: Pip Installation

\n\n

If you need to build from the source then see: Installing From Sources
\n~ 4 hours

\n\n

Note: If you plan on using a Virtual Machine and have never done so before, consider using the Docker option instead, since Docker is the Virtual Machine, OS and TensorFlow all packaged together.

\n\n

b. Dual boot
\n ~ 3 hours

\n\n

If you want to run TensorFlow on the same machine that you have Windows and make use of the GPU version then you will most likely have to use this option as running on a hosted Virtual Machine, Type 2 hypervisor, will not allow you access to the GPU.

\n\n
    \n
  1. Remote machine
    \n~ 4 hours
  2. \n
\n\n

If you have remote access to another machine that you can install the Linux OS and TensorFlow software on and allow remote connections to, then you can use your Windows machine to present the remote machine as an application running on Windows.

\n\n
    \n
  1. Cloud Service
    \nI have no experience with this. Please edit answer if you know.
  2. \n
\n\n

Cloud services such as AWS are being used.

\n\n

From TensorFlow Features

\n\n
\n

Want to run the model as a service in the cloud?\n Containerize with Docker and TensorFlow just works.

\n
\n\n

From Docker

\n\n
\n

Running Docker on AWS provides a highly reliable, low-cost way to\n quickly build, ship, and run distributed applications at scale. Deploy\n Docker using AMIs from the AWS Marketplace.

\n
\n\n
    \n
  1. Wait for Bazel to work on Windows.
  2. \n
\n\n

Currently it appears the only holdup is Bazel; however, Bazel's roadmap lists Windows support as expected this year.

\n\n

There are two features listed for Windows:

\n\n
2016\u201102  Bazel can bootstrap itself on Windows without requiring admin privileges.  \n\n2016\u201112  Full Windows support for Android: Android feature set is identical for Windows and Linux/OS X.\n
\n\n
    \n
  1. Build TensorFlow by hand.
    \nA few days or more depending on your skill level. I gave up on this one; too many subprojects to build and files to locate.
  2. \n
\n\n

Remember that Bazel is only used to build TensorFlow. If you get the commands Bazel runs and the correct source code and libraries you should be able to build TensorFlow on Windows. See: How do I get the commands executed by Bazel.

\n\n

While I have not researched this more, you can look at the continuous integration info for needed files and info on how they build it for testing. (Readme) (site)

\n\n
    \n
  1. Build Bazel on Windows
    \nA few days or more depending on your skill level. I gave up on this one also; I could not find the necessary source files needed for Windows.
  2. \n
\n\n

There is a public experimental source code version of Bazel that boots on Windows. You may be able to leverage this into getting Bazel to work on Windows, etc.

\n\n

Also these solutions require the use of Cygwin or MinGW which adds another layer of complexity.

\n\n
    \n
  1. Use alternative build system such as Make
    \nIf you get this one to work I would like to see it on GitHub.
  2. \n
\n\n

This currently does not exist for TensorFlow. It is a feature request.

\n\n

See: TensorFlow issue 380

\n\n
    \n
  1. Cross Build
    \nIf you get this one to work I would like to see it on GitHub.
  2. \n
\n\n

You build TensorFlow on Linux using Bazel but change the build process to output a wheel that can be installed on Windows. This will require detailed knowledge of Bazel to change the configuration, and locating the source code and libraries that work with Windows. An option I would only suggest as a last resort. It may not even be possible.

\n\n
    \n
  1. Run on the new Windows Subsystem for Linux.
  2. \n
\n\n

See: Windows Subsystem for Linux Overview

\n\n

You will know as much as I do by reading the referenced article.

\n\n
\n

Can I use Bazel for Windows for production use?

\n
\n\n

Since it is experimental software I would not use on a production machine.

\n\n

Remember that you only need Bazel to build TensorFlow. So use the experimental code on a non production machine to build the wheel, then install the wheel on a production machine. See: Pip Installation

\n\n

TLDR;

\n\n

Currently I have several versions for learning. Most use VMware Workstation 7.1 to host Ubuntu 14.04 LTS, Ubuntu 15, or Debian. I also have one dual boot of Ubuntu 14.04 LTS on my Windows machine to access the GPU, as the machine with VMware does not have the proper GPU. I would recommend that you give these machines at least 8 GB of memory, either as RAM or as RAM and swap space, as I have run out of memory a few times.

\n", "system": "" }, { "instruction": "FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated", "input": "", "output": "

This might or might not be your case, but the same warning is also spit out from h5py package:

\n\n
\n

/home/user/bin/conda3/lib/python3.6/site-packages/h5py/__init__.py:34:\n FutureWarning: Conversion of the second argument of issubdtype from\n float to np.floating is deprecated. In future, it will be treated\n as np.float64 == np.dtype(float).type. from ._conv import\n register_converters as _register_converters

\n
\n\n

For anyone coming here with this problem, it is a known h5py issue, introduced with numpy 1.14. As stated by the devs:

\n\n
\n

You can ignore the warning, it's not going to cause any issues at the\n moment, but you should upgrade to the next release of h5py when it\n becomes available.

\n
\n\n

... so it's harmless. The fix has just been merged to master. But until the update is released, the workaround is to downgrade numpy to a previous version:

\n\n
pip install numpy==1.13.0\n
\n\n

Update: h5py has released the RC build with the fix. The following command should do it:

\n\n
pip install h5py==2.8.0rc1\n
\n\n

Update (FINAL): there's a full-fledged release now. So you can simply run:

\n\n
pip install --upgrade h5py\n
\n", "system": "" }, { "instruction": "How to downgrade tensorflow, multiple versions possible?", "input": "", "output": "

Pip allows you to specify the version:

\n\n

pip install tensorflow==1.1

\n", "system": "" }, { "instruction": "Prevent TensorFlow from accessing the GPU?", "input": "", "output": "

Have a look at this question or this answer.

\n\n

To summarise you can add this piece of code:

\n\n
import os\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\nimport tensorflow as tf\n
\n\n

Playing with the CUDA_VISIBLE_DEVICES environment variable is one of, if not the, way to go whenever you have the GPU build of TensorFlow installed and you don't want to use any GPUs.

\n\n
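To confirm that the GPU is actually hidden, you can list the devices TensorFlow sees. With CUDA_VISIBLE_DEVICES set to -1 before TensorFlow is imported, only CPU devices should appear:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must be set before importing TF

import tensorflow as tf
from tensorflow.python.client import device_lib

# Only CPU (and possibly XLA_CPU) devices should be listed now.
device_types = [d.device_type for d in device_lib.list_local_devices()]
assert "GPU" not in device_types
```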
\n

You want to either export CUDA_VISIBLE_DEVICES= or alternatively use a virtualenv with a non-GPU installation of TensorFlow.

\n
\n", "system": "" }, { "instruction": "No module named tensorflow in jupyter", "input": "", "output": "

If you installed a TensorFlow as it said in official documentation: https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview

\n\n

That is, you created an environment called tensorflow and tested your installation in Python, but TensorFlow cannot be imported in Jupyter. In that case you have to install Jupyter in your tensorflow environment too:

\n\n
conda install jupyter notebook\n
\n\n

After that, run Jupyter, and it will be able to import TensorFlow too:

\n\n
jupyter notebook\n
\n", "system": "" }, { "instruction": "How to get the dimensions of a tensor (in TensorFlow) at graph construction time?", "input": "", "output": "

I see most people confused about tf.shape(tensor) vs. tensor.get_shape().\nLet's make it clear:

\n\n
    \n
  1. tf.shape
  2. \n
\n\n

tf.shape is used for dynamic shapes. If your tensor's shape is changeable, use it.\nAn example: the input is an image with changeable width and height, and we want to resize it to half of its size; then we can write something like:
\nnew_height = tf.shape(image)[0] / 2

\n\n
    \n
  1. tensor.get_shape
  2. \n
\n\n

tensor.get_shape is used for fixed shapes, which means the tensor's shape can be deduced in the graph.

\n\n

Conclusion:\ntf.shape can be used almost anywhere, but tensor.get_shape only for shapes that can be deduced from the graph.

\n", "system": "" }, { "instruction": "How to convert numpy arrays to standard TensorFlow format?", "input": "", "output": "

You can use tf.convert_to_tensor():

\n\n
import tensorflow as tf\nimport numpy as np\n\ndata = [[1,2,3],[4,5,6]]\ndata_np = np.asarray(data, np.float32)\n\ndata_tf = tf.convert_to_tensor(data_np, np.float32)\n\nsess = tf.InteractiveSession()  \nprint(data_tf.eval())\n\nsess.close()\n
\n\n

Here's a link to the documentation for this method:

\n\n

https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor

\n", "system": "" }, { "instruction": "How to write a custom loss function in Tensorflow?", "input": "", "output": "

We need to write down the loss function. For example, we can use basic mean square error as our loss function for predicted y and target y_:

\n
 loss_mse = (1/n) * Sum((y - y_)^2)\n
\n

There are basic functions for tensors like tf.add(x,y), tf.sub(x,y), tf.square(x), tf.reduce_sum(x), etc.

\n

Then we can define our loss function in Tensorflow like:

\n
cost = tf.reduce_mean(tf.square(tf.sub(y,y_)))\n
\n

Note: y and y_ are tensors.

\n

Moreover, we can define any other loss functions if we can write down the equations. For some training operators (minimizers), the loss function should satisfy some conditions (smooth, differentiable ...).

\n

In short, TensorFlow defines arrays, constants and variables as tensors, defines calculations using tf functions, and uses a session to run through the graph. We can define whatever we like and run it in the end.

\n", "system": "" }, { "instruction": "Count number of "True" values in boolean Tensor", "input": "", "output": "

You can cast the values to floats and compute the sum on them:\ntf.reduce_sum(tf.cast(myOtherTensor, tf.float32))

\n\n

Depending on your actual use case you can also compute sums per row/column if you specify the reduce dimensions of the call.

\n", "system": "" }, { "instruction": "module 'tensorflow' has no attribute 'logging'", "input": "", "output": "

tf.logging was for Logging and Summary Operations, and in TF 2.0 it has been removed in favor of the open-source absl-py, in order to keep the main tf.* namespace limited to functions that will be used more often.

\n\n

In TF 2, lesser-used functions are gone or moved into sub-packages like tf.math.

\n\n

So instead of tf.logging you could:

\n\n\n", "system": "" }, { "instruction": "Can't save custom subclassed model", "input": "", "output": "
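One replacement is to use absl-py's own logging module directly (a sketch; absl-py ships as a TensorFlow dependency, so it should already be installed):

```python
from absl import logging

logging.set_verbosity(logging.INFO)
logging.info('this replaces tf.logging.info')
logging.warning('and this replaces tf.logging.warn')
```

During migration, tf.compat.v1.logging also remains available as a drop-in for old code.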

TensorFlow 2.2

\n\n

Thanks to @cal for letting me know that the new TensorFlow supports saving custom models!

\n\n
\n

By using model.save to save the whole model and by using load_model to restore previously stored subclassed model. The following code snippets describe how to implement them.

\n
\n\n
class ThreeLayerMLP(keras.Model):\n\n  def __init__(self, name=None):\n    super(ThreeLayerMLP, self).__init__(name=name)\n    self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')\n    self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')\n    self.pred_layer = layers.Dense(10, name='predictions')\n\n  def call(self, inputs):\n    x = self.dense_1(inputs)\n    x = self.dense_2(x)\n    return self.pred_layer(x)\n\ndef get_model():\n  return ThreeLayerMLP(name='3_layer_mlp')\n\nmodel = get_model()\n# Save the model\nmodel.save('path_to_my_model',save_format='tf')\n\n# Recreate the exact same model purely from the file\nnew_model = keras.models.load_model('path_to_my_model')\n
\n\n

See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models

\n\n

TensorFlow 2.0

\n\n

TL;DR:

\n\n
    \n
  1. do not use model.save() for custom subclass keras model;
  2. \n
  3. use save_weights() and load_weights() instead.
  4. \n
\n\n
\n\n

With the help of the Tensorflow Team, it turns out the best practice of saving a Custom Sub-Class Keras Model is to save its weights and load it back when needed.

\n\n

The reason that we can not simply save a Keras custom subclass model is that it contains custom codes, which can not be serialized safely. However, the weights can be saved/loaded when we have the same model structure and custom codes without any problem.

\n\n

There is a great tutorial written by François Chollet, the author of Keras, on how to save/load Sequential/Functional/Keras/Custom Sub-Class Models in TensorFlow 2.0, in Colab here. In the Saving Subclassed Models section, it says:

\n\n
\n

Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.

\n \n

A subclassed model differs in that it's not a datastructure, it's a\n piece of code. The architecture of the model is defined via the body\n of the call method. This means that the architecture of the model\n cannot be safely serialized. To load a model, you'll need to have\n access to the code that created it (the code of the model subclass).\n Alternatively, you could be serializing this code as bytecode (e.g.\n via pickling), but that's unsafe and generally not portable.

\n
\n", "system": "" }, { "instruction": "Does ImageDataGenerator add more images to my dataset?", "input": "", "output": "

Short answer: 1) All the original images are just transformed (i.e. rotation, zooming, etc.) every epoch and then used for training, and 2) [Therefore] the number of images in each epoch is equal to the number of original images you have.

\n

Long answer: In each epoch, the ImageDataGenerator applies a transformation on the images you have and use the transformed images for training. The set of transformations includes rotation, zooming, etc. By doing this you're somehow creating new data (i.e. also called data augmentation), but obviously the generated images are not totally different from the original ones. This way the learned model may be more robust and accurate as it is trained on different variations of the same image.

\n

You need to set the steps_per_epoch argument of fit method to n_samples / batch_size, where n_samples is the total number of training data you have (i.e. 1000 in your case). This way in each epoch, each training sample is augmented only one time and therefore 1000 transformed images will be generated in each epoch.

\n
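As a sketch of the setup described above (the augmentation parameters and the commented-out training call are illustrative assumptions, not from the question):

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=20,
                             zoom_range=0.15,
                             horizontal_flip=True)

n_samples, batch_size = 1000, 20
steps_per_epoch = n_samples // batch_size  # 50 steps x 20 images = 1000 per epoch

# assuming a compiled `model` and arrays `x_train`, `y_train`:
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
#                     steps_per_epoch=steps_per_epoch, epochs=5)
```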

Further, I think it's worth clarifying the meaning of "augmentation" in this context: basically we are augmenting the images when we use ImageDataGenerator and enabling its augmentation capabilities. But the word "augmentation" here does not mean, say, if we have 100 original training images we end up having 1000 images per epoch after augmentation (i.e. the number of training images does not increase per epoch). Instead, it means we use a different transformation of each image in each epoch; hence, if we train our model for, say, 5 epochs, we have used 5 different versions of each original image in training (or 100 * 5 = 500 different images in the whole training, instead of using just the 100 original images in the whole training). To put it differently, the total number of unique images increases in the whole training from start to finish, and not per epoch.

\n", "system": "" }, { "instruction": "Make a custom loss function in keras", "input": "", "output": "

There are two steps in implementing a parameterized custom loss function in Keras. First, writing a method for the coefficient/metric. Second, writing a wrapper function to format things the way Keras needs them to be.

\n\n
    \n
  1. It's actually quite a bit cleaner to use the Keras backend instead of tensorflow directly for simple custom loss functions like DICE. Here's an example of the coefficient implemented that way:

    \n\n
    import keras.backend as K\ndef dice_coef(y_true, y_pred, smooth, thresh):\n    # cast the thresholded predictions back to floats so they can be multiplied\n    y_pred = K.cast(y_pred > thresh, 'float32')\n    y_true_f = K.flatten(y_true)\n    y_pred_f = K.flatten(y_pred)\n    intersection = K.sum(y_true_f * y_pred_f)\n\n    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)\n
  2. \n
  3. Now for the tricky part. Keras loss functions must only take (y_true, y_pred) as parameters. So we need a separate function that returns another function.

    \n\n
    def dice_loss(smooth, thresh):\n  def dice(y_true, y_pred):\n    return -dice_coef(y_true, y_pred, smooth, thresh)\n  return dice\n
  4. \n
\n\n

Finally, you can use it as follows in Keras compile.

\n\n
# build model \nmodel = my_model()\n# get the loss function\nmodel_dice = dice_loss(smooth=1e-5, thresh=0.5)\n# compile model\nmodel.compile(loss=model_dice)\n
\n", "system": "" }, { "instruction": "Multiple outputs in Keras", "input": "", "output": "
from keras.models import Model\nfrom keras.layers import *    \n\n#inp is a \"tensor\", that can be passed when calling other layers to produce an output \ninp = Input((10,)) #supposing you have ten numeric values as input \n\n\n#here, SomeLayer() is defining a layer, \n#and calling it with (inp) produces the output tensor x\nx = SomeLayer(blablabla)(inp) \nx = SomeOtherLayer(blablabla)(x) #here, I just replace x, because this intermediate output is not interesting to keep\n\n\n#here, I want to keep the two different outputs for defining the model\n#notice that both left and right are called with the same input x, creating a fork\nout1 = LeftSideLastLayer(balbalba)(x)    \nout2 = RightSideLastLayer(banblabala)(x)\n\n\n#here, you define which path you will follow in the graph you've drawn with layers\n#notice the two outputs passed in a list, telling the model I want it to have two outputs.\nmodel = Model(inp, [out1,out2])\nmodel.compile(optimizer = ...., loss = ....) #loss can be one for both sides or a list with different loss functions for out1 and out2    \n\nmodel.fit(inputData,[outputYLeft, outputYRight], epochs=..., batch_size=...)\n
\n", "system": "" }, { "instruction": "Keras - stateful vs stateless LSTMs", "input": "", "output": "

I recommend you to firstly learn the concepts of BPTT (Back Propagation Through Time) and mini-batch SGD(Stochastic Gradient Descent), then you'll have further understandings of LSTM's training procedure.

\n\n

For your questions,

\n\n

Q1. In stateless cases, LSTM updates parameters on batch1 and then initializes hidden states and cell states (usually all zeros) for batch2, while in stateful cases, it uses batch1's last output hidden states and cell states as initial states for batch2.

\n\n

Q2. As you can see above, when two sequences in two batches have connections (e.g. prices of one stock), you'd better use stateful mode, else (e.g. one sequence represents a complete sentence) you should use stateless mode.

\n\n

BTW, @vu.pham said if we use stateful RNN, then in production, the network is forced to deal with infinitely long sequences. This doesn't seem correct; actually, as you can see in Q1, the LSTM WON'T learn on the whole sequence: it first learns the sequence in batch1, updates parameters, and then learns the sequence in batch2.

\n", "system": "" }, { "instruction": "Tensorflow - matmul of input matrix with batch data", "input": "", "output": "
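For reference, a stateful LSTM in Keras must be given a fixed batch size via batch_input_shape, and the carried-over states are reset manually (a sketch; layer sizes are arbitrary):

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

batch_size, timesteps, features = 32, 10, 8

model = Sequential()
# stateful=True keeps the last hidden/cell states of batch i
# as the initial states for batch i+1
model.add(LSTM(64, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

# when the carried-over state should no longer apply (e.g. at a new epoch):
model.reset_states()
```

When training in stateful mode, batches should also be fed in order (shuffle=False), otherwise the carried-over states are meaningless.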

Previous answers are obsolete. Currently tf.matmul() supports tensors with rank > 2:

\n\n
\n

The inputs must be matrices (or tensors of rank > 2, representing\n batches of matrices), with matching inner dimensions, possibly after\n transposition.

\n
\n\n

Also tf.batch_matmul() was removed and tf.matmul() is the right way to do batch multiplication. The main idea can be understood from the following code:

\n\n
import tensorflow as tf\nbatch_size, n, m, k = 10, 3, 5, 2\nA = tf.Variable(tf.random_normal(shape=(batch_size, n, m)))\nB = tf.Variable(tf.random_normal(shape=(batch_size, m, k)))\ntf.matmul(A, B)\n
\n\n

Now you will receive a tensor of the shape (batch_size, n, k). Here is what is going on here. Assume you have batch_size of matrices nxm and batch_size of matrices mxk. Now for each pair of them you calculate nxm X mxk which gives you an nxk matrix. You will have batch_size of them.

\n\n

Notice that something like this is also valid:

\n\n
A = tf.Variable(tf.random_normal(shape=(a, b, n, m)))\nB = tf.Variable(tf.random_normal(shape=(a, b, m, k)))\ntf.matmul(A, B)\n
\n\n

and will give you a shape (a, b, n, k)

\n", "system": "" }, { "instruction": "How do I swap tensor's axes in TensorFlow?", "input": "", "output": "

tf.transpose provides the same functionality as np.swapaxes, although in a more generalized form. In your case, you can do tf.transpose(orig_tensor, [1, 0, 2]) which would be equivalent to np.swapaxes(orig_np_array, 0, 1).

\n", "system": "" }, { "instruction": "How do I get the gradient of the loss at a TensorFlow variable?", "input": "", "output": "
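In general, np.swapaxes(x, a, b) corresponds to tf.transpose with the identity permutation in which axes a and b are exchanged; the mapping can be computed (and checked with NumPy) like this:

```python
import numpy as np

def swap_perm(ndim, a, b):
    # build the tf.transpose `perm` equivalent to np.swapaxes(x, a, b)
    perm = list(range(ndim))
    perm[a], perm[b] = perm[b], perm[a]
    return perm

x = np.arange(24).reshape(2, 3, 4)
perm = swap_perm(x.ndim, 0, 1)   # [1, 0, 2]
assert (np.transpose(x, perm) == np.swapaxes(x, 0, 1)).all()
```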

The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. Consider the following simple example:

\n\n
data = tf.placeholder(tf.float32)\nvar = tf.Variable(...)              # Must be a tf.float32 or tf.float64 variable.\nloss = some_function_of(var, data)  # some_function_of() returns a `Tensor`.\n\nvar_grad = tf.gradients(loss, [var])[0]\n
\n\n

You can then use this symbolic gradient to evaluate the gradient in some specific point (data):

\n\n
sess = tf.Session()\n\nvar_grad_val = sess.run(var_grad, feed_dict={data: ...})\n
\n", "system": "" }, { "instruction": "In TensorFlow is there any way to just initialize uninitialised variables?", "input": "", "output": "

There is no elegant* way to enumerate the uninitialized variables in a graph. However, if you have access to the new variable objects—let's call them v_6, v_7, and v_8—you can selectively initialize them using tf.initialize_variables():

\n\n
init_new_vars_op = tf.initialize_variables([v_6, v_7, v_8])\nsess.run(init_new_vars_op)\n
\n\n
\n\n

* A process of trial and error could be used to identify the uninitialized variables, as follows:

\n\n
uninitialized_vars = []\nfor var in tf.all_variables():\n    try:\n        sess.run(var)\n    except tf.errors.FailedPreconditionError:\n        uninitialized_vars.append(var)\n\ninit_new_vars_op = tf.initialize_variables(uninitialized_vars)\n# ...\n
\n\n

...however, I would not condone such behavior :-).

\n", "system": "" }, { "instruction": ""zsh: illegal hardware instruction python" when installing Tensorflow on macbook pro M1", "input": "", "output": "

This worked for me after trying a bunch of solutions to no avail.

\n

Step 1 Using pyenv, install Python version 3.8.5 and set it as your default Python version. This tutorial (https://realpython.com/intro-to-pyenv/) is helpful for\ngetting pyenv configured properly.

\n

Step 1.1 Use this post (https://github.com/pyenv/pyenv/issues/1446) if you have trouble running pyenv in zsh.

\n

Step 1.2 Once you have Python version 3.8.5 running, which you can check by running python -V, you should see:

\n
Python 3.8.5\n
\n

Step 2 Install virtualenv via pip install virtualenv

\n

Step 2.1 Create a virtual environment by running virtualenv ENV

\n

Step 2.2 Activate that virtual environment by running source ENV/bin/activate

\n

Step 3 Install the tensorflow wheel called tensorflow-2.4.1-py3-none-any.whl located at this public google drive link https://drive.google.com/drive/folders/1oSipZLnoeQB0Awz8U68KYeCPsULy_dQ7

\n

Step 3.1 Assuming you simply installed the wheel to downloads run\npip install ~/Downloads/tensorflow-2.4.1-py3-none-any.whl in your activated virtual environment

\n

Step 4 Type python, which will bring up >>> in your terminal, and type

\n
>>> import tensorflow\n>>>\n
\n

If there is no 'zsh: illegal hardware instruction' error, you should be good to go.

\n

Note: If you are using anaconda, the above will also work. You can skip the virtual env steps (assuming you have a virtual env activated through Conda) and just go straight to the pip install as mentioned above (steps 3 and later).

\n", "system": "" }, { "instruction": "Xcode version must be specified to use an Apple CROSSTOOL", "input": "", "output": "
bazel clean --expunge \nsudo xcode-select -s /Applications/Xcode.app/Contents/Developer\nsudo xcodebuild -license\nbazel clean --expunge \nbazel build --config=opt //tensorflow/tools/pip_package:build_pip_package\n
\n", "system": "" }, { "instruction": "could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR", "input": "", "output": "

In Tensorflow 2.0, my issue was resolved by setting the memory growth. ConfigProto is deprecated in TF 2.0, I used tf.config.experimental. My computer specs are:

\n\n\n\n

The code I used was:

\n\n
physical_devices = tf.config.experimental.list_physical_devices('GPU')\nassert len(physical_devices) > 0, \"Not enough GPU hardware devices available\"\nconfig = tf.config.experimental.set_memory_growth(physical_devices[0], True)\n
\n", "system": "" }, { "instruction": "Keras + Tensorflow and Multiprocessing in Python", "input": "", "output": "
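The same pattern generalizes to machines with several GPUs; looping over all physical GPU devices is also harmless when none are present:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # must run before any GPU has been initialized
    tf.config.experimental.set_memory_growth(gpu, True)

print(len(gpus), "GPU(s) configured for memory growth")
```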

In my experience, the problem lies in loading Keras into one process and then spawning a new process after Keras has been loaded into your main environment. But for some applications (e.g. training a mixture of Keras models) it's simply better to have all of these things in one process. So what I advise is the following (a little bit cumbersome, but working for me) approach:

\n\n
    \n
  1. DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT. If you want to load Keras / Theano / TensorFlow do it only in the function environment. E.g. don't do this:

    \n\n
    import keras\n\ndef training_function(...):\n    ...\n
    \n\n

    but do the following:

    \n\n
    def training_function(...):\n    import keras\n    ...\n
  2. \n
  3. Run work connected with each model in a separate process: I'm usually creating workers which do the job (e.g. training, tuning, scoring) and running them in separate processes. What is nice about it is that all the memory used by such a process is completely freed when the process is done. This helps with the memory problems you usually come across when using multiprocessing or even running multiple models in one process. So this looks e.g. like this:

    \n\n
    def _training_worker(train_params):\n    import keras\n    model = obtain_model(train_params)\n    model.fit(train_params)\n    send_message_to_main_process(...)\n\ndef train_new_model(train_params):\n    training_process = multiprocessing.Process(target=_training_worker, args = train_params)\n    training_process.start()\n    get_message_from_training_process(...)\n    training_process.join()\n
  4. \n
\n\n

A different approach is simply preparing different scripts for different model actions. But this may cause memory errors, especially when your models are memory-consuming. NOTE that for this reason it's better to make your execution strictly sequential.

\n", "system": "" }, { "instruction": "Using sparse matrices with Keras and Tensorflow", "input": "", "output": "

Sorry, don't have the reputation to comment, but I think you should take a look at the answer here: Keras, sparse matrix issue. I have tried it and it works correctly, just one note though, at least in my case, the shuffling led to really bad results, so I used this slightly modified non-shuffled alternative:

\n\n
def nn_batch_generator(X_data, y_data, batch_size):\n    samples_per_epoch = X_data.shape[0]\n    number_of_batches = samples_per_epoch/batch_size\n    counter=0\n    index = np.arange(np.shape(y_data)[0])\n    while 1:\n        index_batch = index[batch_size*counter:batch_size*(counter+1)]\n        X_batch = X_data[index_batch,:].todense()\n        y_batch = y_data[index_batch]\n        counter += 1\n        yield np.array(X_batch),y_batch\n        if (counter > number_of_batches):\n            counter=0\n
\n\n

It produces comparable accuracies to the ones achieved by keras's shuffled implementation (setting shuffle=True in fit).

\n", "system": "" }, { "instruction": "Is there an example on how to generate protobuf files holding trained TensorFlow graphs", "input": "", "output": "

EDIT: The freeze_graph.py script, which is part of the TensorFlow repository, now serves as a tool that generates a protocol buffer representing a \"frozen\" trained model, from an existing TensorFlow GraphDef and a saved checkpoint. It uses the same steps as described below, but is much easier to use.

\n\n
\n\n

Currently the process isn't very well documented (and subject to refinement), but the approximate steps are as follows:

\n\n
    \n
  1. Build and train your model as a tf.Graph called g_1.
  2. \n
  3. Fetch the final values of each of the variables and store them as numpy arrays (using Session.run()).
  4. \n
  5. In a new tf.Graph called g_2, create tf.constant() tensors for each of the variables, using the value of the corresponding numpy array fetched in step 2.
  6. \n
  7. Use tf.import_graph_def() to copy nodes from g_1 into g_2, and use the input_map argument to replace each variable in g_1 with the corresponding tf.constant() tensors created in step 3. You may also want to use input_map to specify a new input tensor (e.g. replacing an input pipeline with a tf.placeholder()). Use the return_elements argument to specify the name of the predicted output tensor.

  8. \n
  9. Call g_2.as_graph_def() to get a protocol buffer representation of the graph.

  10. \n
\n\n

(NOTE: The generated graph will have extra nodes in the graph for training. Although it is not part of the public API, you may wish to use the internal graph_util.extract_sub_graph() function to strip these nodes from the graph.)

\n", "system": "" }, { "instruction": "what is XLA_GPU and XLA_CPU for tensorflow", "input": "", "output": "

As mentioned in the docs, XLA stands for \"accelerated linear algebra\". It's Tensorflow's relatively new optimizing compiler that can further speed up your ML models' GPU operations by combining what used to be multiple CUDA kernels into one (simplifying because this isn't that important for your question).

\n\n

To your question, my understanding is that XLA is separate enough from the default Tensorflow compiler that they separately register GPU devices and have slightly different constraints on which GPUs they treat as visible (see here for more on this). Looking at the output of the command you ran, it looks like XLA is registering 1 GPU and normal TF is registering 3.

\n\n

I'm not sure if you're having issues or are just curious, but if it's the former, I recommend taking a look at the issue I linked above and this one. Tensorflow is finicky about which CUDA/cuDNN versions it works with flawlessly, and it's possible you're using incompatible versions. (If you're not having issues, then hopefully the first part of my answer is sufficient.)

\n", "system": "" }, { "instruction": "ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss", "input": "", "output": "

The error here is from tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits).

\n\n

The TensorFlow documentation clearly states that \"labels vector must provide a single specific index for the true class for each row of logits\". So your labels vector must include only class-indices like 0,1,2 and not their respective one-hot-encodings like [1,0,0], [0,1,0], [0,0,1].

\n\n

Reproducing the error to explain further:

\n\n
import numpy as np\nimport tensorflow as tf\n\n# Create random-array and assign as logits tensor\nnp.random.seed(12345)\nlogits = tf.convert_to_tensor(np.random.sample((4,4)))\nprint(logits.get_shape()) # [4,4]\n\n# Create random-labels (Assuming only 4 classes)\nlabels = tf.convert_to_tensor(np.array([2, 2, 0, 1]))\n\nloss_1 = tf.losses.sparse_softmax_cross_entropy(labels, logits)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\nprint('Loss: {}'.format(sess.run(loss_1))) # 1.44836854\n\n# Now giving one-hot-encodings in place of class-indices for labels\nwrong_labels = tf.convert_to_tensor(np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0],[0,1,0,0]]))\nloss_2 = tf.losses.sparse_softmax_cross_entropy(wrong_labels, logits)\n\n# This should give you a similar error as soon as you define it\n
\n\n

So try giving class-indices instead of one-hot encodings in your Y_Labels vector.\nHope this clears your doubt.

\n", "system": "" }, { "instruction": "How to understand loss acc val_loss val_acc in Keras model fitting", "input": "", "output": "

Answering your questions:

\n
    \n
  1. As described in the official Keras FAQ
  2. \n
\n
\n

the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.

\n
\n
    \n
  1. Training should be stopped when val_acc stops increasing, otherwise your model will probably overfit. You can use the EarlyStopping callback to stop training.

    \n
  2. \n
  3. Your model seems to achieve very good results. Keep up the good work.

    \n
  4. \n
\n", "system": "" }, { "instruction": "Tensorflow serving No versions of servable <MODEL> found under base path", "input": "", "output": "

I had the same problem. The reason is that the object detection API does not assign a version to your model when exporting your detection model. However, tensorflow serving requires you to assign a version number to your detection model, so that you can choose different versions of your models to serve. In your case, you should put your detection model (.pb file and variables folder) under the folder:\n/serving/ssd_frozen/1/. This way, you assign your model to version 1, and tensorflow serving will automatically load it since you only have this one version. By default tensorflow serving automatically serves the latest version (i.e., the one with the largest version number).

\n\n

Note: after you have created the <code>1/</code> folder, <code>model_base_path</code> still needs to be set to <code>--model_base_path=/serving/ssd_frozen/</code>.

\n", "system": "" }, { "instruction": "How to interpret increase in both loss and accuracy", "input": "", "output": "

The loss decreases as the training process goes on, except for some fluctuation introduced by the mini-batch gradient descent and/or regularization techniques like dropout (that introduces random noise).

\n\n

If the loss decreases, the training process is going well.

\n\n

The (validation, I suppose) accuracy, instead, is a measure of how good the predictions of your model are.

\n\n

If the model is learning, the accuracy increases. If the model is overfitting, instead, the accuracy stops increasing and can even start to decrease.

\n\n

If the loss decreases and the accuracy decreases, your model is overfitting.

\n\n

If the loss increases and the accuracy increases too, it's because your regularization techniques are working well and you're fighting the overfitting problem. This is true only if the loss then starts to decrease whilst the accuracy continues to increase.\nOtherwise, if the loss keeps growing, your model is diverging and you should look for the cause (usually you're using too high a learning rate).

\n", "system": "" }, { "instruction": "TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'", "input": "", "output": "

image has a shape of (64,64,3).

\n

Your input placeholder _x have a shape of (?,64,64,3).

\n

The problem is that you're feeding the placeholder with a value of a different shape.

\n

You have to feed it with a value of (1,64,64,3) = a batch of 1 image.

\n

Just reshape your image value to a batch with size one.

\n
image = array(img).reshape(1,64,64,3)\n
\n
\n

P.S: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel.\nYou can try to read more than 1 image (N images) and then build a batch of N images, using a tensor with shape (N,64,64,3)

\n", "system": "" }, { "instruction": "Get CUDA_HOME environment path PYTORCH", "input": "", "output": "

I had a similar problem when I needed NVCC for flash attention, but it seems that torch ships a reduced set of the CUDA libraries. Installing the toolkit from conda-forge resolved the issue for me:

\n
conda install -c conda-forge cudatoolkit-dev -y\n
\n", "system": "" }, { "instruction": "AttributeError: 'collections.OrderedDict' object has no attribute 'eval'", "input": "", "output": "

It is not a model file, instead, this is a state file. In a model file, the complete model is stored, whereas in a state file only the parameters are stored.
\nSo, your OrderedDict holds just the values for your model. You will need to create the model and then load these values into it. So, the process will be something of the form:

\n\n
import torch\nimport torch.nn as nn\n\nclass TempModel(nn.Module):\n    def __init__(self):\n        super().__init__()  # required, otherwise module registration fails\n        self.conv1 = nn.Conv2d(3, 5, (3, 3))\n    def forward(self, inp):\n        return self.conv1(inp)\n\nmodel = TempModel()\nmodel.load_state_dict(torch.load(file_path))\nmodel.eval()\n
\n\n

You'll need to define your model properly. The one given in the example above is just a dummy. If you construct your model yourself, you might need to update the keys of the saved dict file as mentioned here. The best course of action is to define your model in exactly the same way from when the state_dict was saved and then directly executing model.load_state_dict will work.

\n", "system": "" }, { "instruction": "KL Divergence for two probability distributions in PyTorch", "input": "", "output": "

Yes, PyTorch has a method named kl_div under torch.nn.functional to directly compute KL-divergence between tensors. Suppose you have tensors a and b of the same shape, both holding probability distributions. Note that kl_div expects its first argument in log-space, so you can use the following code:

\n\n
import torch.nn.functional as F\n# kl_div expects log-probabilities as the first argument,\n# so take the log of a (assumed to hold probabilities)\nout = F.kl_div(a.log(), b)\n
\n\n

For more details, see the above method documentation.

\n", "system": "" }, { "instruction": "What is volatile variable in Pytorch", "input": "", "output": "

Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation, in order to conserve memory.

\n\n

From the docs:

\n\n
\n

Volatile is recommended for purely inference mode, when you\u2019re sure\n you won\u2019t be even calling .backward(). It\u2019s more efficient than any\n other autograd setting - it will use the absolute minimal amount of\n memory to evaluate the model. volatile also determines that\n requires_grad is False.

\n
\n\n

Edit: The volatile keyword has been deprecated as of pytorch version 0.4.0

\n", "system": "" }, { "instruction": "How do you alter the size of a Pytorch Dataset?", "input": "", "output": "

You can use torch.utils.data.Subset() e.g. for the first 10,000 elements:

\n
import torch\nimport torch.utils.data as data_utils\n\nindices = torch.arange(10000)\ntr_10k = data_utils.Subset(tr, indices)\n
\n", "system": "" }, { "instruction": "How to solve the famous `unhandled cuda error, NCCL version 2.7.8` error?", "input": "", "output": "

I had the right cuda installed, meaning:

\n
python -c "import torch;print(torch.version.cuda)"\n\n#was equal to \n\nnvcc -V\n
\n

and

\n
ldconfig -v | grep "libnccl.so" | tail -n1 | sed -r 's/^.*\\.so\\.//' \n
\n

was printing some version of nccl (e.g., 2.10.3)

\n

The fix was to remove nccl:

\n
sudo apt remove libnccl2 libnccl-dev\n
\n

then the libnccl version check no longer reported any version, but ddp training was working fine!

\n", "system": "" }, { "instruction": "PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))?", "input": "", "output": "

There is no difference between the two.
\nEarly versions of pytorch had .cuda() and .cpu() methods to move tensors and models from cpu to gpu and back. However, this made code writing a bit cumbersome:

\n
if cuda_available:\n  x = x.cuda()\n  model.cuda()\nelse:\n  x = x.cpu()\n  model.cpu()\n
\n

Later versions introduced .to() that basically takes care of everything in an elegant way:

\n
device = torch.device('cuda') if cuda_available else torch.device('cpu')\nx = x.to(device)\nmodel = model.to(device)\n
\n", "system": "" }, { "instruction": "Pytorch: IndexError: index out of range in self. How to solve?", "input": "", "output": "

I think you have a mismatch between the input dimension declared in torch.nn.Embedding and your actual input. torch.nn.Embedding is a simple lookup table that stores embeddings of a fixed dictionary and size.

\n

Any index less than zero, or greater than or equal to the declared input dimension, raises this error (in the given example with torch.tensor([10]), 10 is equal to input_dim).\nCompare your input with the dimension declared in torch.nn.Embedding.

\n

Attached code snippet to simulate the issue.

\n
import torch\nfrom torch import nn\n\ninput_dim = 10\nembedding_dim = 2\nembedding = nn.Embedding(input_dim, embedding_dim)\nerr = True\nif err:\n    #Any input more than input_dim - 1, here input_dim = 10\n    #Any input less than zero\n    input_to_embed = torch.tensor([10])\nelse:\n    input_to_embed = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\nembed = embedding(input_to_embed)\nprint(embed)\n
\n

Hope this will solve your issue.

\n", "system": "" }, { "instruction": "How to modify path where Torch Hub models are downloaded", "input": "", "output": "

From the official documentation, there are several ways to modify this path.
\nIn priority order:

\n
    \n
  1. Calling hub.set_dir(<PATH_TO_HUB_DIR>)

    \n
  2. \n
  3. $TORCH_HOME/hub, if environment variable TORCH_HOME is set.

    \n
  4. \n
  5. $XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.

    \n
  6. \n
  7. ~/.cache/torch/hub

    \n
  8. \n
\n

So I just had to do :

\n

export TORCH_HOME=/my/path/

\n
\n

Edit

\n

TORCH_HUB appear to be deprecated, use TORCH_HOME instead

\n", "system": "" }, { "instruction": "How to implement dropout in Pytorch, and where to apply it", "input": "", "output": "

The two examples you provided are exactly the same. self.drop_layer = nn.Dropout(p=p) and self.dropout = nn.Dropout(p) only differ because the authors assigned the layers to different variable names. The dropout layer is typically defined in the .__init__() method, and called in .forward(). Like this:

\n
class NeuralNet(nn.Module):\n    def __init__(self, input_size, hidden_size, num_classes, p=0.5):\n        super(NeuralNet, self).__init__()\n        self.fc1 = nn.Linear(input_size, hidden_size)\n        self.fc2 = nn.Linear(hidden_size, hidden_size)\n        self.fc3 = nn.Linear(hidden_size, num_classes)\n        self.dropout = nn.Dropout(p)\n\n    def forward(self, x):\n        out = F.relu(self.fc1(x))\n        out = F.relu(self.fc2(out))\n        out = self.dropout(self.fc3(out))\n        return out\n
\n

You can do the test:

\n
import torch\nimport torch.nn  as nn\n\nm = nn.Dropout(p=0.5)\ninput = torch.randn(20, 16)\nprint(torch.sum(torch.nonzero(input)))\nprint(torch.sum(torch.nonzero(m(input))))\n
\n
tensor(5440) # sum of the indices of nonzero entries\ntensor(2656) # smaller after dropout: fewer entries are nonzero\n
\n

Let's visualize it:

\n
import torch\nimport torch.nn as nn\ninput = torch.randn(5, 5)\nprint(input)\n
\n
tensor([[ 1.1404,  0.2102, -0.1237,  0.4240,  0.0174],\n        [-2.0872,  1.2790,  0.7804, -0.0962, -0.9730],\n        [ 0.4788, -1.3408,  0.0483,  2.4125, -1.2463],\n        [ 1.5761,  0.3592,  0.2302,  1.3980,  0.0154],\n        [-0.4308,  0.2484,  0.8584,  0.1689, -1.3607]])\n
\n

Now, let's apply the dropout:

\n
m = nn.Dropout(p=0.5)\noutput = m(input)\nprint(output)\n
\n
tensor([[ 0.0000,  0.0000, -0.0000,  0.8481,  0.0000],\n        [-0.0000,  0.0000,  1.5608, -0.0000, -1.9459],\n        [ 0.0000, -0.0000,  0.0000,  0.0000, -0.0000],\n        [ 0.0000,  0.7184,  0.4604,  2.7959,  0.0308],\n        [-0.0000,  0.0000,  0.0000,  0.0000, -0.0000]])\n
\n

Approximately half the values have been set to zero, because we had probability <code>p=0.5</code> that a neuron is set to zero! Note also that the surviving values are scaled by 1/(1-p) (here, doubled) so that the expected sum stays the same.

\n", "system": "" }, { "instruction": "Filter data in pytorch tensor", "input": "", "output": "

Take a look at torch.nonzero which is roughly equivalent to np.where. It translates a binary mask to indices:

\n
>>> X = torch.tensor([0.1, 0.5, -1.0, 0, 1.2, 0])\n>>> mask = X >= 0\n>>> mask\ntensor([1, 1, 0, 1, 1, 1], dtype=torch.uint8)\n\n>>> indices = torch.nonzero(mask)\n>>> indices\ntensor([[0],\n        [1],\n        [3],\n        [4],\n        [5]])\n\n>>> X[indices]\ntensor([[0.1000],\n        [0.5000],\n        [0.0000],\n        [1.2000],\n        [0.0000]])\n
\n

A solution would then be to write:

\n
mask = X >= 0\nnew_tensor = X[mask]\nindices = torch.nonzero(mask)\n
\n", "system": "" }, { "instruction": "What are the difference between .bin and .pt pytorch saved model types?", "input": "", "output": "

There is no difference: it's just an extension. On UNIX-like OSes one can open a file no matter its extension (see here); Windows, on the other hand, is built with extensions in mind (here).

\n

torch can read either .bin or .pt or .anything so it's probably convention employed by the creators of that repository.

\n

The standard approach is to use <code>.pt</code> or <code>.pth</code>, though the second extension collides with Python's path configuration files read by the interpreter, so <code>.pt</code> seems to be the best idea for now (see this github issue).

\n", "system": "" }, { "instruction": "How to convert a pytorch tensor of ints to a tensor of booleans?", "input": "", "output": "

What you're looking for is to generate a boolean mask for the given integer tensor. For this, you can simply check for the condition: "whether the values in the tensor are greater than 0" using simple comparison operator (>) or using torch.gt(), which would then give us the desired result.

\n
# input tensor\nIn [76]: t   \nOut[76]: tensor([ 0, 10,  0, 16])\n\n# generate the needed boolean mask\nIn [78]: t > 0      \nOut[78]: tensor([0, 1, 0, 1], dtype=torch.uint8)\n
\n
\n
# sanity check\nIn [93]: mask = t > 0      \n\nIn [94]: mask.type()      \nOut[94]: 'torch.ByteTensor'\n
\n
\n

Note: In PyTorch version 1.4+, the above operation would return 'torch.BoolTensor'

\n
In [9]: t > 0  \nOut[9]: tensor([False,  True, False,  True])\n\n# alternatively, use `torch.gt()` API\nIn [11]: torch.gt(t, 0)\nOut[11]: tensor([False,  True, False,  True])\n
\n

If you indeed want single bits (either 0s or 1s), cast it using:

\n
In [14]: (t > 0).type(torch.uint8)   \nOut[14]: tensor([0, 1, 0, 1], dtype=torch.uint8)\n\n# alternatively, use `torch.gt()` API\nIn [15]: torch.gt(t, 0).int()\nOut[15]: tensor([0, 1, 0, 1], dtype=torch.int32)\n
\n

The reason for this change has been discussed in this feature-request issue: issues/4764 - Introduce torch.BoolTensor ...

\n
\n

TL;DR: Simple one liner

\n
t.bool().int()\n
\n", "system": "" }, { "instruction": "How do you convert a .onnx to tflite?", "input": "", "output": "

I think the ONNX file, i.e. model.onnx, that you have given is corrupted. I don't know what the issue is, but it is not doing any inference on ONNX Runtime.

\n
\n

Now you can run PyTorch Models directly on mobile phones. check out PyTorch Mobile's documentation here

\n
\n
\n

This answer is for TensorFlow version 1,
\nFor TensorFlow version 2 or higher click link

\n
\n

The best way to convert the model from a protobuf FreezeGraph to TFLite is to use the official TensorFlow Lite converter documentation

\n

According to TensorFlow Docs, TocoConverter has been deprecated

\n
\n

This class (tf.compat.v1.lite.TocoConverter) has been deprecated. Please use lite.TFLiteConverter instead.

\n
\n

Convert from PyTorch to ONNX model

\n

The best practice when converting the model from PyTorch to ONNX is to add the following parameters to specify the names of the input and output layers of your model in the torch.onnx.export() function

\n
\n# Export the model from PyTorch to ONNX\ntorch_out = torch.onnx._export(model,             # model being run\n                                x,          # model input (or a tuple for multiple inputs)\n                                EXPORT_PATH + "mnist.onnx",      # where to save the model (can be a file or file-like object)\n                                export_params=True,       # store the trained parameter weights inside the model file\n                                input_names=['main_input'],     # specify the name of input layer in onnx model\n                                output_names=['main_output'])     # specify the name of input layer in onnx model\n\n
\n

So in your case:\nNow export this model to TensorFlow protobuf FreezeGraph using onnx-tf

\n
\n

Please note that this method only works when tensorflow_version < 2

\n
\n

Convert from ONNX to TensorFlow FreezeGraph

\n

To convert the model, please install onnx-tf version 1.5.0 with the command below

\n
pip install  onnx-tf==1.5.0\n
\n

Now to convert .onnx model to TensorFlow freeze graph run this below command in shell

\n
onnx-tf convert -i "mnist.onnx" -o  "mnist.pb"\n
\n

Convert from TensorFlow FreezeGraph .pb to TFLite

\n

Now, to convert this model from a .pb file to a tflite model, use this code

\n
import tensorflow as tf\n# make a converter object from the saved tensorflow file\nconverter = tf.lite.TFLiteConverter.from_frozen_graph('mnist.pb', #TensorFlow freezegraph .pb model file\n                                                      input_arrays=['main_input'], # name of input arrays as defined in torch.onnx.export function before.\n                                                      output_arrays=['main_output']  # name of output arrays defined in torch.onnx.export function before.\n                                                      )\n# tell converter which type of optimization techniques to use\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\n# to view the best option for optimization read documentation of tflite about optimization\n# go to this link https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional\n\n# convert the model \ntf_lite_model = converter.convert()\n# save the converted model \nopen('mnist.tflite', 'wb').write(tf_lite_model)\n
\n

To choose which optimization option is best for your model's use case, see this official guide about TensorFlow Lite optimization

\n

https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional

\n
\n

Note: You can try my Jupyter Notebook Convert ONNX model to Tensorflow Lite on Google Colaboratory link

\n
\n", "system": "" }, { "instruction": "How do you load MNIST images into Pytorch DataLoader?", "input": "", "output": "

Here's what I did for pytorch 0.4.1 (should still work in 1.3)

\n\n
def load_dataset():\n    data_path = 'data/train/'\n    train_dataset = torchvision.datasets.ImageFolder(\n        root=data_path,\n        transform=torchvision.transforms.ToTensor()\n    )\n    train_loader = torch.utils.data.DataLoader(\n        train_dataset,\n        batch_size=64,\n        num_workers=0,\n        shuffle=True\n    )\n    return train_loader\n\nfor batch_idx, (data, target) in enumerate(load_dataset()):\n    #train network\n
\n", "system": "" }, { "instruction": "Pytorch beginner : tensor.new method", "input": "", "output": "

As the documentation of tensor.new() says:

\n\n
\n

Constructs a new tensor of the same data type as self tensor.

\n
\n\n

Also note:

\n\n
\n

For CUDA tensors, this method will create new tensor on the same device as this tensor.

\n
\n", "system": "" }, { "instruction": "What's the reason of the error ValueError: Expected more than 1 value per channel?", "input": "", "output": "

It will fail on batches of size 1 if we use feature-wise batch normalization.

\n\n

As Batch normalization computes:

\n\n
y = (x - mean(x)) / (std(x) + eps)\n
\n\n

If we have one sample per batch then mean(x) = x, and the output will be entirely zero (ignoring the bias). We can't use that for learning...

\n", "system": "" }, { "instruction": "PyTorch equivalence for softmax_cross_entropy_with_logits", "input": "", "output": "
\n

is there an equivalent PyTorch loss function for TensorFlow's softmax_cross_entropy_with_logits?

\n
\n

torch.nn.functional.cross_entropy

\n

This takes logits as inputs (performing log_softmax internally). Here "logits" are just some values that are not probabilities (i.e. not necessarily in the interval [0,1]).

\n

But, logits are also the values that will be converted to probabilities.\nIf you consider the name of the tensorflow function, you will see it is a pleonasm (since the with_logits part assumes softmax will be called).

\n

In PyTorch the implementation looks like this:

\n
loss = F.cross_entropy(x, target)\n
\n

Which is equivalent to :

\n
lp = F.log_softmax(x, dim=-1)\nloss = F.nll_loss(lp, target)\n
\n

It is not F.binary_cross_entropy_with_logits because this function assumes multi label classification:

\n
F.sigmoid + F.binary_cross_entropy = F.binary_cross_entropy_with_logits\n
\n

It is not torch.nn.functional.nll_loss either because this function takes log-probabilities (after log_softmax()) not logits.

\n", "system": "" }, { "instruction": "Why does autograd not produce gradient for intermediate variables?", "input": "", "output": "
\n

By default, gradients are only retained for leaf variables. non-leaf variables' gradients are not retained to be inspected later. This was\n done by design, to save memory.

\n
\n\n

-soumith chintala

\n\n

See: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94

\n\n

Option 1:

\n\n

Call y.retain_grad()

\n\n
import torch\nfrom torch.autograd import Variable\n\nx = Variable(torch.Tensor([2]), requires_grad=True)\ny = x * x\nz = y * y\n\ny.retain_grad()\n\nz.backward()\n\nprint(y.grad)\n#Variable containing:\n# 8\n#[torch.FloatTensor of size 1]\n
\n\n

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/16

\n\n

Option 2:

\n\n

Register a hook, which is basically a function called when that gradient is calculated. Then you can save it, assign it, print it, whatever...

\n\n
from __future__ import print_function\nimport torch\nfrom torch.autograd import Variable\n\nx = Variable(torch.Tensor([2]), requires_grad=True)\ny = x * x\nz = y * y\n\ny.register_hook(print) ## this can be anything you need it to be\n\nz.backward()\n
\n\n

output:

\n\n
Variable containing:\n 8\n[torch.FloatTensor of size 1]\n
\n\n

Source: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/2

\n\n

Also see: https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7

\n", "system": "" }, { "instruction": "In PyTorch, what exactly does the grad_fn attribute store and how is it used?", "input": "", "output": "

grad_fn is a function "handle", giving access to the applicable gradient function. The gradient at the given point is a coefficient for adjusting weights during back-propagation.

\n

"Handle" is a general term for an object descriptor, designed to give appropriate access to the object. For instance, when you open a file, open returns a file handle. When you instantiate a class, the __init__ function returns a handle to the created instance. The handle contains references (usually memory addresses) to the data and functions for the item in question.

\n

It appears as the generic object class because it's from the underlying implementation in another language, such that it does not map exactly to the Python function type. PyTorch handles the inter-language call and return. This hand-off is part of the pre-compiled (shared-object) run-time system.

\n

Is that enough to clarify what you see?

\n", "system": "" }, { "instruction": "ImportError: libc10.so: cannot open shared object file: No such file or directory", "input": "", "output": "

libc10.so is made available by pytorch. Hence first

\n
import torch\n
\n

and then import packages that depend on pytorch.

\n", "system": "" }, { "instruction": "What is running loss in PyTorch and how is it calculated", "input": "", "output": "

It's because the loss given by CrossEntropy or other loss functions is divided by the number of elements i.e. the reduction parameter is mean by default.

\n
torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')\n
\n

Hence, loss.item() contains the loss of entire mini-batch, but divided by the batch size. That's why loss.item() is multiplied with batch size, given by inputs.size(0), while calculating running_loss.

\n", "system": "" }, { "instruction": "AttributeError: module 'torch' has no attribute '_six'. Bert model in Pytorch", "input": "", "output": "

I was getting this error when I was trying to load the CIFAR10 dataset. A simple restart didn't solve it! I found that for some (conda) environments it was working fine.

\n

Updating torch and torchvision helped:

\n
pip install --upgrade torch torchvision\n
\n", "system": "" }, { "instruction": "Autograd.grad() for Tensor in pytorch", "input": "", "output": "

\nLet's start from a simple working example with a plain loss function and regular backward. We will build a short computational graph and do some grad computations on it.

\n\n

Code:

\n\n
import torch\nfrom torch.autograd import grad\nimport torch.nn as nn\n\n\n# Create some dummy data.\nx = torch.ones(2, 2, requires_grad=True)\ngt = torch.ones_like(x) * 16 - 0.5  # \"ground-truths\" \n\n# We will use MSELoss as an example.\nloss_fn = nn.MSELoss()\n\n# Do some computations.\nv = x + 2\ny = v ** 2\n\n# Compute loss.\nloss = loss_fn(y, gt)\n\nprint(f'Loss: {loss}')\n\n# Now compute gradients:\nd_loss_dx = grad(outputs=loss, inputs=x)\nprint(f'dloss/dx:\\n {d_loss_dx}')\n
\n\n

Output:

\n\n
Loss: 42.25\ndloss/dx:\n(tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),)\n
\n\n

Ok, this works! Now let's try to reproduce the error \"grad can be implicitly created only for scalar outputs\". As you can notice, loss in the previous example is a scalar. backward() and grad() by default deal with a single scalar value: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values you will get an error.

\n\n

Code:

\n\n
v = x + 2\ny = v ** 2\n\ntry:\n    dy_hat_dx = grad(outputs=y, inputs=x)\nexcept RuntimeError as err:\n    print(err)\n
\n\n

Output:

\n\n

grad can be implicitly created only for scalar outputs

\n\n

Therefore, when using grad() you need to specify grad_outputs parameter as follows:

\n\n

Code:

\n\n
v = x + 2\ny = v ** 2\n\ndy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y))\nprint(f'dy/dx:\\n {dy_dx}')\n\ndv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v))\nprint(f'dv/dx:\\n {dv_dx}')\n
\n\n

Output:

\n\n
dy/dx:\n(tensor([[6., 6.],[6., 6.]]),)\n\ndv/dx:\n(tensor([[1., 1.], [1., 1.]]),)\n
\n\n

NOTE: If you are using backward() instead, simply do y.backward(torch.ones_like(y)).

\n", "system": "" }, { "instruction": "pytorch delete model from gpu", "input": "", "output": "

Freeing memory in PyTorch works as it does with the normal Python garbage collector. This means once all references to a Python object are gone it will be deleted.

\n\n

You can delete references by using the del operator:

\n\n
del model\n
\n\n

You have to make sure though that there is no reference to the respective object left, otherwise the memory won't be freed.

\n\n

So once you've deleted all references of your model, it should be deleted and the memory freed.

\n\n

If you want to learn more about memory management you can take a look here:\nhttps://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management

\n", "system": "" }, { "instruction": "AttributeError: module 'torch' has no attribute '_six'. Bert model in Pytorch", "input": "", "output": "

I was getting this error when I was trying to load the CIFAR10 dataset. A simple restart didn't solve it! I found that for some (conda) environments it was working fine.

\n

Updating torch and torchvision helped:

\n
pip install --upgrade torch torchvision\n
\n", "system": "" }, { "instruction": "Pytorch version for cuda 12.2", "input": "", "output": "

You can install the nightly build. Note you should have cudnn installed already, I am using cudnn v8.9.3. The 12.1 PyTorch version works fine with CUDA v12.2.2:

\n

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia

\n", "system": "" }, { "instruction": "PyTorch Error loading "\\lib\\site-packages\\torch\\lib\\shm.dll" or one of its dependencies", "input": "", "output": "

pip install torch==2.2.1 solved my problem. I additionally tried conda install cudatoolkit, but that alone didn't solve it.

\n", "system": "" }, { "instruction": "what the difference between att_mask and key_padding_mask in MultiHeadAttnetion", "input": "", "output": "

The key_padding_mask is used to mask out positions that are padding, i.e., positions after the end of the input sequence. It is always specific to the input batch and depends on how long the sequences in the batch are compared to the longest one. It is a 2D tensor of shape batch size \u00d7 input length.

\n
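A minimal runnable sketch of the key_padding_mask shape (assuming a recent PyTorch where nn.MultiheadAttention accepts batch_first=True; here the second sequence in the batch has two padded positions):

```python
import torch
import torch.nn as nn

batch, seq_len, dim, heads = 2, 5, 8, 2
mha = nn.MultiheadAttention(dim, heads, batch_first=True)

x = torch.randn(batch, seq_len, dim)

# True marks padding positions whose keys should be ignored
key_padding_mask = torch.tensor([
    [False, False, False, False, False],  # no padding
    [False, False, False, True,  True],   # last two steps are padding
])

out, attn = mha(x, x, x, key_padding_mask=key_padding_mask)
print(out.shape)   # torch.Size([2, 5, 8])
print(attn.shape)  # torch.Size([2, 5, 5]) -- batch x target x source
```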

On the other hand, attn_mask says which key-value pairs are valid. In a Transformer decoder, a triangular mask is used to simulate inference time and prevent attending to the "future" positions. This is what attn_mask is usually used for. If it is a 2D tensor, the shape is input length \u00d7 input length. You can also have a mask that is specific to every item in the batch. In that case, you can use a 3D tensor of shape (batch size \u00d7 num heads) \u00d7 input length \u00d7 input length. (So, in theory, you can simulate key_padding_mask with a 3D attn_mask.)

\n", "system": "" }, { "instruction": "How to add parameters in module class in pytorch custom model?", "input": "", "output": "

You need to register your parameters:

\n
self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3)))\n
\n
\n

Update:
\nIn more recent versions of PyTorch, you no longer need to call register_parameter explicitly; it's enough to set a member of your nn.Module to an nn.Parameter to "notify" PyTorch that this variable should be treated as a trainable parameter:

\n
self.bias = torch.nn.Parameter(torch.randn(3))\n
\n
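To check that the assignment was picked up, you can list the module's registered parameters (a small sketch with a made-up Affine module):

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    def __init__(self):
        super().__init__()
        # assigning an nn.Parameter registers it automatically
        self.bias = nn.Parameter(torch.randn(3))
        # a plain tensor attribute is NOT registered (and won't be trained)
        self.offset = torch.randn(3)

    def forward(self, x):
        return x + self.bias

model = Affine()
print([name for name, _ in model.named_parameters()])  # ['bias']
```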

Please note that if you want more complex data structures of parameters (e.g., lists), you should use dedicated containers like torch.nn.ParameterList or torch.nn.ParameterDict.

\n", "system": "" }, { "instruction": "RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]", "input": "", "output": "

The first dimension of the tensor is the number of color channels, so the error means that you are providing a grayscale image (1 channel) while the data loader expects an RGB image (3 channels). You defined a pil_loader function that returns an image in RGB, but you are never using it.

\n\n

So you have two options:

\n\n
    \n
  1. Work with the image in grayscale instead of RGB, which is computationally cheaper.\nSolution: In both the train and test transforms, replace transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) with transforms.Normalize((0.5,), (0.5,))
  2. Make sure your image is in RGB. I don't know how your images are stored, but I guess you downloaded the dataset in grayscale. One thing you could try is using the pil_loader function you defined: replace img = Image.open(os.path.join(self.root, self.elements[index].rstrip())) with img = pil_loader(os.path.join(self.root, self.elements[index].rstrip())) in your __getitem__ function.
\n\n
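To see the channel mismatch concretely, here is a small sketch: a grayscale tensor has a leading dimension of 1, and it can be expanded to three identical channels with repeat:

```python
import torch

gray = torch.rand(1, 224, 224)  # 1-channel (grayscale) image tensor
rgb = gray.repeat(3, 1, 1)      # copy the channel three times -> 3 channels

print(gray.shape)  # torch.Size([1, 224, 224])
print(rgb.shape)   # torch.Size([3, 224, 224])
```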

Let me know how it goes!

\n", "system": "" }, { "instruction": "Where is the source code of pytorch conv2d?", "input": "", "output": "

The functional code is implemented in C++. As of version 1.13.1 the entry point into the C++ code for conv2d is at aten/src/ATen/native/Convolution.cpp:804.

\n

If you are interested more generally in how functions are registered to the API then you can take a look at aten/src/ATen/native/README.md. A deeper dive will benefit from understanding some of the design decisions in PyTorch. For example, the dispatcher mechanism (see here). More general information can be found in the PyTorch developer's wiki, though keep in mind this wiki is primarily a tool for contributors and is not as polished as the Python API documentation. IMO a good starting point is the Core Frontend Onboarding page which gives links to most everything needed to get your head around the PyTorch source code.

\n", "system": "" }, { "instruction": "How does the __getitem__'s idx work within PyTorch's DataLoader?", "input": "", "output": "

What defines the idx is the sampler or batch_sampler, as you can see here (open-source projects are your friend). In this code (and comment/docstring) you can see the difference between sampler and batch_sampler. If you look here you'll see how the index is chosen:

\n\n
def __next__(self):\n    index = self._next_index()\n\n# and _next_index is implemented on the base class (_BaseDataLoaderIter)\ndef _next_index(self):\n    return next(self._sampler_iter)\n\n# self._sampler_iter is defined in the __init__ like this:\nself._sampler_iter = iter(self._index_sampler)\n\n# and self._index_sampler is a property implemented like this (modified to one-liner for simplicity):\nself._index_sampler = self.batch_sampler if self._auto_collation else self.sampler\n
\n\n

Note that this is the _SingleProcessDataLoaderIter implementation; you can find the _MultiProcessingDataLoaderIter here (of course, which one is used depends on the num_workers value, as you can see here). Going back to the samplers, assuming your Dataset is not _DatasetKind.Iterable and you are not providing a custom sampler, you are either using (dataloader.py#L212-L215):

\n\n
if shuffle:\n    sampler = RandomSampler(dataset)\nelse:\n    sampler = SequentialSampler(dataset)\n\nif batch_size is not None and batch_sampler is None:\n    # auto_collation without custom batch_sampler\n    batch_sampler = BatchSampler(sampler, batch_size, drop_last)\n
\n\n

Let's take a look at how the default BatchSampler builds a batch:

\n\n
def __iter__(self):\n    batch = []\n    for idx in self.sampler:\n        batch.append(idx)\n        if len(batch) == self.batch_size:\n            yield batch\n            batch = []\n    if len(batch) > 0 and not self.drop_last:\n        yield batch\n
\n\n

Very simple: it gets indices from the sampler until the desired batch_size is reached.

\n\n

Now the question \"How does the __getitem__'s idx work within PyTorch's DataLoader?\" can be answered by seeing how each default sampler works.

\n\n\n\n
class SequentialSampler(Sampler):\n    def __init__(self, data_source):\n        self.data_source = data_source\n\n    def __iter__(self):\n        return iter(range(len(self.data_source)))\n\n    def __len__(self):\n        return len(self.data_source)\n
\n\n\n\n
def __iter__(self):\n    n = len(self.data_source)\n    if self.replacement:\n        return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())\n    return iter(torch.randperm(n).tolist())\n
\n\n
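Putting it together, here is a tiny runnable sketch of how a BatchSampler turns a SequentialSampler's indices into the idx batches that eventually reach __getitem__:

```python
from torch.utils.data import BatchSampler, SequentialSampler

sampler = SequentialSampler(range(5))  # yields 0, 1, 2, 3, 4
batches = list(BatchSampler(sampler, batch_size=2, drop_last=False))

print(batches)  # [[0, 1], [2, 3], [4]]
```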

Therefore, as you did not provide any code, we can only assume:

\n\n
    \n
  1. You are using shuffle=True in your DataLoader, or
  2. You are using a custom sampler, or
  3. Your Dataset is _DatasetKind.Iterable
\n", "system": "" }, { "instruction": "PyTorch - Getting the 'TypeError: pic should be PIL Image or ndarray. Got <class 'numpy.ndarray'>' error", "input": "", "output": "

This happens because of the transformation you use:

\n\n
self.transform = transforms.Compose([transforms.ToTensor()])\n
\n\n

As you can see in the documentation, torchvision.transforms.ToTensor converts a PIL Image or numpy.ndarray to tensor. So if you want to use this transformation, your data has to be of one of the above types.

\n", "system": "" }, { "instruction": "TypeError: can't convert np.ndarray of type numpy.object_", "input": "", "output": "

It is difficult to answer properly since you do not show how you attempted the conversion. From the error message, I can see that you are trying to convert a numpy array containing objects to a torch tensor. This does not work; you will need a numeric data type:

\n
import torch\nimport numpy as np\n\n# Your test array without 'dtype=object'\na = np.array([\n    np.array([0.5, 1.0, 2.0], dtype=np.float16),\n    np.array([4.0, 6.0, 8.0], dtype=np.float16),\n])\n\nb = torch.from_numpy(a)\n\nprint(a.dtype) # This should not be 'object'\nprint(b)\n
\n

Output

\n
float16\ntensor([[0.5000, 1.0000, 2.0000],\n        [4.0000, 6.0000, 8.0000]], dtype=torch.float16)\n
\n", "system": "" }, { "instruction": "Indexing a multi-dimensional tensor with a tensor in PyTorch", "input": "", "output": "

A more elegant (and simpler) solution might be to simply cast b as a tuple:

\n\n
a[tuple(b)]\nOut[10]: tensor(5.)\n
\n\n
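For a self-contained illustration (using a fresh 3x3 tensor rather than the asker's data):

```python
import torch

a = torch.arange(9.).reshape(3, 3)  # rows: [0,1,2], [3,4,5], [6,7,8]
b = torch.tensor([1, 2])

# tuple(b) == (tensor(1), tensor(2)), so this is the single element a[1, 2]
print(a[tuple(b)])  # tensor(5.)

# in contrast, indexing with the tensor itself selects whole rows
print(a[b].shape)   # torch.Size([2, 3])
```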

I was curious to see how this works with \"regular\" numpy, and found a related article explaining this quite well here.

\n", "system": "" }, { "instruction": "Parsing CSV into Pytorch tensors", "input": "", "output": "

Try converting it to an array first:

\n\n
test_tensor = torch.Tensor(test.values)\n
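End to end, assuming test is a pandas DataFrame read from a CSV (the file content below is made up, and the dtype is picked explicitly), a sketch could look like:

```python
import io

import pandas as pd
import torch

# stand-in for a real file path
csv_file = io.StringIO("a,b\n1.0,2.0\n3.0,4.0\n")
test = pd.read_csv(csv_file)

# .values is the underlying numpy array; choose the dtype explicitly
test_tensor = torch.tensor(test.values, dtype=torch.float32)
print(test_tensor.shape)  # torch.Size([2, 2])
```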
\n", "system": "" }, { "instruction": "How to multiply a matrix by a vector in PyTorch", "input": "", "output": "

You're looking for

\n\n
torch.mv(a,b)\n
\n\n
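A quick sketch of the shapes involved:

```python
import torch

a = torch.tensor([[1., 2.],
                  [3., 4.]])   # matrix, shape (2, 2)
b = torch.tensor([1., 1.])     # vector, shape (2,)

result = torch.mv(a, b)        # each row of a dotted with b
print(result)  # tensor([3., 7.])
```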

Note that for the future, you may also find torch.matmul() useful. torch.matmul() infers the dimensionality of your arguments and accordingly performs either dot products between vectors, matrix-vector or vector-matrix multiplication, matrix multiplication or batch matrix multiplication for higher order tensors.

\n", "system": "" }, { "instruction": "Cannot install PyTorch with Python 3.11 (Windows)", "input": "", "output": "

As of now, PyTorch only provides a Linux build for Python 3.11.\nYou can check all available wheels here:

\n

https://download.pytorch.org/whl/torch/

\n", "system": "" }, { "instruction": "GUnicorn + CUDA: Cannot re-initialize CUDA in forked subprocess", "input": "", "output": "

Reason for the Error

\n

As correctly stated in the comments by @Newbie, the issue isn't the model itself but the CUDA context. When new child processes are forked, the parent's memory is shared copy-on-write with the child, but a CUDA context doesn't support this sharing; it cannot simply be carried over into the forked child. Hence, it reports the above-mentioned error.

\n

Spawn instead of Fork

\n

To resolve this issue, we have to change the start method for the child processes from fork to spawn with multiprocessing.set_start_method. The following simple example works fine:

\n
import torch\nimport torch.multiprocessing as mp\n\n\ndef f(y):\n    y[0] = 1000\n\n\nif __name__ == '__main__':\n    x = torch.zeros(1).cuda()\n    x.share_memory_()\n\n    mp.set_start_method('spawn')\n    p = mp.Process(target=f, args=(x,), daemon=True)\n    p.start()\n    p.join()\n    print("x =", x.item())\n
\n

When running this code, a second CUDA context is initialized (this can be observed via watch -n 1 nvidia-smi in a second window), and f is executed after the context was initialized completely. After this, x = 1000.0 is printed on the console, thus, we confirmed that the tensor x was successfully shared between the processes.

\n

However, Gunicorn internally uses os.fork to start the worker processes, so multiprocessing.set_start_method has no influence on Gunicorn's behavior. Consequently, initializing the CUDA context in the root process must be avoided.

\n

Solution for Gunicorn

\n

In order to share the model among the worker processes, we thus must load the model in one single process and share it with the workers. Luckily, sending a CUDA tensor via a torch.multiprocessing.Queue to another process doesn't copy the parameters on the GPU, so we can use those queues for this problem.

\n
import time\n\nimport torch\nimport torch.multiprocessing as mp\n\n\ndef f(q):\n    y = q.get()\n    y[0] = 1000\n\n\ndef g(q):\n    x = torch.zeros(1).cuda()\n    x.share_memory_()\n    q.put(x)\n    q.put(x)\n    while True:\n        time.sleep(1)  # this process must live as long as x is in use\n\n\nif __name__ == '__main__':\n    queue = mp.Queue()\n    pf = mp.Process(target=f, args=(queue,), daemon=True)\n    pf.start()\n    pg = mp.Process(target=g, args=(queue,), daemon=True)\n    pg.start()\n    pf.join()\n    x = queue.get()\n    print("x =", x.item())  # Prints x = 1000.0\n
\n

For the Gunicorn server, we can use the same strategy: A model server process loads the model and serves it to each new worker process after its fork. In the post_fork hook the worker requests and receives the model from the model server. A Gunicorn configuration could look like this:

\n
import logging\n\nfrom client import request_model\nfrom app import app\n\nlogging.basicConfig(level=logging.INFO)\n\nbind = "localhost:8080"\nworkers = 1\nzmq_url = "tcp://127.0.0.1:5555"\n\n\ndef post_fork(server, worker):\n    app.config['MODEL'], app.config['COUNTER'] = request_model(zmq_url)\n
\n

In the post_fork hook, we call request_model to get a model from the model server and store it in the configuration of the Flask application. In my example, the method request_model lives in the file client.py and is defined as follows:

\n
import logging\nimport os\n\nfrom torch.multiprocessing.reductions import ForkingPickler\nimport zmq\n\n\ndef request_model(zmq_url: str):\n    logging.info("Connecting")\n    context = zmq.Context()\n    with context.socket(zmq.REQ) as socket:\n        socket.connect(zmq_url)\n        logging.info("Sending request")\n        socket.send(ForkingPickler.dumps(os.getpid()))\n        logging.info("Waiting for a response")\n        model = ForkingPickler.loads(socket.recv())\n    logging.info("Got response from object server")\n    return model\n
\n

We make use of ZeroMQ for inter-process communication here because it allows us to reference servers by name/address and to outsource the server code into its own application. multiprocessing.Queue and multiprocessing.Process apparently don't work well with Gunicorn. multiprocessing.Queue uses the ForkingPickler internally to serialize the objects, and the module torch.multiprocessing alters it in a way that Torch data structures can be serialized appropriately and reliably. So, we use this class to serialize our model to send it to the worker processes.

\n

The model is loaded and served in an application that is completely separate from Gunicorn and defined in server.py:

\n
from argparse import ArgumentParser\nimport logging\n\nimport torch\nfrom torch.multiprocessing.reductions import ForkingPickler\nimport zmq\n\n\ndef load_model():\n    model = torch.nn.Linear(10000, 50000)\n    model.cuda()\n    model.share_memory()\n\n    counter = torch.zeros(1).cuda()\n    counter.share_memory_()\n    return model, counter\n\n\ndef share_object(obj, url):\n    context = zmq.Context()\n    socket = context.socket(zmq.REP)\n    socket.bind(url)\n    while True:\n        logging.info("Waiting for requests on %s", url)\n        message = socket.recv()\n        logging.info("Got a message from %d", ForkingPickler.loads(message))\n        socket.send(ForkingPickler.dumps(obj))\n\n\nif __name__ == '__main__':\n    parser = ArgumentParser(description="Serve model")\n    parser.add_argument("--listen-address", default="tcp://127.0.0.1:5555")\n    args = parser.parse_args()\n\n    logging.basicConfig(level=logging.INFO)\n    logging.info("Loading model")\n    model = load_model()\n    share_object(model, args.listen_address)\n
\n

For this test, we use a model of about 2GB in size to see an effect on the GPU memory allocation in nvidia-smi and a small tensor to verify that the data is actually shared among the processes.

\n

Our sample flask application runs the model with a random input, counts the number of requests and returns both results:

\n
from flask import Flask\nimport torch\n\napp = Flask(__name__)\n\n\n@app.route("/", methods=["POST"])\ndef infer():\n    model: torch.nn.Linear = app.config['MODEL']\n    counter: torch.Tensor = app.config['COUNTER']\n    counter[0] += 1  # not thread-safe\n    input_features = torch.rand(model.in_features).cuda()\n    return {\n        "result": model(input_features).sum().item(),\n        "counter": counter.item()\n    }\n
\n

Test

\n

The example can be run as follows:

\n
$ python server.py &\nINFO:root:Waiting for requests on tcp://127.0.0.1:5555 \n$ gunicorn -c config.py app:app\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Starting gunicorn 20.1.0\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Listening at: http://127.0.0.1:8080 (24113)\n[2023-02-01 16:45:34 +0800] [24113] [INFO] Using worker: sync\n[2023-02-01 16:45:34 +0800] [24186] [INFO] Booting worker with pid: 24186\nINFO:root:Connecting\nINFO:root:Sending request\nINFO:root:Waiting for a response\nINFO:root:Got response from object server\n
\n

Using nvidia-smi, we can observe that now, two processes are using the GPU, and one of them allocates 2GB more VRAM than the other. Querying the flask application also works as expected:

\n
$ curl -X POST localhost:8080\n{"counter":1.0,"result":-23.956459045410156} \n$ curl -X POST localhost:8080\n{"counter":2.0,"result":-8.161510467529297}\n$ curl -X POST localhost:8080\n{"counter":3.0,"result":-37.823692321777344}\n
\n

Let's introduce some chaos and terminate our only Gunicorn worker:

\n
$ kill 24186\n[2023-02-01 18:02:09 +0800] [24186] [INFO] Worker exiting (pid: 24186)\n[2023-02-01 18:02:09 +0800] [4196] [INFO] Booting worker with pid: 4196\nINFO:root:Connecting\nINFO:root:Sending request\nINFO:root:Waiting for a response\nINFO:root:Got response from object server\n
\n

It's restarting properly and ready to answer our requests.

\n

Benefit

\n

Initially, the amount of required VRAM for our service was (SizeOf(Model) + SizeOf(CUDA context)) * Num(Workers). By sharing the weights of the model, we can reduce this by SizeOf(Model) * (Num(Workers) - 1) to SizeOf(Model) + SizeOf(CUDA context) * Num(Workers).

\n

Caveats

\n

The reliability of this approach relies on the single model server process. If that process terminates, not only will newly started workers get stuck, but the models in the existing workers will become unavailable and all workers crash at once. The shared tensors/models are only available as long as the server process is running. Even if the model server and Gunicorn workers are restarted, a short outage is certainly unavoidable. In a production environment, you thus should make sure this server process is kept alive.

\n

Additionally, sharing data among different processes can have side effects. When sharing changeable data, proper locks must be used to avoid race conditions.

\n", "system": "" }, { "instruction": "difference between Dataset and TensorDataset in pyTorch", "input": "", "output": "

The Dataset class is an abstract class that is used to define new types of (custom) datasets. In contrast, TensorDataset is a ready-to-use class that represents your data as a list of tensors.

\n

You can define your custom dataset in the following way:

\n
class CustomDataset(torch.utils.data.Dataset):\n\n  def __init__(self, *args, **kwargs):\n    super().__init__(*args, **kwargs)\n    # Your code\n\n    self.instances = your_data\n\n  def __getitem__(self, idx):\n    return self.instances[idx] # In case you stored your data on a list called instances\n\n  def __len__(self):\n    return len(self.instances)\n
\n

If you just want to create a dataset that contains tensors for input features and labels, then use the TensorDataset directly:

\n
dataset = TensorDataset(input_features, labels)\n
\n
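A small runnable sketch (with made-up feature and label tensors):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

features = torch.randn(6, 4)         # 6 samples, 4 features each
labels = torch.randint(0, 2, (6,))   # 6 labels

dataset = TensorDataset(features, labels)

x, y = dataset[0]  # each item is a (features, label) tuple
print(x.shape)     # torch.Size([4])

loader = DataLoader(dataset, batch_size=3)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([3, 4]) torch.Size([3])
```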

Note that input_features and labels must match on the length of the first dimension.

\n", "system": "" }, { "instruction": "PyTorch: RuntimeError: Input, output and indices must be on the current device", "input": "", "output": "

You should put your model on the device, which is probably cuda:

\n
device = "cuda:0"\nmodel = model.to(device)\n \n
\n

Then make sure the inputs of the model(input) are on the same device as well:

\n
input = input.to(device)\n
\n

It should work!

\n", "system": "" }, { "instruction": "What is tape-based autograd in Pytorch?", "input": "", "output": "

There are different types of automatic differentiation, e.g. forward-mode, reverse-mode, and hybrids (more explanation). The tape-based autograd in PyTorch simply refers to the use of reverse-mode automatic differentiation (source). Reverse-mode autodiff is a technique for computing gradients efficiently, and it happens to be what backpropagation uses (source).

\n
\n

Now, in PyTorch, Autograd is the core torch package for automatic differentiation. It uses a tape-based system for automatic differentiation. In the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations.

\n

Same in TensorFlow, to differentiate automatically, It also needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients. Now, TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of computation with respect to some inputs, usually tf.Variables. TensorFlow records relevant operations executed inside the context of a tf.GradientTape onto a tape. TensorFlow then uses that tape to compute the gradients of a recorded computation using reverse mode differentiation.

\n

So, as we can see from the high-level viewpoint, both are doing the same thing. However, in a custom training loop the forward pass and the loss calculation are more explicit in TensorFlow, as they live inside the tf.GradientTape API scope, whereas in PyTorch these operations are implicit, but updating the training parameters (weights and biases) requires temporarily disabling gradient tracking. For that, PyTorch uses the torch.no_grad API explicitly. In other words, TensorFlow's tf.GradientTape() is similar to PyTorch's loss.backward(). Below are both statements in simplistic code form.

\n
# TensorFlow \n[w, b] = tf_model.trainable_variables\nfor epoch in range(epochs):\n  with tf.GradientTape() as tape:\n    # forward passing and loss calculations \n    # within explicit tape scope \n    predictions = tf_model(x)\n    loss = squared_error(predictions, y)\n\n  # compute gradients (grad)\n  w_grad, b_grad = tape.gradient(loss, tf_model.trainable_variables)\n\n  # update training variables \n  w.assign(w - w_grad * learning_rate)\n  b.assign(b - b_grad * learning_rate)\n\n\n# PyTorch \n[w, b] = torch_model.parameters()\nfor epoch in range(epochs):\n  # forward pass and loss calculation \n  # implicit tape-based AD \n  y_pred = torch_model(inputs)\n  loss = squared_error(y_pred, labels)\n\n  # compute gradients (grad)\n  loss.backward()\n  \n  # update training variables / parameters  \n  with torch.no_grad():\n    w -= w.grad * learning_rate\n    b -= b.grad * learning_rate\n    w.grad.zero_()\n    b.grad.zero_()\n
\n

FYI, in the above, the trainable variables (w, b) are manually updated in both frameworks but we generally use an optimizer (e.g. adam) to do the job.

\n
# TensorFlow \n# ....\n# update training variables \noptimizer.apply_gradients(zip([w_grad, b_grad], model.trainable_weights))\n\n# PyTorch\n# ....\n# update training variables / parameters\noptimizer.step()\noptimizer.zero_grad()\n
\n", "system": "" }, { "instruction": "Using GPU inside docker container - CUDA Version: N/A and torch.cuda.is_available returns False", "input": "", "output": "

For anybody arriving here looking for how to do this with Docker Compose, add the following to your service:

\n
deploy:\n  resources:\n    reservations:\n      devices:\n      - driver: nvidia\n        capabilities:\n          - gpu\n          - utility # nvidia-smi\n          - compute # CUDA\n          - video   # NVDEC/NVENC/NVCUVID. For instance to use a hardware accelerated ffmpeg. Skip it if you don't need it\n
\n

Note that, if the environment variable NVIDIA_DRIVER_CAPABILITIES is empty or unset, the container will use the default driver capabilities, which are utility and compute. If it's set to ALL, the container will use all the driver capabilities, but docker compose will still require you to set the capabilities in the docker-compose.yml, such as:

\n
deploy:\n  resources:\n    reservations:\n      devices:\n      - driver: nvidia\n        capabilities: # always required, whatever the value of NVIDIA_DRIVER_CAPABILITIES\n          - gpu\n
\n

You also need to use a nvidia/cuda image.

\n

Doc: https://docs.docker.com/compose/gpu-support

\n

You can find a list of the driver capabilities here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html#driver-capabilities

\n", "system": "" }, { "instruction": "Using pytorch Cuda on MacBook Pro", "input": "", "output": "

PyTorch now supports training using Metal.

\n

Announcement: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/

\n

To get started, install the latest nightly build of PyTorch: https://pytorch.org/get-started/locally/

\n
\n

Answer pre May 2022

\n

Unfortunately, no GPU acceleration is available when using PyTorch on macOS. CUDA has not been available on macOS for a while, and it only runs on NVIDIA GPUs. AMD's equivalent library, ROCm, requires Linux.

\n

If you are working with macOS 12.0 or later and would be willing to use TensorFlow instead, you can use the Mac optimized build of TensorFlow, which supports GPU training using Apple's own GPU acceleration library Metal.

\n

Currently, you need Python 3.8 (<=3.7 and >=3.9 don't work) to run it. To install, run:

\n
pip3 install tensorflow-macos\npip3 install tensorflow-metal\n
\n

You may need to uninstall existing tensorflow distributions first or work in a virtual environment.

\n

Then you can just

\n
import tensorflow as tf\n\ntf.test.is_gpu_available()  # should return True\n
\n", "system": "" }, { "instruction": "k-fold cross validation using DataLoaders in PyTorch", "input": "", "output": "

I just wrote a cross-validation function that works with a dataloader and dataset.\nHere is my code; I hope it is helpful.

\n
# define a cross-validation function\n# note: train() and valid() below are assumed to be your own train/eval loops\nimport pandas as pd\nimport torch\n\ndef crossvalid(model=None,criterion=None,optimizer=None,dataset=None,k_fold=5):\n\n    train_score = pd.Series(dtype=float)\n    val_score = pd.Series(dtype=float)\n\n    total_size = len(dataset)\n    fraction = 1/k_fold\n    seg = int(total_size * fraction)\n    # tr:train,val:valid; r:right,l:left;  eg: trrr: right index of right side train subset\n    # index: [trll,trlr],[vall,valr],[trrl,trrr]\n    for i in range(k_fold):\n        trll = 0\n        trlr = i * seg\n        vall = trlr\n        valr = i * seg + seg\n        trrl = valr\n        trrr = total_size\n        # print("train indices: [%d,%d),[%d,%d), test indices: [%d,%d)"\n        #       % (trll,trlr,trrl,trrr,vall,valr))\n\n        train_left_indices = list(range(trll,trlr))\n        train_right_indices = list(range(trrl,trrr))\n\n        train_indices = train_left_indices + train_right_indices\n        val_indices = list(range(vall,valr))\n\n        train_set = torch.utils.data.dataset.Subset(dataset,train_indices)\n        val_set = torch.utils.data.dataset.Subset(dataset,val_indices)\n\n        train_loader = torch.utils.data.DataLoader(train_set, batch_size=50,\n                                                   shuffle=True, num_workers=4)\n        val_loader = torch.utils.data.DataLoader(val_set, batch_size=50,\n                                                 shuffle=True, num_workers=4)\n        train_acc = train(model,criterion,optimizer,train_loader,epoch=1)\n        train_score.at[i] = train_acc\n        val_acc = valid(model,criterion,optimizer,val_loader)\n        val_score.at[i] = val_acc\n\n    return train_score,val_score\n\n\ntrain_score,val_score = crossvalid(res_model,criterion,optimizer,dataset=tiny_dataset)\n
\n
\n

In order to give an intuition of correctness for what we are doing, see the output below:

\n
train indices: [0,0),[3600,18000), test indices: [0,3600)\n14400 3600\n\ntrain indices: [0,3600),[7200,18000), test indices: [3600,7200)\n14400 3600\n\ntrain indices: [0,7200),[10800,18000), test indices: [7200,10800)\n14400 3600\n\ntrain indices: [0,10800),[14400,18000), test indices: [10800,14400)\n14400 3600\n\ntrain indices: [0,14400),[18000,18000), test indices: [14400,18000)\n14400 3600\n
\n", "system": "" }, { "instruction": "pytorch RuntimeError: Expected object of scalar type Double but got scalar type Float", "input": "", "output": "

In short: your data has type double but your model has type float, this is not allowed in pytorch because only data with the same dtype can be fed into the model.

\n

In long:\nThis issue is related to the default dtypes of PyTorch and Numpy. I will first explain why this error happens and then suggest some solutions (though I think you will not need them once you understand the principle). PyTorch's floating-point tensors come in two main dtypes, with torch.float32 being the default:

\n\n
    \n
  1. torch.float32 (aka torch.float)
  2. torch.float64 (aka torch.double)
\n\n

Then we need to talk about Numpy: its default floating-point dtype is float64, so arrays created or loaded with Numpy (e.g. via np.loadtxt) are double precision unless you request otherwise, and torch.from_numpy preserves that dtype.

\n\n

I think now the issue is pretty clear: you have a model whose parameters are torch.float32 (aka torch.float), but you try to run it on data of torch.float64 (aka torch.double). This is also what the error message tries to say: Expected object of scalar type Double but got scalar type Float for argument

\n
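The mismatch is easy to reproduce (a sketch with a made-up model and array):

```python
import numpy as np
import torch

model = torch.nn.Linear(3, 1)       # parameters are torch.float32 by default
data = np.array([[1.0, 2.0, 3.0]])  # numpy defaults to float64

x = torch.from_numpy(data)
print(x.dtype)          # torch.float64 -> model(x) would raise the dtype error

out = model(x.float())  # cast to float32 first
print(out.dtype)        # torch.float32
```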

Solutions:

\n
    \n
  1. You have already found one: convert your data to torch.float32 by calling tensor.float()
  2. You can also specify the dtype when loading the data: np.loadtxt(file_name, delimiter=',', dtype="float32")
\n", "system": "" }, { "instruction": "How does torch.distributed.barrier() work", "input": "", "output": "

First you need to understand the ranks. To be brief: in a multiprocessing context we typically assume that rank 0 is the first, or base, process. The other processes are then ranked in order, e.g. 1, 2, 3, for four processes in total.

\n

Some operations are not necessary to be done in parallel or you just need one process to do some preprocessing or caching so that the other processes can use that data.

\n

In your example, if the first if statement is entered by the non-base processes (rank 1, 2, 3), they will block (or "wait") because they run into the barrier. They wait there, because barrier() blocks until all processes have reached a barrier, but the base process has not reached a barrier yet.

\n

So at this point the non-base processes (1, 2, 3) are blocked, but the base process (0) continues. The base process will do some operations (preprocess and cache data, in this case) until it reaches the second if-statement. There, the base process will run into a barrier. At this point, all processes have stopped at a barrier, meaning that all current barriers can be lifted and all processes can continue. Because the base process prepared the data, the other processes can now use that data.

\n

Perhaps the most important thing to understand is: barrier() blocks a process until every process in the group has reached a barrier, and only then are all of them released to continue.

\n\n", "system": "" }, { "instruction": "RuntimeError: module must have its parameters and buffers on device cuda:1 (device_ids[0]) but found one of them on device: cuda:2", "input": "", "output": "

DataParallel requires every input tensor be provided on the first device in its device_ids list.

\n\n

It basically uses that device as a staging area before scattering to the other GPUs and it's the device where final outputs are gathered before returning from forward. If you want device 2 to be the primary device then you just need to put it at the front of the list as follows

\n\n
model = nn.DataParallel(model, device_ids = [2, 0, 1, 3])\nmodel.to(f'cuda:{model.device_ids[0]}')\n
\n\n

After which all tensors provided to model should be on the first device as well.

\n\n
x = ... # input tensor\nx = x.to(f'cuda:{model.device_ids[0]}')\ny = model(x)\n
\n", "system": "" }, { "instruction": "Tensorflow 2.0 dataset and dataloader", "input": "", "output": "

When using the tf.data API, you will usually also make use of the map function.

\n\n

In PyTorch, your __getitem__ call basically fetches an element from the data structure given in __init__ and transforms it if necessary.

\n\n

In TF2.0, you do the same by initializing a Dataset using one of the Dataset.from_... functions (see from_generator, from_tensor_slices, from_tensors); this is essentially the __init__ part of a PyTorch Dataset. Then, you can call map to do the element-wise manipulations you would have in __getitem__.

\n\n

Tensorflow datasets are pretty much fancy iterators, so by design you don't access their elements using indices, but rather by traversing them.

\n\n
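As a rough plain-Python analogy (purely illustrative, not the tf.data API itself), such a dataset is just a lazily-mapped iterable that you traverse rather than index:

```python
class ToyDataset:
    """Illustrative stand-in for a tf.data pipeline: no indexing, only iteration."""
    def __init__(self, source):
        self._source = source       # analogous to Dataset.from_tensor_slices(...)
        self._transforms = []

    def map(self, fn):
        self._transforms.append(fn)  # analogous to tf.data's element-wise map()
        return self

    def __iter__(self):              # elements are produced lazily, in order
        for item in self._source:
            for fn in self._transforms:
                item = fn(item)
            yield item

ds = ToyDataset(range(4)).map(lambda x: x * 10).map(lambda x: x + 1)
assert list(ds) == [1, 11, 21, 31]
```

Note how there is no `ds[2]`: as with TF datasets, you only get elements by traversing the pipeline.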

The guide on tf.data is very useful and provides a wide variety of examples.

\n", "system": "" }, { "instruction": "PyTorch get all layers of model", "input": "", "output": "

You can iterate over all modules of a model (including those inside each Sequential) with the modules() method. Here's a simple example:

\n
>>> model = nn.Sequential(nn.Linear(2, 2), \n                          nn.ReLU(),\n                          nn.Sequential(nn.Linear(2, 1),\n                          nn.Sigmoid()))\n\n>>> l = [module for module in model.modules() if not isinstance(module, nn.Sequential)]\n\n>>> l\n\n[Linear(in_features=2, out_features=2, bias=True),\n ReLU(),\n Linear(in_features=2, out_features=1, bias=True),\n Sigmoid()]\n
\n", "system": "" }, { "instruction": "Impact of using data shuffling in Pytorch dataloader", "input": "", "output": "

Yes, it can totally affect the result! Shuffling the order of the data we use to fit the classifier is important, because it ensures the batches between epochs do not look alike.

\n

Checking the Data Loader Documentation it says:\n"shuffle (bool, optional) \u2013 set to True to have the data reshuffled at every epoch"

\n

In any case, shuffling will make the model more robust and help it avoid overfitting.

\n

In your case, this heavy increase in accuracy (judging without knowledge of your dataset) is probably due to how the dataset is "organised": perhaps, for example, each batch contains only a single category, so every epoch sees the same homogeneous batches, which leads to very bad accuracy when you are testing.
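That failure mode can be sketched in pure Python (a toy illustration with a hypothetical label-sorted dataset, just to show the batching):

```python
import random

# hypothetical label-sorted dataset: 4 classes, 8 samples each
labels = [c for c in range(4) for _ in range(8)]
batch_size = 8

def batches(seq):
    return [seq[i:i + batch_size] for i in range(0, len(seq), batch_size)]

# without shuffling, every batch contains a single class
assert all(len(set(b)) == 1 for b in batches(labels))

# with shuffling, batches mix the classes (seeded for reproducibility)
rng = random.Random(0)
shuffled = labels[:]
rng.shuffle(shuffled)
assert any(len(set(b)) > 1 for b in batches(shuffled))
```

With shuffle=True in the DataLoader, this reshuffle happens automatically at the start of every epoch.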

\n", "system": "" }, { "instruction": "Implementing dropout from scratch", "input": "", "output": "
\n

It seems I've implemented the dropout function incorrectly?

\n
np.random.binomial([np.ones((len(input),np.array(list(input.shape))))],1-dropout_percent)[0] * (1.0/(1-self.p))\n
\n
\n

In fact, the above implementation is known as Inverted Dropout. Inverted Dropout is how Dropout is implemented in practice in the various deep learning frameworks.

\n

What is inverted dropout?

\n

Before jump into the inverted dropout, it can be helpful to see how Dropout works for a single neuron:

\n

\"\"

\n

Since during the training phase a neuron is kept on with probability q (=1-p), during the testing phase we have to emulate the behavior of the ensemble of networks used in the training phase. To this end, the authors suggest scaling the activation function by a factor of q during the test phase in order to use the expected output produced in the training phase as the single output required in the test phase (Section 10, Multiplicative Gaussian Noise). Thus:

\n

\"\"

\n

Inverted dropout is a bit different. This approach consists of scaling the activations during the training phase, leaving the test phase untouched. The scale factor is the inverse of the keep probability: 1/(1-p) = 1/q, thus:

\n

\"enter

\n

Inverted dropout lets you define the model once and just change a parameter (the keep/drop probability) to run training and testing on the same model. Direct dropout, instead, forces you to modify the network during the test phase, because if you don't multiply the output by q, the neuron will produce values that are larger than those expected by the successive neurons (so the following neurons can saturate or explode): that's why inverted dropout is the more common implementation.

\n

References:

\n\n
\n

How to implement inverted dropout in PyTorch?

\n
class MyDropout(nn.Module):\n    def __init__(self, p: float = 0.5):\n        super(MyDropout, self).__init__()\n        if p < 0 or p > 1:\n            raise ValueError("dropout probability has to be between 0 and 1, " "but got {}".format(p))\n        self.p = p\n\n    def forward(self, X):\n        if self.training:\n            binomial = torch.distributions.binomial.Binomial(probs=1-self.p)\n            return X * binomial.sample(X.size()) * (1.0/(1-self.p))\n        return X\n
\n

How to implement in Numpy?

\n
import numpy as np\n\npKeep = 0.8\nweights = np.ones([1, 5])\nbinary_value = np.random.rand(weights.shape[0], weights.shape[1]) < pKeep\nres = np.multiply(weights, binary_value)\nres /= pKeep  # this line is called inverted dropout technique\nprint(res)\n
\n

How to implement in Tensorflow?

\n
import tensorflow as tf\ntf.enable_eager_execution()\n\nweights = tf.ones(shape=[1, 5])\nkeep_prob = 0.8\nrandom_tensor = keep_prob\nrandom_tensor += tf.random_uniform(weights.shape)\n# 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)\nbinary_tensor = tf.floor(random_tensor)\nret = tf.div(weights, keep_prob) * binary_tensor\nprint(ret)\n
\n", "system": "" }, { "instruction": "Multi label classification in pytorch", "input": "", "output": "

You are looking for torch.nn.BCELoss. Here's example code:

\n
import torch\n\nbatch_size = 2\nnum_classes = 11\n\nloss_fn = torch.nn.BCELoss()\n\noutputs_before_sigmoid = torch.randn(batch_size, num_classes)\nsigmoid_outputs = torch.sigmoid(outputs_before_sigmoid)\n# random 0/1 targets, cast to float because BCELoss expects float targets\ntarget_classes = torch.randint(0, 2, (batch_size, num_classes)).float()\n\nloss = loss_fn(sigmoid_outputs, target_classes)\n\n# alternatively, use BCE with logits, on outputs before sigmoid.\n# (this version is numerically more stable)\nloss_fn_2 = torch.nn.BCEWithLogitsLoss()\nloss2 = loss_fn_2(outputs_before_sigmoid, target_classes)\nassert torch.allclose(loss, loss2)\n
\n", "system": "" }, { "instruction": "Generating new images with PyTorch", "input": "", "output": "

The code from your example (https://github.com/davidsonmizael/gan) gave me the same noise as you show. The loss of the generator decreased way too quickly.

\n\n

There were a few buggy things; I'm not even sure anymore what exactly, but I guess it's easy to figure out the differences yourself. For a comparison, also have a look at this tutorial:\nGANs in 50 lines of PyTorch

\n\n
.... same as your code\nprint(\"# Starting generator and descriminator...\")\nnetG = G()\nnetG.apply(weights_init)\n\nnetD = D()\nnetD.apply(weights_init)\n\nif torch.cuda.is_available():\n    netG.cuda()\n    netD.cuda()\n\n#training the DCGANs\ncriterion = nn.BCELoss()\noptimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999))\noptimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999))\n\nepochs = 25\n\ntimeElapsed = []\nfor epoch in range(epochs):\n    print(\"# Starting epoch [%d/%d]...\" % (epoch, epochs))\n    for i, data in enumerate(dataloader, 0):\n        start = time.time()\n        time.clock()  \n\n        #updates the weights of the discriminator nn\n        netD.zero_grad()\n\n        #trains the discriminator with a real image\n        real, _ = data\n\n        if torch.cuda.is_available():\n            inputs = Variable(real.cuda()).cuda()\n            target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda()\n        else:\n            inputs = Variable(real)\n            target = Variable(torch.ones(inputs.size()[0]))\n\n        output = netD(inputs)\n        errD_real = criterion(output, target)\n        errD_real.backward() #retain_graph=True\n\n        #trains the discriminator with a fake image\n        if torch.cuda.is_available():\n            D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda()\n            target = Variable(torch.zeros(inputs.size()[0]).cuda()).cuda()\n        else:\n            D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1))\n            target = Variable(torch.zeros(inputs.size()[0]))\n        D_fake = netG(D_noise).detach()\n        D_fake_ouput = netD(D_fake)\n        errD_fake = criterion(D_fake_ouput, target)\n        errD_fake.backward()\n\n        # NOT:backpropagating the total error\n        # errD = errD_real + errD_fake\n\n        optimizerD.step()\n\n    #for i, data in enumerate(dataloader, 0):\n\n        #updates the weights of 
the generator nn\n        netG.zero_grad()\n\n        if torch.cuda.is_available():\n            G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda()\n            target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda()\n        else:\n            G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1))\n            target = Variable(torch.ones(inputs.size()[0]))\n\n        fake = netG(G_noise)\n        G_output = netD(fake)\n        errG  = criterion(G_output, target)\n\n        #backpropagating the error\n        errG.backward()\n        optimizerG.step()\n\n\n        if i % 50 == 0:\n            #prints the losses and save the real images and the generated images\n            print(\"# Progress: \")\n            print(\"[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f\" % (epoch, epochs, i, len(dataloader), errD_real.data[0], errG.data[0]))\n\n            #calculates the remaining time by taking the avg seconds that every loop\n            #and multiplying by the loops that still need to run\n            timeElapsed.append(time.time() - start)\n            avg_time = (sum(timeElapsed) / float(len(timeElapsed)))\n            all_dtl = (epoch * len(dataloader)) + i\n            rem_dtl = (len(dataloader) - i) + ((epochs - epoch) * len(dataloader))\n            remaining =  (all_dtl - rem_dtl) * avg_time\n            print(\"# Estimated remaining time: %s\" % (time.strftime(\"%H:%M:%S\", time.gmtime(remaining))))\n\n        if i % 100 == 0:\n            vutils.save_image(real, \"%s/real_samples.png\" % \"./results\", normalize = True)\n            vutils.save_image(fake.data, \"%s/fake_samples_epoch_%03d.png\" % (\"./results\", epoch), normalize = True)\n\nprint (\"# Finished.\")\n
\n\n

Result after 25 epochs (batchsize 256) on CIFAR-10:\n\"enter

\n", "system": "" }, { "instruction": "Torch: How to shuffle a tensor by its rows?", "input": "", "output": "

I also faced a similar issue. In the documentation, there is no shuffle function for tensors (there is one for dataset loaders). I found a workaround to the problem using torch.randperm.

\n\n
>>> a=torch.rand(3,5)\n>>> print(a)\ntensor([[0.4896, 0.3708, 0.2183, 0.8157, 0.7861],\n        [0.0845, 0.7596, 0.5231, 0.4861, 0.9237],\n        [0.4496, 0.5980, 0.7473, 0.2005, 0.8990]])\n>>> # Row shuffling\n... \n>>> a=a[torch.randperm(a.size()[0])]\n>>> print(a)\ntensor([[0.4496, 0.5980, 0.7473, 0.2005, 0.8990],\n        [0.0845, 0.7596, 0.5231, 0.4861, 0.9237],\n        [0.4896, 0.3708, 0.2183, 0.8157, 0.7861]])\n>>> # column shuffling\n... \n>>> a=a[:,torch.randperm(a.size()[1])]\n>>> print(a)\ntensor([[0.2005, 0.7473, 0.5980, 0.8990, 0.4496],\n        [0.4861, 0.5231, 0.7596, 0.9237, 0.0845],\n        [0.8157, 0.2183, 0.3708, 0.7861, 0.4896]])\n
\n\n

I hope it answers the question!

\n", "system": "" }, { "instruction": "LSTM time sequence generation using PyTorch", "input": "", "output": "

This is normal behaviour and happens because your network is too confident in the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable as it was in the training or validation case, where it got the true input.

\n\n

I have two possible solutions for you:

\n\n\n", "system": "" }, { "instruction": "TRANSFORMERS: Asking to pad but the tokenizer does not have a padding token", "input": "", "output": "

kkgarg's idea was right, but you also need to update your model's token embedding size.\nSo, the code will be:

\n
tokenizer = AutoTokenizer.from_pretrained(pretrained_weights)\nmodel = TFAutoModel.from_pretrained(pretrained_weights)\nif tokenizer.pad_token is None:\n    tokenizer.add_special_tokens({'pad_token': '[PAD]'})\n    model.resize_token_embeddings(len(tokenizer))\n
\n

Check this related issue.

\n", "system": "" }, { "instruction": "How to make VScode launch.json for a Python module", "input": "", "output": "

Specify the module you want to run with "module": "torch.distributed.launch"

\n

You can ignore the -m flag. Put everything else under the args key.

\n

Note: Make sure to include --nproc_per_node and the name of the file (main_swav.py) in the list of arguments

\n
{\n    "version": "0.2.0",\n    "configurations": [\n        {\n            "name": "Python: Current File",\n            "type": "debugpy",\n            "module": "torch.distributed.launch",\n            "request": "launch",\n            "console": "integratedTerminal",\n            "args": [\n                "--nproc_per_node", "1", \n                "main_swav.py",\n                "--data_path", "/dataset/imagenet/train",\n            ]\n        }\n    ]\n}\n
\n

Read more here: https://code.visualstudio.com/docs/python/debugging#_module

\n", "system": "" }, { "instruction": "RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces)", "input": "", "output": "

SOLUTION: Just replace the view() function with reshape() function as suggested in the error and it works.

\n

This has to do with how the tensor is stored in memory: view() requires the tensor to be contiguous, while reshape() will copy the data when necessary (calling tensor.contiguous().view(...) is an equivalent explicit fix).
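A minimal sketch of the difference (assuming PyTorch is installed): a transpose produces a non-contiguous view, which view() cannot flatten but reshape() can:

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                    # transpose is a non-contiguous view of x
assert not t.is_contiguous()

# t.view(-1) would raise the "view size is not compatible..." RuntimeError
flat = t.reshape(-1)         # reshape copies when the data is not contiguous
assert flat.tolist() == [0, 3, 1, 4, 2, 5]

# equivalent explicit fix: make the memory contiguous first, then view
assert t.contiguous().view(-1).tolist() == [0, 3, 1, 4, 2, 5]
```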

\n", "system": "" }, { "instruction": "The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1", "input": "", "output": "

This is because BERT uses word-piece tokenization. So, when some of the words are not in the vocabulary, it splits them into word pieces. For example: if the word playing is not in the vocabulary, it can be split into play, ##ing. This increases the number of tokens in a given sentence after tokenization.\nYou can specify certain parameters to get fixed-length tokenization:

\n

tokenized_sentence = tokenizer.encode(test_sentence, padding=True, truncation=True,max_length=50, add_special_tokens = True)
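To see why the token count can grow past the original word count, here is a toy greedy longest-match-first splitter in the spirit of WordPiece (a simplified sketch with a made-up mini vocabulary, not BERT's actual implementation):

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first splitting, WordPiece style (toy version)."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub      # continuation pieces get the ## prefix
            if sub in vocab:
                match = sub
                break
            end -= 1                  # shrink the candidate from the right
        if match is None:
            return ["[UNK]"]          # no piece matched at all
        pieces.append(match)
        start = end
    return pieces

# made-up mini vocabulary, purely for illustration
vocab = {"play", "##ing", "##ed"}
assert wordpiece("playing", vocab) == ["play", "##ing"]
assert wordpiece("played", vocab) == ["play", "##ed"]
assert wordpiece("xyz", vocab) == ["[UNK]"]
```

One input word can thus become several tokens, which is why a 512-word sentence can exceed the model's 512-token limit.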

\n", "system": "" }, { "instruction": "version `GLIBC_2.28' not found", "input": "", "output": "
\n

So is it possible to install GLIBC_2.28 on my machine?

\n
\n\n

It is possible, but the chances of you making a mistake and rendering your system un-bootable are quite high. It is also very likely that doing so will break something else on your system (this is the reason distributions do not usually update the version of GLIBC from the one they originally shipped with).

\n\n

A much better solution is to build PyTorch targeting your system (i.e. using your \"normal\" toolchain).

\n\n

P.S. GLIBCXX has nothing to do with your problem, and just adds noise to your question.

\n", "system": "" }, { "instruction": "PyTorch Model Training: RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR", "input": "", "output": "

The error RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR is notoriously difficult to debug, but surprisingly often it's an out of memory problem. Usually, you would get the out of memory error, but depending on where it occurs, PyTorch cannot intercept the error and therefore not provide a meaningful error message.

\n\n

A memory issue seems likely in your case, because you are using a while loop that runs until the agent is done, which might take long enough that you run out of memory; it's just a matter of time. The error can also occur rather late, once the model's parameters in combination with a particular input cause an iteration to run long enough to exhaust memory.

\n\n

You can avoid that scenario by limiting the number of allowed actions instead of hoping that the actor will be done in a reasonable time.

\n\n

What you also need to be careful about, is that you don't occupy unnecessary memory. A common mistake is to keep computing gradients of the past states in future iterations. The state from the last iteration should be considered constant, since the current action should not affect past actions, therefore no gradients are required. This is usually achieved by detaching the state from the computational graph for the next iteration, e.g. state = state_.detach(). Maybe you are already doing that, but without the code it's impossible to tell.

\n\n

Similarly, if you keep a history of the states, you should detach them and even more importantly put them on the CPU, i.e. history.append(state.detach().cpu()).
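A minimal sketch of that detach pattern (assuming PyTorch; the loop body is a stand-in for one environment/model step):

```python
import torch

state = torch.zeros(3, requires_grad=True)
history = []
for _ in range(3):
    state_ = state * 2 + 1       # stand-in for computing the next state
    # cut the graph: the next iteration must not backpropagate into the past
    state = state_.detach()
    # store history detached and on the CPU, so no graph/GPU memory is retained
    history.append(state.detach().cpu())

assert not state.requires_grad
assert all(h.device.type == "cpu" for h in history)
assert state.tolist() == [7.0, 7.0, 7.0]
```

Without the detach, the computational graph would keep growing across iterations and hold on to every intermediate tensor.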

\n", "system": "" }, { "instruction": "what does dim=-1 or -2 mean in torch.sum()?", "input": "", "output": "

A tensor has multiple dimensions, ordered as in the following figure.\nThere is a forward and backward indexing. Forward indexing uses positive integers, backward indexing uses negative integers.

\n

Example:

\n

-1 will be the last one, in our case it will be dim=2

\n

-2 will be dim=1

\n

-3 will be dim=0

\n
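A quick check of this mapping for a rank-3 tensor (assuming PyTorch):

```python
import torch

t = torch.arange(24).reshape(2, 3, 4)   # dims 0, 1, 2 (or -3, -2, -1)

assert torch.equal(t.sum(dim=-1), t.sum(dim=2))
assert torch.equal(t.sum(dim=-2), t.sum(dim=1))
assert torch.equal(t.sum(dim=-3), t.sum(dim=0))
assert t.sum(dim=-1).shape == (2, 3)    # the last dimension is reduced away
```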

\"enter

\n", "system": "" }, { "instruction": "How to get entire dataset from dataloader in PyTorch", "input": "", "output": "

You can set batch_size = len(dataset). Beware, this might require a lot of memory depending upon your dataset.
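For instance, a minimal sketch with a small TensorDataset (assuming PyTorch):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10.0).unsqueeze(1))

# one batch containing the entire dataset
loader = DataLoader(dataset, batch_size=len(dataset))
(full,) = next(iter(loader))

assert full.shape == (10, 1)
assert len(list(loader)) == 1   # a single batch per epoch
```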

\n", "system": "" }, { "instruction": "How to install nvidia apex on Google Colab", "input": "", "output": "

(wanted to just add a comment but I don't have enough reputation...)

\n\n

it works for me but the cd is actually not required. Also, I needed the two global options as suggested here: https://github.com/NVIDIA/apex/issues/86

\n\n
%%writefile setup.sh\n\ngit clone https://github.com/NVIDIA/apex\npip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\n
\n\n

then

\n\n
!sh setup.sh\n
\n", "system": "" }, { "instruction": "img should be PIL Image. Got <class 'torch.Tensor'>", "input": "", "output": "

transforms.RandomHorizontalFlip() works on PIL.Images, not torch.Tensor. In your code above, you are applying transforms.ToTensor() prior to transforms.RandomHorizontalFlip(), which results in a tensor.

\n\n

But, as per the official pytorch documentation here,

\n\n
\n

transforms.RandomHorizontalFlip() horizontally flip the given PIL\n Image randomly with a given probability.

\n
\n\n

So, just change the order of your transformation in above code, like below:

\n\n
train_transforms = transforms.Compose([transforms.Resize(255), \n                                       transforms.CenterCrop(224),  \n                                       transforms.RandomHorizontalFlip(),\n                                       transforms.ToTensor(), \n                                       transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) \n
\n", "system": "" }, { "instruction": "Using Pytorch's Multiprocessing along with Distributed", "input": "", "output": "

Sometimes our questions become too restrictive due to premature optimization, like the choice of the MPI backend in this case... it may actually be impossible, given that the popular distributed training framework Ray, which supports the other two backends, NCCL and Gloo, does not support MPI; see its code:

\n

RuntimeError for Backend.MPI

\n

An example of using Ray for distributed training of PyTorch models with backends other than MPI (source):

\n
import pytorch_lightning as pl\nfrom ray_lightning import RayPlugin\n\n# Create your PyTorch Lightning model here.\nptl_model = MNISTClassifier(...)\nplugin = RayPlugin(num_workers=4, num_cpus_per_worker=1, use_gpu=True)\n\n# If using GPUs, set the ``gpus`` arg to a value > 0.\n# The actual number of GPUs is determined by ``num_workers``.\ntrainer = pl.Trainer(..., gpus=1, plugins=[plugin])\ntrainer.fit(ptl_model)\n
\n", "system": "" }, { "instruction": "Torch.cuda.is_available() keeps switching to False", "input": "", "output": "

The reason torch.cuda.is_available() returns False is an incompatibility between the versions of pytorch and cudatoolkit.

\n

As on Jun-2022, the current version of pytorch is compatible with cudatoolkit=11.3 whereas the current cuda toolkit version = 11.7. Source

\n

Solution:

\n
    \n
  1. Uninstall PyTorch for a fresh installation. You cannot install an old version on top of a new one without forcing the installation (using pip install --upgrade --force-reinstall <package_name>).

    \n
  2. Run conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch to install PyTorch.

    \n
  3. Install CUDA 11.3 from https://developer.nvidia.com/cuda-11.3.0-download-archive.

\n

You are good to go.\n\"torch.cuda.is_available()

\n", "system": "" }, { "instruction": "pytorch grad is None after .backward()", "input": "", "output": "

This is the expected result.

\n\n

.backward accumulates gradients only in the leaf nodes. out is not a leaf node, hence its grad is None.

\n\n

autograd.backward also does the same thing

\n\n

autograd.grad can be used to find the gradient of any tensor w.r.t. any tensor. So if you do autograd.grad(out, out) you get (tensor(1.),) as output, which is as expected.
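A small demonstration of both points, leaf vs. non-leaf and autograd.grad (assuming PyTorch):

```python
import torch

x = torch.ones(2, requires_grad=True)    # leaf tensor
out = (x * 3).sum()                      # non-leaf (created by an operation)

out.backward()
assert x.grad is not None                # leaves accumulate .grad
assert x.grad.tolist() == [3.0, 3.0]
# out.grad stays None: gradients are not retained for non-leaf tensors

out2 = (x * 3).sum()
(g,) = torch.autograd.grad(out2, out2)   # gradient of a tensor w.r.t. itself
assert g.item() == 1.0
```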

\n\n

Ref:

\n\n\n", "system": "" }, { "instruction": "How to multiply a tensor row-wise by a vector in PyTorch?", "input": "", "output": "

You need to add a corresponding singleton dimension:

\n
m * s[:, None]\n
\n

s[:, None] has a size of (12, 1). When multiplying a (12, 10) tensor by a (12, 1) tensor, pytorch knows to broadcast s along the second, singleton dimension and perform the "element-wise" product correctly.
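For example (assuming PyTorch; the shapes match the question):

```python
import torch

m = torch.ones(12, 10)
s = torch.arange(12.0)

out = m * s[:, None]          # (12, 10) * (12, 1) -> broadcast over columns
assert out.shape == (12, 10)
assert out[3].tolist() == [3.0] * 10   # every element of row i is scaled by s[i]

# an equivalent spelling of the same singleton dimension
assert torch.equal(out, m * s.unsqueeze(1))
```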

\n", "system": "" }, { "instruction": "Pytorch: Image label", "input": "", "output": "

The class ImageFolder has an attribute class_to_idx which is a dictionary mapping the name of the class to the index (label). So, you can access the classes with data.classes and for each class get the label with data.class_to_idx.

\n\n

For reference: https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py

\n", "system": "" }, { "instruction": "How to include batch size in pytorch basic example?", "input": "", "output": "

To include batch size in PyTorch basic examples, the easiest and cleanest way is to use PyTorch torch.utils.data.DataLoader and torch.utils.data.TensorDataset.

\n
\n

Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset to enable easy access to the samples.

\n
\n

DataLoader will take care of creating batches for you.

\n

Building on your question, there is a complete code snippet, where we iterate over a dataset of 10000 examples for 2 epochs with a batch size of 64:

\n
import torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\n\n# Create the dataset with N_SAMPLES samples\nN_SAMPLES, D_in, H, D_out = 10000, 1000, 100, 10\n\nx = torch.randn(N_SAMPLES, D_in)\ny = torch.randn(N_SAMPLES, D_out)\n\n# Define the batch size and the number of epochs\nBATCH_SIZE = 64\nN_EPOCHS = 2\n\n# Use torch.utils.data to create a DataLoader \n# that will take care of creating batches \ndataset = TensorDataset(x, y)\ndataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)\n\n# Define model, loss and optimizer\nmodel = torch.nn.Sequential(\n    torch.nn.Linear(D_in, H),\n    torch.nn.ReLU(),\n    torch.nn.Linear(H, D_out),\n)\n\nloss_fn = torch.nn.MSELoss(reduction='sum')\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n# Get the dataset size for printing (it is equal to N_SAMPLES)\ndataset_size = len(dataloader.dataset)\n\n# Loop over epochs\nfor epoch in range(N_EPOCHS):\n    print(f"Epoch {epoch + 1}\\n-------------------------------")\n\n    # Loop over batches in an epoch using DataLoader\n    for id_batch, (x_batch, y_batch) in enumerate(dataloader):\n\n        y_batch_pred = model(x_batch)\n\n        loss = loss_fn(y_batch_pred, y_batch)\n\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n\n        # Every 100 batches, print the loss for this batch\n        # as well as the number of examples processed so far \n        if id_batch % 100 == 0:\n            loss, current = loss.item(), (id_batch + 1)* len(x_batch)\n            print(f"loss: {loss:>7f}  [{current:>5d}/{dataset_size:>5d}]")\n
\n

The output should be something like:

\n
Epoch 1\n-------------------------------\nloss: 643.433716  [   64/10000]\nloss: 648.195435  [ 6464/10000]\nEpoch 2\n-------------------------------\nloss: 613.619873  [   64/10000]\nloss: 625.018555  [ 6464/10000]\n
\n", "system": "" }, { "instruction": "How does pytorch broadcasting work?", "input": "", "output": "

PyTorch broadcasting is based on numpy broadcasting semantics which can be understood by reading numpy broadcasting rules or PyTorch broadcasting guide. Expounding the concept with an example would be intuitive to understand it better. So, please see the example below:

\n
In [27]: t_rand\nOut[27]: tensor([ 0.23451,  0.34562,  0.45673])\n\nIn [28]: t_ones\nOut[28]: \ntensor([[ 1.],\n        [ 1.],\n        [ 1.],\n        [ 1.]])\n
\n

Now for torch.add(t_rand, t_ones), visualize it like:

\n
               # shape of (3,)\n               tensor([ 0.23451,      0.34562,       0.45673])\n      # (4, 1)          | | | |       | | | |        | | | |\n      tensor([[ 1.],____+ | | |   ____+ | | |    ____+ | | |\n              [ 1.],______+ | |   ______+ | |    ______+ | |\n              [ 1.],________+ |   ________+ |    ________+ |\n              [ 1.]])_________+   __________+    __________+\n
\n

which should give the output with tensor of shape (4,3) as:

\n
# shape of (4,3)\nIn [33]: torch.add(t_rand, t_ones)\nOut[33]: \ntensor([[ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673]])\n
\n
\n

Also, note that we get exactly the same result even if we pass the arguments in a reverse order as compared to the previous one:

\n
# shape of (4, 3)\nIn [34]: torch.add(t_ones, t_rand)\nOut[34]: \ntensor([[ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673],\n        [ 1.23451,  1.34562,  1.45673]])\n
\n

Anyway, I prefer the former way of understanding for more straightforward intuitiveness.

\n
\n

For pictorial understanding, I culled out more examples which are enumerated below:

\n

Example-1:

\n

\"broadcasting-1\"

\n
\n

Example-2:

\n

\"theano

\n

T and F stand for True and False respectively and indicate along which dimensions we allow broadcasting (source: Theano).

\n
\n

Example-3:

\n

Here are some shapes where the array b is broadcasted appropriately to attempt to match the shape of the array a.

\n

\"broadcastable

\n

As shown above, the broadcasted b may still not match the shape of a, and so the operation a + b will fail whenever the final broadcasted shapes do not match.

\n", "system": "" }, { "instruction": "PyTorch: Testing with torchvision.datasets.ImageFolder and DataLoader", "input": "", "output": "

Looking at the data from Kaggle and your code, it seems that there are problems in your data loading, both train and test set. First of all, the data should be in a different folder per label for the default PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one class and hence learning seems to be working. You can correct this by using a folder structure like - train/dog, - train/cat, - test/dog, - test/cat and then passing the train and the test folder to the train and test ImageFolder respectively. The training code seems fine, just change the folder structure and you should be good. Take a look at the official documentation of ImageFolder which has a similar example.

\n", "system": "" }, { "instruction": "Creating one hot vector from indices given as a tensor", "input": "", "output": "

NEW ANSWER\nAs of PyTorch 1.1, there is a one_hot function in torch.nn.functional. Given any tensor of indices indices and a maximal index n, you can create a one_hot version as follows:

\n\n
n = 5\nindices = torch.randint(0,n, size=(4,7))\none_hot = torch.nn.functional.one_hot(indices, n) # size=(4,7,n)\n
\n\n

Very old Answer

\n\n

At the moment, slicing and indexing can be a bit of a pain in PyTorch from my experience. I assume you don't want to convert your tensors to numpy arrays. The most elegant way I can think of at the moment is to use sparse tensors and then convert to a dense tensor. That would work as follows:

\n\n
from torch.sparse import FloatTensor as STensor\n\nbatch_size = 4\nseq_length = 6\nfeat_dim = 16\n\nbatch_idx = torch.LongTensor([i for i in range(batch_size) for s in range(seq_length)])\nseq_idx = torch.LongTensor(list(range(seq_length))*batch_size)\nfeat_idx = torch.LongTensor([[5, 3, 2, 11, 15, 15], [1, 4, 6, 7, 3, 3],                            \n                             [2, 4, 7, 8, 9, 10], [11, 12, 15, 2, 5, 7]]).view(24,)\n\nmy_stack = torch.stack([batch_idx, seq_idx, feat_idx]) # indices must be nDim * nEntries\nmy_final_array = STensor(my_stack, torch.ones(batch_size * seq_length), \n                         torch.Size([batch_size, seq_length, feat_dim])).to_dense()    \n\nprint(my_final_array)\n
\n\n

Note: PyTorch is currently undergoing some work that will add numpy-style broadcasting and other functionality within the next two or three weeks. So it's possible there'll be better solutions available in the near future.

\n\n

Hope this helps you a bit.

\n", "system": "" }, { "instruction": "Huggingface: How do I find the max length of a model?", "input": "", "output": "

Perhaps late, but if you haven't found a solution, I think you can use the tokenizer of that model.\nFor instance:

\n
>>> MODEL = "google/flan-t5-xl"\n>>> tokenizer = AutoTokenizer.from_pretrained(MODEL)\n>>> tokenizer.model_max_length\n512\n\n>>> MODEL = "facebook/bart-base"\n>>> tokenizer = AutoTokenizer.from_pretrained(MODEL)\n>>> tokenizer.model_max_length\n1024\n
\n", "system": "" }, { "instruction": "PyTorch `torch.no_grad` vs `torch.inference_mode`", "input": "", "output": "

Yes, torch.inference_mode is indeed preferable to torch.no_grad in all situations where inference mode does not throw a runtime error. Check here.
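A quick illustration of the difference (assuming PyTorch >= 1.9): both contexts disable gradient tracking, but tensors created under inference_mode are additionally barred from ever re-entering autograd:

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2
assert not y.requires_grad

with torch.inference_mode():
    z = x * 2
assert not z.requires_grad
assert z.is_inference()      # marked as an inference tensor

# y can still participate in autograd later; using z here would raise
# the runtime error mentioned above
w = (y * x).sum()
w.backward()
assert x.grad is not None
```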

\n", "system": "" }, { "instruction": "How to parallelize a training loop ever samples of a batch when CPU is only available in pytorch?", "input": "", "output": "

Torch will use multiple CPUs to parallelize operations, so your serial version may already be using multi-core vectorization.

\n

Take this simple example

\n
import torch\nc = 0;\nfor i in range(10000):\n    A = torch.randn(1000, 1000, device='cpu');\n    B = torch.randn(1000, 1000, device='cpu');\n    c += torch.sum(A @ B)\n
\n

No code is needed to parallelize this, yet it uses about 80% of 12 CPUs with the default configuration.

\n

\"enter

\n

You can use torch.set_num_threads to set intraop parallelism on CPU. In particular if you are running multiple process and you want each process to use a single CPU you may want to set in each process the intraop parallelism to 1.

\n

However, parallelizing the operations has a cost. I won't go into the implementation details, but we can run a quick experiment that shows the overhead of using multiple threads.

\n
import matplotlib.pyplot as plt\nimport numpy as np\nimport torch;\nimport time;\nA = torch.randn(1000, 1000, device='cpu');\nB = torch.randn(1000, 1000, device='cpu');\nfuncs = {\n    'sin': lambda a,b: torch.sin(A),\n    'tanh': lambda a,b: torch.tanh(A),\n    'log': lambda a,b: torch.log(A),\n    'matmul': lambda a,b: A @ B.T\n}\nt = np.zeros(20)\nfor k,f in funcs.items():\n    for i in range(1, len(t) + 1):\n        torch.set_num_threads(i)\n        c = 0;\n        t0 = time.time();\n        for _ in range(100):\n            f(A,B)\n        tf = time.time()\n        t[i-1] = (tf - t0)*i;\n    plt.plot(np.arange(1, len(t)+1), t, '-o', label=k)\nplt.xlabel('Number of threads')\nplt.legend()\nplt.ylabel('Core x time')\n
\n

The operations tend to run faster with parallelism\n\"enter

\n

But if we take the total CPU time, by multiplying by the number of threads, we see that the single thread version is more efficient.

\n

\"enter

\n

If you are able to parallelize your experiment at a higher level, by running independent processes, you should try that with a single core for each process; otherwise, each process will try to use all the CPUs and everything will run very slowly because your system is overloaded.

\n

Tweaking DDP example

\n

I intentionally modified the hyperparameters of your example script in a way that favors the multi-process runs.

\n\n
"""\nBased on: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html\n\nNote: as opposed to the multiprocessing (torch.multiprocessing) package, processes can use\ndifferent communication backends and are not restricted to being executed on the same machine.\n"""\nimport torch\nfrom torch import nn, optim\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport argparse\nimport os\n\n# More than one epoch so that the initialization is less significant\n# than compared to the model processing time\nnum_epochs = 10\n# for the experiment select a number that has a lot of divisors\n# as I want to test with equal number of batches\nnum_batches = 16*9*5\n# Uses a larger batch so that more work is done in each process\n# between two gradient synchronizations\n# apparently the intraop optimization is not helping \n# (at least not too much) in the batch dimension\nbatch_size = 10000\n# Use smaller dimensions, so that the intraop parallelization becomes less \n# helpful\nDin, Dout = 3, 5\ndata_x = torch.randn(batch_size, Din)\ndata_y = torch.randn(batch_size, Dout)\ndata = [(i*data_x, i*data_y) for i in range(num_batches)]\n\nclass OneDeviceModel(nn.Module):\n    """\n    Toy example for a model ran in parallel but not distributed accross gpus\n    (only processes with their own gpu or hardware)\n    """\n    def __init__(self):\n        super().__init__()\n        # -- Use more layers\n        self.net = [nn.Linear(Din, Din) for _ in range(10)]\n        # -- Bob: use more complex activation  \n        self.tanh = nn.Tanh()\n        self.sigmoid = nn.Sigmoid()\n        self.relu = nn.ReLU()\n        self.net2 = nn.Linear(Din, Dout)\n\n    def forward(self, x):\n      # apply the 10 layers sequentially\n      for i in range(10):\n        x = self.net[i](x)\n        x = self.sigmoid(x)\n        x = self.tanh(x)\n        x = self.relu(x)\n      return self.net2(x)\n\ndef 
setup_process(rank, world_size, backend='gloo'):\n    """\n    Initialize the distributed environment (for each process).\n\n    gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that\n    it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.\n    """\n    # set up the master's ip address so this child process can coordinate\n    # os.environ['MASTER_ADDR'] = '127.0.0.1'\n    os.environ['MASTER_ADDR'] = 'localhost'\n    os.environ['MASTER_PORT'] = '12355'\n\n    # - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends\n    if torch.cuda.is_available():\n        backend = 'nccl'\n    # Initializes the default distributed process group, and this will also initialize the distributed package.\n    dist.init_process_group(backend, rank=rank, world_size=world_size)\n\ndef cleanup():\n    """ Destroy a given process group, and deinitialize the distributed package """\n    dist.destroy_process_group()\n\ndef run_parallel_training_loop(rank, world_size):\n    """\n    Distributed function to be implemented later.\n\n    This is the function that is actually ran in each distributed process.\n\n    Note: as DDP broadcasts model states from rank 0 process to all other processes in the DDP constructor,\n    you don\u2019t need to worry about different DDP processes start from different model parameter initial values.\n    """\n    print()\n    print(f"Start running DDP with model parallel example on rank: {rank}.")\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    setup_process(rank, world_size)\n    torch.set_num_threads(mp.cpu_count() // world_size)\n    # create model and move it to GPU with id rank\n    model = OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()\n    # ddp_model = DDP(model, device_ids=[rank])\n    
ddp_model = DDP(model)\n    for _ in range(num_epochs):\n      for batch_idx, batch in enumerate(data[rank::world_size]):\n          x, y = batch\n          loss_fn = nn.MSELoss()\n          optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n\n          optimizer.zero_grad()\n          outputs = ddp_model(x)\n          labels = y.to(rank) if torch.cuda.is_available() else y\n          # Gradient synchronization communications take place during the backward pass and overlap with the backward computation.\n          loss_fn(outputs, labels).backward()  # When the backward() returns, param.grad already contains the synchronized gradient tensor.\n          optimizer.step()  # TODO how does the optimizer know to do the gradient step only once?\n\n    print()\n    print(f"Start running DDP with model parallel example on rank: {rank}.")\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    # Destroy a given process group, and deinitialize the distributed package\n    cleanup()\n\ndef main():\n    print()\n    print('running main()')\n    print(f'current process: {mp.current_process()}')\n    print(f'pid: {os.getpid()}')\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--world-size', default=1, type=int)\n    args = parser.parse_args()\n    assert num_batches % args.world_size == 0\n    mp.spawn(run_parallel_training_loop, args=(args.world_size,), nprocs=args.world_size)\n\nif __name__ == "__main__":\n    print('starting __main__')\n    main()\n    print('Done!\\a\\n')\n
\n
$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    0m59.092s\nuser    8m46.589s\nsys     0m7.320s\n\n$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    1m11.124s\nuser    10m54.209s\nsys     0m9.595s\n\n$ time python3 ddp.py --world-size 6 > /dev/null\n\nreal    0m18.348s\nuser    2m28.799s\nsys     0m18.068s\n$ time python3 ddp.py --world-size 12 > /dev/null\n\nreal    0m26.352s\nuser    4m3.074s\nsys     0m39.179s\n$ time python3 ddp.py --world-size 3 > /dev/null\n\nreal    0m23.047s\nuser    3m51.172s\nsys     0m11.483s\n$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m18.195s\nuser    2m55.241s\nsys     0m12.841s\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m26.955s\nuser    4m15.837s\nsys     0m7.127s\n
\n

If I remove the line

\n
torch.set_num_threads(mp.cpu_count() // world_size)\n
\n
$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m40.574s\nuser    6m39.176s\nsys     0m19.025s\n\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m28.066s\nuser    3m17.775s\nsys     0m8.410s\n\n$ time python3 ddp.py --world-size 1 > /dev/null\n\nreal    0m37.114s\nuser    2m19.743s\nsys     0m4.866s\n
\n

Using

\n
torch.set_num_threads(mp.cpu_count() // world_size // 2)\n
\n
$ time python3 ddp.py --world-size 6 > /dev/null\n\nreal    0m16.399s\nuser    1m38.915s\nsys     0m20.780s\n\n$ time python3 ddp.py --world-size 4 > /dev/null\n\nreal    0m15.649s\nuser    1m1.821s\nsys     0m13.589s\n\n$ time python3 ddp.py --world-size 3 > /dev/null\n\nreal    0m16.947s\nuser    1m29.696s\nsys     0m10.069s\n\n$ time python3 ddp.py --world-size 2 > /dev/null\n\nreal    0m21.851s\nuser    2m4.564s\nsys     0m7.486s\n
\n

My Opinion

\n

DDP on a single node does not seem particularly advantageous, unless your model does a lot of work that is poorly handled by PyTorch's intra-op parallelism: large batches and, preferably, models with fewer parameters and more operations (i.e., fewer gradients to synchronize), e.g. a convolutional model on a very large input.

\n

Another scenario where DDP might be helpful is when your model uses too much Python code instead of vectorized operations.

\n", "system": "" }, { "instruction": "Correct way of normalizing and scaling the MNIST dataset", "input": "", "output": "

Euler_Salter

\n

I may have stumbled upon this a little too late, but hopefully I can help a little bit.

\n

Assuming that you are using torchvision.Transform, the following code can be used to normalize the MNIST dataset.

\n
train_loader = torch.utils.data.DataLoader(\n    datasets.MNIST('./data', train=True,\n        transform=transforms.Compose([\n            transforms.ToTensor(),\n            transforms.Normalize((0.1307,), (0.3081,))\n        ])))\n
\n

Usually, 'transforms.ToTensor()' is used to turn input data in the range [0, 255] into a 3-dimensional tensor, automatically scaling the values to the range [0, 1].

\n

Therefore, it makes sense that the mean and std used in 'transforms.Normalize(...)' are 0.1307 and 0.3081, respectively (the mean and standard deviation of the MNIST training set). Normalizing with these values yields data with approximately zero mean and unit standard deviation.

\n
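As an illustrative sketch (using random data in place of the actual MNIST tensors), you can compute a dataset's mean and std yourself and check that normalizing with them yields approximately zero mean and unit standard deviation:

```python
import torch

# Stand-in for 1000 grayscale 28x28 images already scaled to [0, 1] by ToTensor
data = torch.rand(1000, 1, 28, 28)

mean = data.mean()  # for the real MNIST training set this comes out to ~0.1307
std = data.std()    # for the real MNIST training set this comes out to ~0.3081

# This is the per-channel arithmetic that transforms.Normalize performs
normalized = (data - mean) / std

print(normalized.mean())  # ~0
print(normalized.std())   # ~1
```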

Please refer to the link below for better explanation.

\n

https://pytorch.org/vision/stable/transforms.html

\n", "system": "" }, { "instruction": "Is One-Hot Encoding required for using PyTorch's Cross Entropy Loss Function?", "input": "", "output": "

nn.CrossEntropyLoss expects integer labels. Internally, it doesn't one-hot encode the class label at all; it uses the label to index into the output vector of (log-)probabilities when computing the loss. This small but important detail makes the loss cheaper to compute and is equivalent to one-hot encoding: since every value of a one-hot target is zero except at the target class, only the output neuron at that index contributes to the loss. Therefore, there's no need to one-hot encode your data if you already have the integer labels.

\n

The documentation has some more insight on this: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html. In the documentation you'll see targets which serves as part of the input parameters. These are your labels and they are described as:

\n

\"Targets\"

\n

This clearly shows how the input should be shaped and what is expected. If you in fact wanted to one-hot encode your data, you would need to use torch.nn.functional.one_hot. To best replicate what the cross entropy loss is doing under the hood, you'd also need nn.functional.log_softmax as the final output and you'd have to additionally write your own loss layer since none of the PyTorch layers use log softmax inputs and one-hot encoded targets. However, nn.CrossEntropyLoss combines both of these operations together and is preferred if your outputs are simply class labels so there is no need to do the conversion.

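To make the equivalence concrete, here is a small sketch (with made-up logits) showing that the built-in cross entropy with integer labels matches the manual one-hot + log-softmax computation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 3)            # batch of 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 2])  # integer class labels, no one-hot needed

# Built-in loss indexes the log-probabilities with the integer labels
ce = F.cross_entropy(logits, targets)

# Manual equivalent: one-hot targets multiplied with log-softmax outputs
one_hot = F.one_hot(targets, num_classes=3).float()
manual = -(one_hot * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

assert torch.allclose(ce, manual)
```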
\n", "system": "" }, { "instruction": "Parallelization strategies for deep learning", "input": "", "output": "

Training

\n

In general, there are two strategies of parallelizing model training: data parallelism and model parallelism.

\n

1. Data parallelism

\n

This strategy splits training data into N partitions, each of which will be trained on different \u201cdevices\u201d (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients.

\n

One way to do it is by averaging all the N gradients and then updating the model parameters once based on the average. This technique is called synchronous distributed SGD. By averaging, we get a more accurate gradient, but at the cost of waiting for all the devices to finish computing their own local gradients.

\n
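As a toy numeric sketch (made-up gradient values), synchronous SGD averages the per-device gradients and applies a single update:

```python
import torch

# Gradients computed independently on N = 2 devices (made-up values)
device_grads = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])]

avg_grad = torch.stack(device_grads).mean(dim=0)  # tensor([2., 3.])

param = torch.tensor([0.0, 0.0])
lr = 0.1
param = param - lr * avg_grad  # one update per minibatch step
print(param)  # tensor([-0.2000, -0.3000])
```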

Another way is to not combine the gradients: each gradient is instead used to update the model parameters independently. So there will be N parameter updates per minibatch step, in contrast to only one for the previous technique. This technique is called asynchronous distributed SGD. Because it doesn't have to wait for other devices, the async approach completes a minibatch step in less time than the sync approach does. However, the async approach produces noisier gradients, so it might need more minibatch steps to catch up with the performance (in terms of loss) of the sync approach.

\n

There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above.

\n

In the literature there's been some disagreement on which technique is better in practice. In the end, most people have now settled on the synchronous approach.

\n

Data Parallelism in PyTorch

\n

To do synchronous SGD, we can wrap our model with torch.nn.parallel.DistributedDataParallel:

\n
from torch.nn.parallel import DistributedDataParallel as DDP\n\n# `model` is the model we previously initialized\nmodel = ...\n\n# `rank` is a device number starting from 0\nmodel = model.to(rank)\nddp_model = DDP(model, device_ids=[rank])\n
\n

Then we can train it similarly. For more details, you can refer to the official tutorial.

\n

For doing asynchronous SGD in PyTorch, we need to implement it more manually since there is no wrapper similar to DistributedDataParallel for it.

\n

Data Parallelism in TensorFlow/Keras

\n

For synchronous SGD, we can use tf.distribute.MirroredStrategy to wrap the model initialization:

\n
import tensorflow as tf\n\nstrategy = tf.distribute.MirroredStrategy()\nwith strategy.scope():\n    model = Model(...)\n    model.compile(...)\n
\n

Then we can train it as usual. For more details, you can refer to the official guides on Keras website and TensorFlow website.

\n

For asynchronous SGD, we can use tf.distribute.experimental.ParameterServerStrategy similarly.

\n

2. Model Parallelism

\n

This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture.

\n

Model Parallelism in TensorFlow and PyTorch

\n

To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device.

\n

In PyTorch we can use torch.nn.Module.to method to move a module into a different device. For example, suppose we want to create two linear layers each of which is placed on a different GPU:

\n
import torch.nn as nn\n\nlinear1 = nn.Linear(16, 8).to('cuda:0')\nlinear2 = nn.Linear(8, 4).to('cuda:1')\n
\n

In TensorFlow we can use tf.device to place an operation into a specific device. To implement the PyTorch example above in TensorFlow:

\n
import tensorflow as tf\nfrom tensorflow.keras import layers\n\nwith tf.device('/GPU:0'):\n    linear1 = layers.Dense(8, input_dim=16)\nwith tf.device('/GPU:1'):\n    linear2 = layers.Dense(4, input_dim=8)\n
\n

For more details you can refer to the official PyTorch tutorial; or if you use TensorFlow you can even use a more high-level library like mesh.

\n

3. Hybrid: Data and Model Parallelism

\n

Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both.

\n

In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized.

\n

Model Inference / Serving

\n

Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to workaround Python's GIL) in a single machine, or even spawning more machine instances.

\n

When we use a GPU to serve the model, though, we need to do more work to scale it. Because a GPU handles concurrency differently than a CPU, to maximize performance we need to batch inference requests. The idea is that when a request comes in, instead of processing it immediately, we wait some timeout duration for other requests to arrive. When the timeout is up, even if only one request has arrived, we batch them all to be processed on the GPU.

\n

To minimize the average request latency, we need to find the optimal timeout duration. To find it, observe that there is a trade-off between minimizing the timeout and maximizing the batch size: if the timeout is too low, batches will be small and the GPU will be underutilized; if it is too high, the requests that come early will wait too long before being processed. So the optimal timeout depends on the model complexity (hence, the inference duration) and the average number of requests received per second.

\n

Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use TensorFlow Serving or TorchServe, which already support it.

\n
\n

To learn more about parallel and distributed learning, you can read this review paper.

\n", "system": "" }, { "instruction": "net.zero_grad() vs optim.zero_grad() pytorch", "input": "", "output": "

net.zero_grad() sets the gradients of all its parameters (including parameters of submodules) to zero. If you call optim.zero_grad() that will do the same, but for all parameters that have been specified to be optimised. If you are using only net.parameters() in your optimiser, e.g. optim = Adam(net.parameters(), lr=1e-3), then both are equivalent, since they contain the exact same parameters.

\n\n

You could have other parameters that are being optimised by the same optimiser, which are not part of net, in which case you would either have to manually set their gradients to zero and therefore keep track of all the parameters, or you can simply call optim.zero_grad() to ensure that all parameters that are being optimised, had their gradients set to zero.

\n\n
\n

Moreover, what happens if I do both?

\n
\n\n

Nothing, the gradients would just be set to zero again, but since they were already zero, it makes absolutely no difference.

\n\n
\n

If I do none, then the gradients get accumulated, but what does that exactly mean? do they get added?

\n
\n\n

Yes, they are being added to the existing gradients. In the backward pass the gradients with respect to every parameter are calculated, and then each gradient is added to the parameter's existing gradient (param.grad). That allows you to have multiple backward passes that affect the same parameters, which would not be possible if the gradients were overwritten instead of added.

\n\n

For example, you could accumulate the gradients over multiple batches, if you need bigger batches for training stability but don't have enough memory to increase the batch size. This is trivial to achieve in PyTorch, which is essentially leaving off optim.zero_grad() and delaying optim.step() until you have gathered enough steps, as shown in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups.

\n\n
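A minimal sketch of that accumulation pattern (the model, data, and step counts are made up for illustration):

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)
accumulation_steps = 4  # effective batch = 4 small batches

optimizer.zero_grad()
for step in range(8):
    x, y = torch.randn(16, 10), torch.randn(16, 2)
    # scale so the accumulated gradient matches one big-batch gradient
    loss = nn.functional.mse_loss(model(x), y) / accumulation_steps
    loss.backward()  # gradients accumulate into param.grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # apply the accumulated gradient
        optimizer.zero_grad()  # reset only after the effective batch
```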

That flexibility comes at the cost of having to manually set the gradients to zero. Frankly, one line is a very small cost to pay, even though many users won't make use of it and especially beginners might find it confusing.

\n", "system": "" }, { "instruction": "PyTorch RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn", "input": "", "output": "

Following worked for me:

\n
loss.requires_grad = True\nloss.backward()\n
\n", "system": "" }, { "instruction": "BertForSequenceClassification vs. BertForMultipleChoice for sentence multi-class classification", "input": "", "output": "

The answer to this lies in the (admittedly very brief) description of what the tasks are about:

\n\n
\n

[BertForMultipleChoice] [...], e.g. for RocStories/SWAG tasks.

\n
\n\n

When looking at the paper for SWAG, it seems that the task is actually learning to choose from varying options. This is in contrast to your \"classical\" classification task, in which the \"choices\" (i.e., classes) do not vary across your samples, which is exactly what BertForSequenceClassification is for.

\n\n

Both variants can in fact handle an arbitrary number of classes (for BertForSequenceClassification), respectively choices (for BertForMultipleChoice), by changing the labels parameter in the config. But since you seem to be dealing with a case of \"classical classification\", I suggest using the BertForSequenceClassification model.

\n\n

Briefly addressing the missing Softmax in BertForSequenceClassification: since in classification the set of classes is independent of the sample (unlike multiple choice, where the options change), you can use Cross-Entropy Loss, which factors Softmax into the backpropagation step for increased numerical stability.

\n", "system": "" }, { "instruction": "How to use autograd.gradcheck in PyTorch?", "input": "", "output": "

There's an example use case provided in the documentation here:

\n\n

https://pytorch.org/docs/master/notes/extending.html

\n\n
\n

You probably want to check if the backward method you implemented\n actually computes the derivatives of your function. It is possible by\n comparing with numerical approximations using small finite\n differences:

\n \n
from torch.autograd import gradcheck\n\n# gradcheck takes a tuple of tensors as input, check if your gradient\n# evaluated with these tensors are close enough to numerical\n# approximations and returns True if they all verify this condition.\ninput = (torch.randn(20,20,dtype=torch.double,requires_grad=True), torch.randn(30,20,dtype=torch.double,requires_grad=True))\ntest = gradcheck(linear, input, eps=1e-6, atol=1e-4)\nprint(test)\n
\n
\n\n

As the quote above suggests, the purpose of the gradcheck function is to verify that a custom backward function agrees with a numerical approximation of the gradient. The primary use case is when you're implementing a custom backward operation. In very few cases should you be implementing your own backward function in PyTorch. This is because PyTorch's autograd functionality takes care of computing gradients for the vast majority of operations.

\n\n

The most obvious exceptions are

\n\n
    \n
  1. You have a function which can't be expressed as a finite combination of other differentiable functions (for example, if you needed the incomplete gamma function, you might want to write your own forward and backward which used numpy and/or lookup tables).

  2. \n
  3. You're looking to speed up the computation of a particularly complicated expression for which the gradient could be drastically simplified after applying the chain rule.

  4. \n
\n", "system": "" }, { "instruction": "I can't load my model because I can't put a PosixPath", "input": "", "output": "

Just redirect PosixPath to WindowsPath.

\n
import pathlib\ntemp = pathlib.PosixPath\npathlib.PosixPath = pathlib.WindowsPath\n
\n

I am also loading fastai models and this trick works.

\n

IMPORTANT: Since this might cause issues later, remember to set pathlib.PosixPath = temp when done.

\n", "system": "" }, { "instruction": "How to solve the run time error "Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment"", "input": "", "output": "

I ran into a similar problem once. In brief, the mistake is caused by deepcopy, which is not suitable for non-leaf nodes. You can print Rs, R1 and C1 to check whether they are leaf nodes.

\n\n

If they are leaf nodes, they show \"requires_grad=True\" but not \"grad_fn=SliceBackward\" or \"grad_fn=CopySlices\". Non-leaf nodes have a grad_fn, which is used to propagate gradients.

\n\n
#---------------------------------------------------------------------------------\n>>>import torch\n>>>q = torch.nn.Parameter(torch.Tensor(3,3))\n>>>q\nParameter containing:\ntensor([[8.7551e-37, 0.0000e+00, 0.0000e+00],\n        [0.0000e+00, 0.0000e+00, 0.0000e+00],\n        [0.0000e+00, 0.0000e+00, 0.0000e+00]], requires_grad=True)\n#q is leaf node\n>>>p = q[0,:]\n>>>p\ntensor([8.7551e-37, 0.0000e+00, 0.0000e+00], grad_fn=<SliceBackward>)\n#p is non-leaf node\n>>>q[0,0] = 0\n>>>q\nParameter containing:\ntensor([[0., 0., 0.],\n        [0., 0., 0.],\n        [0., 0., 0.]], grad_fn=<CopySlices>)\n#if slice operation is made on q, q becomes non-leaf node. The deepcopy is not suitable for q any more.\n#-----------------------------------------------------------------------------\n
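As a sketch of a common workaround (not from the original answer): detach and clone the non-leaf tensor before deep-copying it, which turns it back into a graph leaf:

```python
import copy
import torch

q = torch.nn.Parameter(torch.zeros(3, 3))  # leaf node
p = q[0, :]                                # non-leaf: carries a grad_fn

try:
    copy.deepcopy(p)                       # fails for non-leaf tensors
except RuntimeError as e:
    print(e)  # "Only Tensors created explicitly by the user ..."

p_copy = copy.deepcopy(p.detach().clone())  # detached leaf: deepcopy works
print(p_copy.requires_grad)                 # False
```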
\n", "system": "" }, { "instruction": "Reproducibility and performance in PyTorch", "input": "", "output": "

Performance refers to the run time. CuDNN has several implementations for many operations; when cudnn.deterministic is set to true, you're telling CuDNN to use only the deterministic implementations (or those believed to be deterministic). In a nutshell, you should then expect the same results on the CPU or the GPU, on the same system, when feeding the same inputs. Why would it affect performance? CuDNN uses heuristics to choose an implementation, so how it behaves depends on your model; forcing it to be deterministic may affect the runtime, because a faster non-deterministic implementation might otherwise have been chosen for the same operation.

\n\n
\n\n

Concerning your snippet, I use exactly that seeding, and it has been working well (in terms of reproducibility) for 100+ DL experiments.

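The question's snippet is not reproduced here; as an illustrative sketch, a typical "seed everything" setup looks like the following (the helper name is made up):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)                    # also seeds all GPU generators
    torch.backends.cudnn.deterministic = True  # only deterministic CuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable autotuning heuristics

seed_everything(0)
a = torch.randn(3)
seed_everything(0)
b = torch.randn(3)
print(torch.equal(a, b))  # True
```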
\n", "system": "" }, { "instruction": "How does one dynamically add new parameters to optimizers in Pytorch?", "input": "", "output": "

Just to add an answer to the title of your question: \"How does one dynamically add new parameters to optimizers in Pytorch?\"

\n\n

You can append params at any time to the optimizer:

\n\n
import torch\nimport torch.optim as optim\n\nmodel = torch.nn.Linear(2, 2)\n\n# Initialize optimizer (note: Adam takes no momentum argument; it uses betas)\noptimizer = optim.Adam(model.parameters(), lr=0.001)\n\nextra_params = torch.randn(2, 2, requires_grad=True)\noptimizer.param_groups.append({'params': extra_params})\n\n# then you can print your `extra_params`\nprint(\"extra params\", extra_params)\nprint(\"optimizer params\", optimizer.param_groups)\n
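Note that PyTorch also provides a documented method for this, optimizer.add_param_group, which fills in the optimizer's defaults (like lr) for the new group; a small sketch:

```python
import torch
import torch.optim as optim

model = torch.nn.Linear(2, 2)
optimizer = optim.Adam(model.parameters(), lr=0.001)

extra = torch.nn.Parameter(torch.randn(2, 2))
optimizer.add_param_group({'params': [extra]})  # inherits lr=0.001 and Adam defaults

print(len(optimizer.param_groups))      # 2
print(optimizer.param_groups[1]['lr'])  # 0.001
```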
\n", "system": "" }, { "instruction": "Why there are different output between model.forward(input) and model(input)", "input": "", "output": "

model.forward just calls the forward operations as you mention but __call__ does a little extra.

\n\n

If you dig into the code of the nn.Module class you will see that __call__ ultimately calls forward, but internally handles forward and backward hooks and manages some state that PyTorch allows. When calling a simple model like an MLP this may not really be needed, but more complex models, e.g. ones using spectral normalization layers, rely on hooks, so you should use the model(.) signature as much as possible unless you explicitly want to call model.forward.

\n\n
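A small sketch illustrating that forward hooks only run through __call__, not through a direct forward call:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
calls = []
# hook fires every time the module is invoked via __call__
model.register_forward_hook(lambda module, inputs, output: calls.append(1))

x = torch.randn(2, 3)
model(x)           # __call__ runs the hook
model.forward(x)   # bypasses hook handling entirely
print(len(calls))  # 1
```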

Also see Calling forward function without .forward()

\n\n

In this case, however, the difference may be due to a dropout layer; you should call vgg.eval() to make sure all the stochasticity in the network is turned off before comparing the outputs.

\n", "system": "" }, { "instruction": "Output and Broadcast shape mismatch in MNIST, torchvision", "input": "", "output": "

The error is due to a color vs. grayscale mismatch: the dataset is grayscale (a single channel), while the Normalize transform was given statistics for three channels.

\n\n

I fixed it by changing transform to

\n\n
transform = transforms.Compose([transforms.ToTensor(),\n  transforms.Normalize((0.5,), (0.5,))\n])\n
\n", "system": "" }, { "instruction": "How to batch convert sentence lengths to masks in PyTorch?", "input": "", "output": "

One way that I found is:

\n\n
torch.arange(max_len).expand(len(lens), max_len) < lens.unsqueeze(1)\n
\n\n
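A concrete worked example of the expression above, with made-up lengths:

```python
import torch

lens = torch.tensor([2, 4, 1])  # sentence lengths
max_len = 4

# position index < length  ->  True for valid tokens, False for padding
mask = torch.arange(max_len).expand(len(lens), max_len) < lens.unsqueeze(1)
print(mask)
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True],
#         [ True, False, False, False]])
```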

Please share if there are better ways!

\n", "system": "" }, { "instruction": "What does the underscore suffix in PyTorch functions mean?", "input": "", "output": "

You have already answered your own question that the underscore indicates in-place operations in PyTorch. However I want to point out briefly why in-place operations can be problematic:

\n\n\n\n

Here is a short and slightly modified example taken from the answer you've posted:

\n\n

First the in-place version:

\n\n
import torch\na = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\nadding_tensor = torch.rand(3)\nb = a.add_(adding_tensor)\nc = torch.sum(b)\nc.backward()\nprint(c.grad_fn)\n
\n\n

Which leads to this error:

\n\n
---------------------------------------------------------------------------\nRuntimeError                              Traceback (most recent call last)\n<ipython-input-27-c38b252ffe5f> in <module>\n      2 a = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\n      3 adding_tensor = torch.rand(3)\n----> 4 b = a.add_(adding_tensor)\n      5 c = torch.sum(b)\n      6 c.backward()\n\nRuntimeError: a leaf Variable that requires grad has been used in an in-place operation.\n
\n\n
\n\n

Secondly the non in-place version:

\n\n
import torch\na = torch.tensor([2, 4, 6], requires_grad=True, dtype=torch.float)\nadding_tensor = torch.rand(3)\nb = a.add(adding_tensor)\nc = torch.sum(b)\nc.backward()\nprint(c.grad_fn)\n
\n\n

Which works just fine - output:

\n\n
<SumBackward0 object at 0x7f06b27a1da0>\n
\n\n
\n\n

So as a take-away I just wanted to point out to carefully use in-place operations in PyTorch.

\n", "system": "" }, { "instruction": "PyTorch CUDA vs Numpy for arithmetic operations? Fastest?", "input": "", "output": "

GPU operations have to additionally transfer data to/from the GPU

\n

The problem is that your GPU operation always has to copy the input to GPU memory and\nthen retrieve the results from there, which is quite a costly operation.

\n

NumPy, on the other hand, directly processes the data from the CPU/main memory, so there is almost no delay here. Additionally, your matrices are extremely small, so even in the best-case scenario, there should only be a minute difference.

\n

This is also partially the reason why you use mini-batches when training on a GPU in neural networks: Instead of having several extremely small operations, you now have "one big bulk" of numbers that you can process in parallel.
\nAlso note that GPU clock speeds are generally way lower than CPU clocks, so the GPU only really shines because it has way more cores. If your matrix does not utilize all of them fully, you are also likely to see a faster result on your CPU.

\n
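A rough timing sketch (matrix sizes are arbitrary); note that CUDA kernels launch asynchronously, so torch.cuda.synchronize() is needed for a fair measurement:

```python
import time
import torch

def bench_matmul(device: str, n: int, iters: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == 'cuda':
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(iters):
        a @ b
    if device == 'cuda':
        torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    return time.time() - t0

print('cpu :', bench_matmul('cpu', 1000))
if torch.cuda.is_available():
    # typically wins only once the matrices are large enough
    print('cuda:', bench_matmul('cuda', 1000))
```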

TL;DR: If your matrices are big enough, you will eventually see a speed-up with CUDA over NumPy, even with the additional cost of the GPU transfer.

\n", "system": "" }, { "instruction": "PyTorch NotImplementedError in forward", "input": "", "output": "

please look carefully at the indentation of your __init__ function: your forward is part of __init__ not part of your module.

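A sketch of the wrong vs. corrected indentation:

```python
import torch.nn as nn

# Wrong: forward is nested inside __init__, so it is a local function that is
# never attached to the class -- calling the module raises NotImplementedError
class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

        def forward(self, x):
            return self.fc(x)

# Right: forward is a method of the class, at the same level as __init__
class Fixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)
```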
\n", "system": "" }, { "instruction": "How to free up all memory pytorch is taken from gpu memory", "input": "", "output": "

Try deleting the object with del and then calling torch.cuda.empty_cache(). The reusable memory will be freed by this operation.

\n", "system": "" }, { "instruction": "In-place operations with PyTorch", "input": "", "output": "

I am not sure how much in-place operations affect performance, but I can address the second query: you can use a mask instead of in-place ops.

\n\n
import numpy as np\nimport torch\n\na = torch.rand(2, requires_grad=True)\nprint('a ', a)\nb = torch.rand(2)\n\n# calculation\nc = a + b\n\n# masking instead of the in-place operation\nmask = np.zeros(2)\nmask[1] = 1\nmask = torch.tensor(mask)\nc = c * mask\n...\n
\n", "system": "" }, { "instruction": "Higher order gradients in pytorch", "input": "", "output": "

You can just iterate calling the grad function:

\n\n
import torch\nfrom torch.autograd import grad\n\ndef nth_derivative(f, wrt, n):\n\n    for i in range(n):\n\n        grads = grad(f, wrt, create_graph=True)[0]\n        f = grads.sum()\n\n    return grads\n\nx = torch.arange(4, requires_grad=True).reshape(2, 2)\nloss = (x ** 4).sum()\n\nprint(nth_derivative(f=loss, wrt=x, n=3))\n
\n\n

outputs

\n\n
tensor([[  0.,  24.],\n        [ 48.,  72.]])\n
\n", "system": "" }, { "instruction": "pytorch, AttributeError: module 'torch' has no attribute 'Tensor'", "input": "", "output": "

The Python binary that you are running does not have torch installed. It does have a directory named torch on the module search path, and it is treated as a namespace package:

\n\n
$ pwd\n/some/path\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\nTraceback (most recent call last):\n  File \"<string>\", line 1, in <module>\nModuleNotFoundError: No module named 'torch'\n$ mkdir torch\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' (namespace)>\n_NamespacePath(['/some/path/torch'])\n
\n\n

Any directory without a __init__.py file present in it, located on your module search path, will be treated as a namespace, provided no other Python modules or packages by that name are found anywhere else along the search path.

\n\n

This means that if torch was installed for your Python binary, it doesn't matter if there is a local torch directory:

\n\n
$ ls -ld torch/\ndrwxr-xr-x  2 mjpieters users  68 Nov 23 13:57 torch/\n$ mkdir -p additional_path/torch/\n$ touch additional_path/torch/__init__.py\n$ PYTHONPATH=\"./additional_path\" python3 -c 'import os.path as p, sys; print(*(t for t in (p.join(e, \"torch\") for e in sys.path) if p.exists(t)), sep=\"\\n\")'\ntorch\n/some/path/additional_path/torch\n$ PYTHONPATH=\"./additional_path\" python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' from '/some/path/additional_path/torch/__init__.py'>\n['/some/path/additional_path/torch']\n
\n\n

The above shows that sys.path lists the torch directory first, followed by additional_path/torch, but the latter is loaded as the torch module when you try to import it. That's because Python gives priority to top-level modules and packages before loading a namespace package.

\n\n

You need to install torch correctly for your current Python binary, see the project homepage; when using pip you may want to use the Python binary with the -m switch instead:

\n\n
python3.5 -m pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl \npython3.5 -m pip install torchvision\n
\n\n

So replace the pip3 the homepage instructions use with python3.5 -m pip; python3.5 can also be the full path to your Python binary.

\n\n

Do use the correct download.pytorch.org URL for the latest version.

\n\n

You don't have to move the directory aside, but if you do want to and don't know where it is located, use print(torch.__path__) as I've shown above.

\n\n

Again, note that if you do have an __init__.py file in a local torch directory, it becomes a regular package and it'll mask packages installed by pip into the normal site-packages location. If you have such a package, or a local torch.py single-file module, you need to rename those. The diagnostic information looks different in that case:

\n\n
$ pwd\n/some/path\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\nTraceback (most recent call last):\n  File \"<string>\", line 1, in <module>\nModuleNotFoundError: No module named 'torch'\n$ mkdir torch\n$ touch torch/__init__.py  # make it a package\n$ python3 -c 'import torch; print(torch); print(torch.__path__)'\n<module 'torch' from '/some/path/torch/__init__.py'>\n['/some/path/torch']\n$ rm -rf torch/\n$ touch torch.py           # make it a module\n$ python3 -c 'import torch; print(torch); print(torch.__file__)'\n<module 'torch' from '/some/path/torch.py'>\n/some/path/torch.py\n
\n\n

Note the differences; a namespace package, above, uses <module 'name' (namespace)>, while a regular package uses <module 'name' from '/path/to/name/__init__.py'>, and a plain module uses <module 'name' from '/path/to/name.py'>.

\n\n

Such packages and modules (not namespace packages) are found first and stop the search. If the found package or module is not the one you wanted, you need to move them aside or rename them.

\n", "system": "" }, { "instruction": "Is there a function to extract image patches in PyTorch?", "input": "", "output": "

Unfortunately, there might not be a direct way to achieve your goal.
\nBut the Tensor.unfold function might be a solution.
\nhttps://discuss.pytorch.org/t/how-to-extract-smaller-image-patches-3d/16837/2
\nThis website might help you.

\n", "system": "" }, { "instruction": "Include .whl installation in requirements.txt", "input": "", "output": "

You can use environment markers:

\n\n
http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl ; sys_platform == \"linux\"\nhttp://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl ; sys_platform == \"linux2\"\nhttp://download.pytorch.org/whl/torch-0.1.12.post2-cp27-none-macosx_10_7_x86_64.whl ; sys_platform == \"darwin\"\ntorchvision\n
\n\n

(Double Linux entries: linux2 for Python 2, linux for Python 3.)

\n", "system": "" }, { "instruction": "Unique values in PyTorch tensor", "input": "", "output": "

There is a torch.unique() method in 0.4.0

\n\n

In torch <= 0.3.1 you can try:

\n\n
import torch\nimport numpy as np\n\nx = torch.rand((3,3)) * 10\nnp.unique(x.round().numpy())\n
\n", "system": "" }, { "instruction": "Pytorch: Convert FloatTensor into DoubleTensor", "input": "", "output": "

Your numpy arrays are 64-bit floating point and will be converted to torch.DoubleTensor by default. Now, if you use them with your model, you'll need to make sure that your model parameters are also Double. Alternatively, make sure your numpy arrays are cast to Float, because model parameters are float by default.

\n\n

Hence, do either of the following:

\n\n
data_utils.TensorDataset(torch.from_numpy(X).float(), torch.from_numpy(Y).float())\n
\n\n

or do:

\n\n
model.double()\n
\n\n

Depending on whether you want to cast your model parameters, inputs and targets as Float or as Double.
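A quick demonstration of the default dtypes involved (the shape here is arbitrary):

```python
import numpy as np
import torch

x = np.zeros(3)                        # numpy defaults to float64
t = torch.from_numpy(x)
print(t.dtype)                         # torch.float64 (a "DoubleTensor")

t_float = torch.from_numpy(x).float()  # cast to float32 to match model params
print(t_float.dtype)                   # torch.float32
```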

\n", "system": "" }, { "instruction": "PyTorch on M1 Mac: RuntimeError: Placeholder storage has not been allocated on MPS device", "input": "", "output": "

A possible issue with your code may be that you are not sending the inputs to the device inside your training loop. You should send both the model and the inputs to the device, as you can read about in this blog post.

\n

An example code would be the following:

\n
import torch\nfrom tqdm import tqdm\n\ndef train(model, train_loader, device, *args):\n    model.train()\n\n    for it, batch in tqdm(enumerate(train_loader), desc="Epoch %s: " % (epoch), total=train_loader.__len__()):\n        batch = {'data': batch['data'].to(device), 'labels': batch['labels'].to(device)}\n\n        # perform training\n        ...\n\n# set model and device\nmodel = MyWonderfulModel(*args)\ndevice = torch.device("mps" if torch.backends.mps.is_available() else "cpu")\nmodel.to(device)\n\n# call training function\ntrain(model, train_loader, device, *args)\n
\n

Running such training function on my M1 Mac works using MPS.

\n", "system": "" }, { "instruction": "How can I handle this datasets to create a datasetDict?", "input": "", "output": "

To get the validation dataset, you can do it like this:

\n
train_dataset, validation_dataset= train_dataset.train_test_split(test_size=0.1).values()\n
\n

This splits off 10% of the train dataset to form the validation dataset.

\n

And to obtain a "DatasetDict", you can do it like this:

\n
import datasets\ndd = datasets.DatasetDict({"train":train_dataset,"test":test_dataset})\n
\n", "system": "" }, { "instruction": "Calling the forward method in PyTorch vs. calling the model instance", "input": "", "output": "

You should avoid calling Module.forward.\nThe difference is that all the hooks are dispatched in the __call__ function (see this), so if you call .forward and have hooks in your model, the hooks won't have any effect.

\n

In short, when you call Module.forward, PyTorch hooks won't have any effect.

\n

Detailed answer can be found in this post
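A small sketch demonstrating the difference (the model and hook here are arbitrary examples):

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
calls = []
# a forward hook that records each invocation
model.register_forward_hook(lambda module, inp, out: calls.append(1))

x = torch.randn(4, 2)
model(x)           # __call__ dispatches the hook
model.forward(x)   # bypasses __call__, so the hook is skipped

print(len(calls))  # 1
```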

\n", "system": "" }, { "instruction": "Difference between torch.flatten() and nn.Flatten()", "input": "", "output": "

Flattening is available in three forms in PyTorch

\n\n

All three - torch.flatten(), torch.Tensor.flatten(), and torch.nn.Flatten() - share the same implementation; the only difference is that nn.Flatten has start_dim set to 1 by default to avoid flattening the first axis (usually the batch axis), while the other two flatten from axis=0 to axis=-1 - i.e. the entire tensor - if no arguments are given.
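A short sketch of the three forms side by side (the tensor shape is arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(8, 3, 4)       # (batch, ...) shaped tensor

print(torch.flatten(x).shape)  # torch.Size([96])   - flattens everything
print(x.flatten().shape)       # torch.Size([96])   - same, as a method
print(nn.Flatten()(x).shape)   # torch.Size([8, 12]) - keeps the batch axis
```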

\n", "system": "" }, { "instruction": "PyTorch - RuntimeError: [enforce fail at inline_container.cc:209] . file not found: archive/data.pkl", "input": "", "output": "

In my case, my disk drive was full. Clear some space and try again, deleting any partial files if necessary.

\n", "system": "" }, { "instruction": "Proper Usage of PyTorch's non_blocking=True for Data Prefetching", "input": "", "output": "

I think where you are off is that output = model(images) is a synchronization point. It seems the computation is handled by a different part of a GPU. Quote from official PyTorch docs:

\n
\n

Also, once you pin a tensor or storage, you can use asynchronous GPU\ncopies. Just pass an additional non_blocking=True argument to a\nto() or a cuda() call. This can be used to overlap data\ntransfers with computation.

\n
\n", "system": "" }, { "instruction": "Pytorch: Weight in cross entropy loss", "input": "", "output": "

To compute the class weights of your classes, use sklearn.utils.class_weight.compute_class_weight(class_weight, *, classes, y); read it here
\nThis will return you an array of weights.
\ne.g.

\n
import numpy as np\nimport torch\nfrom sklearn.utils import class_weight\n\nx = torch.randn(20, 5)\ny = torch.randint(0, 5, (20,))  # classes\nclass_weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(y.numpy()), y=y.numpy())\nclass_weights = torch.tensor(class_weights, dtype=torch.float)\n\nprint(class_weights)  # e.g. tensor([1.0000, 1.0000, 4.0000, 1.0000, 0.5714])\n
\n

Then pass it to nn.CrossEntropyLoss's weight variable

\n
criterion = nn.CrossEntropyLoss(weight=class_weights,reduction='mean')\n\nloss = criterion(...)\n
\n", "system": "" }, { "instruction": "Difficulty in understanding the tokenizer used in Roberta model", "input": "", "output": "

This question is extremely broad, so I'm trying to give an answer that focuses on the main problem at hand. If you feel the need to have other questions answered, please open another question focusing on one question at a time, see the [help/on-topic] rules for Stackoverflow.

\n\n

Essentially, as you've correctly identified, BPE is central to any tokenization in modern deep networks. I highly recommend reading the original BPE paper by Sennrich et al., in which they also highlight a bit more of the history of BPEs.\nIn any case, the tokenizers for any of the huggingface models are pretrained, meaning that they are usually generated from the training set of the algorithm beforehand. Common implementations such as SentencePiece also give a bit better understanding of it, but essentially the task is framed as a constrained optimization problem, where you specify a maximum number of k allowed vocabulary words (the constraint), and the algorithm tries to keep as many words intact as possible without exceeding k.
\nIn any case, the tokenizers for any of the huggingface models are pretrained, meaning that they are usually generated from the training set of the algorithm beforehand. Common implementations such as SentencePiece also give a bit better understanding of it, but essentially the task is framed as a constrained optimization problem, where you specify a maximum number of k allowed vocabulary words (the constraint), and the algorithm tries to then keep as many words intact without exceeding k.

\n\n

If there are not enough words to cover the whole vocabulary, smaller units are used to approximate the vocabulary, which results in the splits observed in the example you gave.\nRoBERTa uses a variant called \"byte-level BPE\"; the best explanation is probably given in this study by Wang et al. The main benefit is that it results in a smaller vocabulary while maintaining the quality of splits, from what I understand.

\n\n

The second part of your question is easier to explain; while BERT highlights the merging of two subsequent tokens (with ##), RoBERTa's tokenizer instead highlights the start of a new token with a specific unicode character (in this case, \\u0120, the G with a dot). The best reason I could find for this was this thread, which argues that it basically avoids the use of whitespaces in training.

\n", "system": "" }, { "instruction": "How to calculate unbalanced weights for BCEWithLogitsLoss in pytorch", "input": "", "output": "

The PyTorch documentation for BCEWithLogitsLoss recommends the pos_weight to be a ratio between the negative counts and the positive counts for each class.

\n

So, if len(dataset) is 1000 and element 0 of your multihot encoding has 100 positive counts, then element 0 of the pos_weights_vector should be 900/100 = 9. That means that the binary cross-entropy loss will behave as if the dataset contained 900 positive examples instead of 100.

\n

Here is my implementation:

\n

(new, based on this post)

\n

pos_weight = (y==0.).sum()/y.sum()

\n

(original)

\n
import numpy as np\nimport torch\n\ndef calculate_pos_weights(class_counts):\n    # data is the full dataset (len(data) == total number of samples)\n    pos_weights = np.ones_like(class_counts, dtype=np.float64)\n    neg_counts = [len(data) - pos_count for pos_count in class_counts]\n    # zip first, then enumerate, so each pair unpacks correctly\n    for cdx, (pos_count, neg_count) in enumerate(zip(class_counts, neg_counts)):\n        pos_weights[cdx] = neg_count / (pos_count + 1e-5)\n\n    return torch.as_tensor(pos_weights, dtype=torch.float)\n
\n

Where class_counts is just a column-wise sum of the positive samples. I posted it on the PyTorch forum and one of the PyTorch devs gave it his blessing.

\n", "system": "" }, { "instruction": "What does "RuntimeError: CUDA error: device-side assert triggered" in PyTorch mean?", "input": "", "output": "

When I shifted my code to work on CPU instead of GPU, I got the following error:

\n

IndexError: index 128 is out of bounds for dimension 0 with size 128

\n

So there might simply be an indexing mistake in the code which, for some strange reason, surfaces as a CUDA error.

\n", "system": "" }, { "instruction": "ModuleNotFoundError: No module named 'torch._C'", "input": "", "output": "

In my case, I installed the dependency using pip install and it didn't work. I reinstalled torch using conda install instead and it seemed to work!

\n", "system": "" }, { "instruction": "Why does PyTorch not find my NVDIA drivers for CUDA support?", "input": "", "output": "

Since you had two graphic cards, selecting a card ID CUDA_VISIBLE_DEVICES=GPU_ID should fix the problem as per this explanation.
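For illustration, a minimal sketch of setting the variable from Python (the GPU id 0 is just an example; IDs follow nvidia-smi ordering, and the variable should be set before CUDA is first used):

```python
import os

# Restrict this process to the first GPU. This must happen before
# CUDA is initialized, i.e. before the first .cuda()/.to("cuda") call.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # imported after setting the variable in this sketch

print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```

Alternatively, set it in the shell when launching the script, e.g. CUDA_VISIBLE_DEVICES=0 python your_script.py.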

\n", "system": "" }, { "instruction": "Calling cuda() with async results in SyntaxError", "input": "", "output": "

Your code does not work because:

\n\n
\n

cuda(device=None, non_blocking=False) \u2192 Tensor

\n
\n\n

Use non_blocking instead:

\n

The argument non_blocking has the same effect as async previously had:

\n
\n\n
\n
\n
\n

As an add-on: If you are interested in what async is actually used for you can take a look here:\nhttps://www.python.org/dev/peps/pep-0492/#new-syntax
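Putting it together, a minimal sketch of the replacement (guarded so it also runs on CPU-only machines, where the flag is a no-op):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(64, 3)
if torch.cuda.is_available():
    x = x.pin_memory()  # pinned (page-locked) host memory enables async copies

# non_blocking=True replaces the old async=True flag
x = x.to(device, non_blocking=True)
print(x.device)
```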

\n", "system": "" }, { "instruction": "Bidirectional LSTM output question in PyTorch", "input": "", "output": "

Yes, when using a BiLSTM the hidden states of the directions are just concatenated (the second part after the middle is the hidden state for feeding in the reversed sequence).
So splitting up in the middle works just fine.

\n\n

As reshaping works from the right to the left dimensions you won't have any problems in separating the two directions.

\n\n
\n\n

Here is a small example:

\n\n
import torch\n\n# so these are your original hidden states for each direction\n# in this case hidden size is 5, but this works for any size\ndirection_one_out = torch.tensor(range(5))\ndirection_two_out = torch.tensor(list(reversed(range(5))))\nprint('Direction one:')\nprint(direction_one_out)\nprint('Direction two:')\nprint(direction_two_out)\n\n# before outputting they will be concatenated\n# I'm adding here batch dimension and sequence length, in this case seq length is 1\nhidden = torch.cat((direction_one_out, direction_two_out), dim=0).view(1, 1, -1)\nprint('\\nYour hidden output:')\nprint(hidden, hidden.shape)\n\n# trivial case, reshaping for one hidden state\nhidden_reshaped = hidden.view(1, 1, 2, -1)\nprint('\\nReshaped:')\nprint(hidden_reshaped, hidden_reshaped.shape)\n\n# This works as well for arbitrary sequence lengths as you can see here\n# I've set sequence length here to 5, but this will work for any other value as well\nprint('\\nThis also works for more multiple hidden states in a tensor:')\nmulti_hidden = hidden.expand(5, 1, 10)\nprint(multi_hidden, multi_hidden.shape)\nprint('Directions can be split up just like this:')\nmulti_hidden = multi_hidden.view(5, 1, 2, 5)\nprint(multi_hidden, multi_hidden.shape)\n
\n\n

Output:

\n\n
Direction one:\ntensor([0, 1, 2, 3, 4])\nDirection two:\ntensor([4, 3, 2, 1, 0])\n\nYour hidden output:\ntensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([1, 1, 10])\n\nReshaped:\ntensor([[[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]]]) torch.Size([1, 1, 2, 5])\n\nThis also works for more multiple hidden states in a tensor:\ntensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],\n\n        [[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([5, 1, 10])\nDirections can be split up just like this:\ntensor([[[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]],\n\n\n        [[[0, 1, 2, 3, 4],\n          [4, 3, 2, 1, 0]]]]) torch.Size([5, 1, 2, 5])\n
\n\n

Hope this helps! :)

\n", "system": "" }, { "instruction": "pytorch skip connection in a sequential model", "input": "", "output": "

Your observations are correct, but you may have missed the definition of UnetSkipConnectionBlock.forward() (UnetSkipConnectionBlock being the Module defining the U-Net block you shared), which may clarify this implementation:

\n\n

(from pytorch-CycleGAN-and-pix2pix/models/networks.py#L259)\n

\n\n
# Defines the submodule with skip connection.\n# X -------------------identity---------------------- X\n#   |-- downsampling -- |submodule| -- upsampling --|\nclass UnetSkipConnectionBlock(nn.Module):\n\n    # ...\n\n    def forward(self, x):\n        if self.outermost:\n            return self.model(x)\n        else:\n            return torch.cat([x, self.model(x)], 1)\n
\n\n

The last line is the key (applied for all inner blocks). The skip layer is simply done by concatenating the input x and the (recursive) block output self.model(x), with self.model the list of operations you mentioned -- so not so differently from the Functional code you wrote.

\n", "system": "" }, { "instruction": "How to iterate over two dataloaders simultaneously using pytorch?", "input": "", "output": "

If you want to iterate over two datasets simultaneously, there is no need to define your own dataset class; just use TensorDataset as below:

\n\n
import torch\nfrom torch.utils.data import DataLoader\n\ndataset = torch.utils.data.TensorDataset(dataset1, dataset2)\ndataloader = DataLoader(dataset, batch_size=128, shuffle=True)\nfor index, (xb1, xb2) in enumerate(dataloader):\n    ...\n
\n\n

If you want labels, or to iterate over more than two datasets, just feed them as arguments to the TensorDataset after dataset2.

\n", "system": "" }, { "instruction": "Why can GPU do matrix multiplication faster than CPU?", "input": "", "output": "

How do you parallelize the computations?

\n\n

GPUs are able to do a lot of parallel computations - a lot more than a CPU could.\nLook at this example of vector addition of, let's say, 1M elements.

\n\n

Using a CPU, let's say you have a maximum of 100 threads you can run:\n(100 is a lot more than typical, but let's assume it for a while)

\n\n

In a typical multi-threading example let's say you parallelized additions on all threads.

\n\n

Here is what I mean by it :

\n\n
c[0] = a[0] + b[0] # let's do it on thread 0\nc[1] = a[1] + b[1] # let's do it on thread 1\nc[101] = a[101] + b[101] # let's do it on thread 1\n
\n\n

We are able to do it because value of c[0], doesn't depend upon any other values except a[0] and b[0]. So each addition is independent of others. Hence, we were able to easily parallelize the task.

\n\n

As you can see in the example above, the additions of 100 different elements take place simultaneously, saving you time. This way it takes 1M/100 = 10,000 steps to add all the elements.
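The same independent-additions idea can be sketched on a CPU with NumPy, whose vectorized ops hand all the elementwise additions to optimized low-level code in one go (an unscientific illustration, not a benchmark):

```python
import time

import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# one-at-a-time additions (what a single scalar core would do)
t0 = time.perf_counter()
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = a[i] + b[i]
loop_t = time.perf_counter() - t0

# all additions expressed as one independent bulk operation
t0 = time.perf_counter()
c_vec = a + b
vec_t = time.perf_counter() - t0

print(f"loop: {loop_t:.3f}s  vectorized: {vec_t:.4f}s")
```

The GPU takes this one step further by actually running many of these independent additions on separate hardware threads.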

\n\n
\n\n

How Efficient does GPU Parallelizes?

\n\n

Now consider a modern GPU with about 2048 threads: all threads can independently do 2048 different operations in constant time, giving a speed-up.

\n\n

In your case of matrix multiplication, you can parallelize the computations because a GPU has many more threads, grouped into blocks that run concurrently. So a lot of computations are parallelized, resulting in quick computations.

\n\n
\n\n
\n

But I didn't write any parallel processing for my GTX1080! Does it do it by itself?

\n
\n\n

Almost every machine-learning framework uses a parallelized implementation of all the possible operations. This is achieved through CUDA programming, NVIDIA's API for doing parallel computations on NVIDIA GPUs. You don't write it explicitly; it's all done at a low level, and you don't even get to know.

\n\n

That doesn't mean that a C++ program you wrote will automatically be parallelized just because you have a GPU.\nIt needs to be written using CUDA, and only then will it be parallelized; but most ML frameworks already do this, so it is not required on your end.

\n", "system": "" }, { "instruction": "what's the difference between torch.Tensor() vs torch.empty() in pytorch?", "input": "", "output": "

torch.Tensor() is just an alias to torch.FloatTensor() which is the default type of tensor, when no dtype is specified during tensor construction.

\n\n

From the torch for numpy users notes, it seems that torch.Tensor() is a drop-in replacement of numpy.empty()

\n\n

So, in essence torch.FloatTensor() and torch.empty() do the same job of returning a tensor filled with garbage values of dtype torch.float32. Below is a small run:

\n\n
In [87]: torch.FloatTensor(2, 3)\nOut[87]: \ntensor([[-1.0049e+08,  4.5688e-41, -8.9389e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n\nIn [88]: torch.FloatTensor(2, 3)\nOut[88]: \ntensor([[-1.0049e+08,  4.5688e-41, -1.6512e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n
\n\n
\n\n
In [89]: torch.empty(2, 3)\nOut[89]: \ntensor([[-1.0049e+08,  4.5688e-41, -9.0400e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n\nIn [90]: torch.empty(2, 3)\nOut[90]: \ntensor([[-1.0049e+08,  4.5688e-41, -9.2852e-38],\n        [ 3.0638e-41,  4.4842e-44,  0.0000e+00]])\n
\n", "system": "" }, { "instruction": "How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0?", "input": "", "output": "

How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line?

\n\n

If x is your numpy array this line should do the trick:

\n\n
torch.tensor(x, requires_grad=True)\n
\n\n

Here is a full example tested with PyTorch 0.4.0:

\n\n
import numpy as np\nimport torch\n\nx = np.array([1.3, 0.5, 1.9, 2.45])\nprint('np.array:', x)\nt = torch.tensor(x, requires_grad=True)\nprint('tensor:', t)\nprint('requires_grad:', t.requires_grad)\n
\n\n

This gives the following output:

\n\n
np.array: [1.3  0.5  1.9  2.45]\ntensor: tensor([ 1.3000,  0.5000,  1.9000,  2.4500], dtype=torch.float64)\nrequires_grad: True\n
\n\n

Edit: dtype should be determined by the given dtype of your numpy array x.

\n\n

I hope this helps.

\n", "system": "" }, { "instruction": "How to correctly implement a batch-input LSTM network in PyTorch?", "input": "", "output": "

Question 1 - Last Timestep

\n\n

This is the code that I use to get the output of the last timestep. I don't know if there is a simpler solution. If there is, I'd like to know it. I followed this discussion and grabbed the relevant code snippet for my last_timestep method. This is my forward.

\n\n
class BaselineRNN(nn.Module):\n    def __init__(self, **kwargs):\n        ...\n\n    def last_timestep(self, unpacked, lengths):\n        # Index of the last output for each sequence.\n        idx = (lengths - 1).view(-1, 1).expand(unpacked.size(0),\n                                               unpacked.size(2)).unsqueeze(1)\n        return unpacked.gather(1, idx).squeeze()\n\n    def forward(self, x, lengths):\n        embs = self.embedding(x)\n\n        # pack the batch\n        packed = pack_padded_sequence(embs, list(lengths.data),\n                                      batch_first=True)\n\n        out_packed, (h, c) = self.rnn(packed)\n\n        out_unpacked, _ = pad_packed_sequence(out_packed, batch_first=True)\n\n        # get the outputs from the last *non-masked* timestep for each sentence\n        last_outputs = self.last_timestep(out_unpacked, lengths)\n\n        # project to the classes using a linear layer\n        logits = self.linear(last_outputs)\n\n        return logits\n
\n\n

Question 2 - Masked Cross Entropy Loss

\n\n

Yes, by default the zero padded timesteps (targets) matter. However, it is very easy to mask them. You have two options, depending on the version of PyTorch that you use.

\n\n
    \n
  1. PyTorch 0.2.0: Now PyTorch supports masking directly in CrossEntropyLoss, with the ignore_index argument. For example, in language modeling or seq2seq, where I add zero padding, I mask the zero-padded words (targets) simply like this:

    \n\n

    loss_function = nn.CrossEntropyLoss(ignore_index=0)

\n
  2. PyTorch 0.1.12 and older: In the older versions of PyTorch, masking was not supported, so you had to implement your own workaround. A solution that I used was masked_cross_entropy.py, by jihunchoi. You may also be interested in this discussion.

  \n
\n", "system": "" }, { "instruction": "How to convert Pytorch autograd.Variable to Numpy?", "input": "", "output": "

Two possible cases

\n\n\n", "system": "" }, { "instruction": "ModuleNotFoundError: No module named 'torch._C'", "input": "", "output": "

In my case, I installed the dependency using pip install and it didn't work. I reinstalled torch using conda install instead and it seemed to work!

\n", "system": "" }, { "instruction": "How to move PyTorch model to GPU on Apple M1 chips?", "input": "", "output": "

This is what I used:

\n
if torch.backends.mps.is_available():\n    mps_device = torch.device("mps")\n    G.to(mps_device)\n    D.to(mps_device)\n
\n

Similarly for all tensors that I want to move to M1 GPU, I used:

\n
tensor_ = tensor_.to(mps_device)\n
\n

Some operations are not yet implemented for MPS, and we might need to set an environment variable to use the CPU fallback instead.\nOne error that I faced while executing the script was:

\n
# NotImplementedError: The operator 'aten::_slow_conv2d_forward' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.\n
\n

To solve it I set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1

\n
conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1\nconda activate <test-env>\n
\n

References:

\n
    \n
  1. https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
  2. \n
  3. https://pytorch.org/docs/master/notes/mps.html
  4. \n
  5. https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html
  6. \n
  7. https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#setting-environment-variables
  8. \n
\n", "system": "" }, { "instruction": "Indexing a tensor with None in PyTorch", "input": "", "output": "

Similar to NumPy you can insert a singleton dimension ("unsqueeze" a dimension) by indexing this dimension with None. In turn n[:, None] will have the effect of inserting a new dimension on dim=1. This is equivalent to n.unsqueeze(dim=1):

\n
>>> n = torch.rand(3, 100, 100)\n\n>>> n[:, None].shape\n(3, 1, 100, 100)\n\n>>> n.unsqueeze(1).shape\n(3, 1, 100, 100)\n
\n
\n

Here are some other types of None indexing.

\n

In the example above, : was used as a placeholder to designate the first dimension dim=0. If you want to insert a dimension on dim=2, you can add a second : as in n[:, :, None].

\n

You can also place None with respect to the last dimension instead. To do so you can use the ellipsis syntax ...:
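For example (a small sketch following the earlier shape of n):

```python
import torch

n = torch.rand(3, 100, 100)

# ... stands for "all remaining dimensions", so None lands at the end
print(n[..., None].shape)     # torch.Size([3, 100, 100, 1])

# equivalent method call
print(n.unsqueeze(-1).shape)  # torch.Size([3, 100, 100, 1])
```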

\n\n", "system": "" }, { "instruction": "Difference between autograd.grad and autograd.backward?", "input": "", "output": "

TLDR; Both are different interfaces for gradient computation: torch.autograd.grad is non-mutating while torch.autograd.backward is mutating.

\n
\n

Descriptions

\n

The torch.autograd module is the automatic differentiation package for PyTorch. As described in the documentation it only requires minimal change to the code base to be used:

\n
\n

you only need to declare Tensors for which gradients should be computed with the requires_grad=True keyword.

\n
\n

The two main functions torch.autograd provides for gradient computation are torch.autograd.backward and torch.autograd.grad:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
torch.autograd.backward (source)torch.autograd.grad (source)
DescriptionComputes the sum of gradients of given tensors with respect to graph leaves.Computes and returns the sum of gradients of outputs with respect to the inputs.
Headertorch.autograd.backward(
tensors,
grad_tensors=None,
retain_graph=None,
create_graph=False,
grad_variables=None,
inputs=None)
torch.autograd.grad(
outputs,
inputs,
grad_outputs=None,
retain_graph=None,
create_graph=False,
only_inputs=True,
allow_unused=False)
Parameters- tensors \u2013 Tensors of which the derivative will be computed.
- grad_tensors \u2013 The "vector" in the Jacobian-vector product, usually gradients w.r.t. each element of corresponding tensors.
- retain_graph \u2013 If False, the graph used to compute the grad will be freed. [...]
- inputs \u2013 Inputs w.r.t. which the gradient be will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used [...].
- outputs \u2013 outputs of the differentiated function.
- inputs \u2013 Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
- grad_outputs \u2013 The "vector" in the Jacobian-vector product, usually gradients w.r.t. each element of corresponding tensors.
- retain_graph \u2013 If False, the graph used to compute the grad will be freed. [...].
\n
\n

Usage examples

\n

In terms of high-level usage, you can look at torch.autograd.grad as a non-mutable function. As mentioned in the documentation table above, it will not accumulate the gradients on the grad attribute but instead return the computed partial derivatives. In contrast torch.autograd.backward will be able to mutate the tensors by updating the grad attribute of leaf nodes, the function won't return any value. In other words, the latter is more suitable when computing gradients for a large number of parameters.

\n

In the following, we will take two inputs (x1 and x2), calculate a tensor y with them, and then compute the partial derivatives of the result w.r.t. both inputs, i.e. dy/dx1 and dy/dx2:

\n
>>> x1 = torch.rand(1, requires_grad=True)\n>>> x2 = torch.rand(1, requires_grad=True)\n>>> x1, x2\n(tensor(0.3939, grad_fn=<UnbindBackward>),\n tensor(0.7965, grad_fn=<UnbindBackward>))\n
\n

Inference:

\n
>>> y = x1**2 + 5*x2\n>>> y\ntensor(4.1377, grad_fn=<AddBackward0>)\n
\n

Since y was computed from tensor(s) requiring gradients (i.e. with requires_grad=True, and outside of a torch.no_grad context), it will have a grad_fn function attached. This callback is used to backpropagate through the computation graph to compute the gradients of preceding tensor nodes.

\n\n
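Putting the two interfaces side by side on this example (fixed values are used here instead of torch.rand, so the gradients below are predictable):

```python
import torch

x1 = torch.tensor([0.5], requires_grad=True)
x2 = torch.tensor([0.25], requires_grad=True)
y = (x1**2 + 5*x2).sum()

# torch.autograd.grad: returns the partial derivatives, mutates nothing
dx1, dx2 = torch.autograd.grad(y, (x1, x2), retain_graph=True)
print(dx1, dx2)          # dy/dx1 = 2*x1 = 1.0, dy/dx2 = 5.0
print(x1.grad, x2.grad)  # None, None -> .grad was not touched

# torch.autograd.backward (via y.backward()): accumulates into .grad instead
y.backward()
print(x1.grad, x2.grad)  # tensor([1.]) tensor([5.])
```

Calling y.backward() here is equivalent to torch.autograd.backward(y).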
\n

In conclusion: both perform the same operation. They are two different interfaces to interact with the autograd library and perform gradient computations. The latter, torch.autograd.backward (equivalent to torch.Tensor.backward), is generally used in neural network training loops to compute the partial derivative of the loss w.r.t. each of the model's parameters.

\n

You can read more about how torch.autograd.grad works by reading through this other answer I made on: Meaning of grad_outputs in PyTorch's torch.autograd.grad.

\n", "system": "" }, { "instruction": "PyTorch CUDA error: an illegal memory access was encountered", "input": "", "output": "

As the OP's answer partially noted, the problem under the hood with an illegal memory access is often that the GPU runs out of memory.

\n

In my case, when I run a script on Windows I get the error message:

\n
RuntimeError: CUDA out of memory. Tried to allocate 1.64 GiB (GPU 0; 4.00 GiB total capacity; 1.10 GiB already allocated; 1.27 GiB free; 1.12 GiB reserved in total by PyTorch)\n
\n

but when run on Linux I get:

\n
RuntimeError: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\n
\n

Perhaps the message in Windows is more understandable :)

\n
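As the Linux message suggests, one practical debugging step is to make kernel launches synchronous, so the stack trace points at the line that actually failed. A minimal sketch (the variable must be set before CUDA is initialized, ideally before importing torch):

```python
import os

# Force synchronous kernel launches so CUDA errors surface at the
# offending call instead of at some later API call
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

Alternatively, set it in the shell: CUDA_LAUNCH_BLOCKING=1 python script.py.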

References:\nhttps://forums.fast.ai/t/runtimeerror-cuda-error-an-illegal-memory-access-was-encountered/93899

\n", "system": "" }, { "instruction": "Unable to import pytorch_lightning on google colab", "input": "", "output": "

As said in Issue #6415 on GitHub, try installing directly from GitHub.
\nIt worked for me.

\n
!pip install git+https://github.com/PyTorchLightning/pytorch-lightning\nimport pytorch_lightning as pl\nprint(pl.__version__)\n
\n

Output:

\n
1.3.0dev\n
\n

It seems that the error comes from Issue #6210, which they say was fixed. I guess the fix wasn't uploaded to PyPI yet.

\n", "system": "" }, { "instruction": "How to convert one-hot vector to label index and back in Pytorch?", "input": "", "output": "

From the Pytorch forums

\n
import torch\nimport numpy as np\n\n\nlabels = torch.randint(0, 10, (10,))\n\n# labels --> one-hot\none_hot = torch.nn.functional.one_hot(labels)\n# one-hot --> labels\nlabels_again = torch.argmax(one_hot, dim=1)\n\nnp.testing.assert_equal(labels.numpy(), labels_again.numpy())\n
\n", "system": "" }, { "instruction": "How to make a Trainer pad inputs in a batch with huggingface-transformers?", "input": "", "output": "

Look at the columns your tokenizer is returning. You might want to limit it to only the required columns.

\n

For example:

\n
def preprocess_function(examples):\n   #function to tokenize the dataset.\n   if sentence2_key is None:\n       return tokenizer(examples[sentence1_key], truncation=True, padding=True)\n   return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True, padding=True)\n\n\nencoded_dataset = dataset.map(preprocess_function, batched=True, load_from_cache_file=False)\n\n\n#Thing you should do is \n\ncolumns_to_return = ['input_ids', 'label', 'attention_mask']\nencoded_dataset.set_format(type='torch', columns=columns_to_return)\n
\n", "system": "" }, { "instruction": "pytorch dataset map-style vs iterable-style", "input": "", "output": "

I wrote a short post on how to use PyTorch datasets, and the difference between map-style and iterable-style datasets.

\n

In essence, you should use map-style datasets when possible. Map-style datasets give you their size ahead of time, are easier to shuffle, and allow for easy parallel loading.

\n

It\u2019s a common misconception that if your data doesn\u2019t fit in memory, you have to use an iterable-style dataset. That is not true. You can implement a map-style dataset such that it retrieves data as needed.

\n
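For illustration, a minimal sketch of such a map-style dataset: only a lightweight index lives in memory, and each sample is materialized on demand in __getitem__ (the tensor construction below stands in for a real disk read):

```python
import torch
from torch.utils.data import Dataset

class LazyDataset(Dataset):
    """Map-style dataset that loads each sample only when it is indexed."""
    def __init__(self, sample_ids):
        self.sample_ids = sample_ids  # cheap index, not the data itself

    def __len__(self):
        return len(self.sample_ids)

    def __getitem__(self, idx):
        # A real implementation would read the sample from disk here
        return torch.tensor([float(self.sample_ids[idx])])

ds = LazyDataset(list(range(100)))
print(len(ds))  # 100
print(ds[3])    # tensor([3.])
```

Because the dataset has a known length and random access, a DataLoader can shuffle it and load it in parallel as usual.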

Check out the full post here.

\n", "system": "" }, { "instruction": "Save only best weights with huggingface transformers", "input": "", "output": "

You may try the following parameters for the Trainer in Hugging Face transformers:

\n
training_args = TrainingArguments(\n    output_dir='/content/drive/results',          # output directory\n    do_predict= True, \n    num_train_epochs=3,              # total number of training epochs\n    per_device_train_batch_size=4,   # batch size per device during training\n    per_device_eval_batch_size=2,    # batch size for evaluation\n    warmup_steps=1000,               # number of warmup steps for learning rate\n    save_steps=1000,\n    save_total_limit=10,\n    load_best_model_at_end= True,\n    weight_decay=0.01,               # strength of weight decay\n    logging_dir='./logs',            # directory for storing logs\n    logging_steps=0,\n    evaluate_during_training=True)\n
\n

There may be better ways to avoid too many checkpoints while selecting the best model.\nSo far you cannot save only the best model, but you can check whether each evaluation yields better results than the previous one.

\n", "system": "" }, { "instruction": "How to understand hidden_states of the returns in BertModel?(huggingface-transformers)", "input": "", "output": "
\n

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

\n
\n
\n

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

\n
\n

For a given token, its input representation is constructed by summing the corresponding token embedding, segment embedding, and position embedding. This input representation is called the initial embedding output which can be found at index 0 of the tuple hidden_states.\nThis figure explains how the embeddings are calculated.\n\"enter

\n

The remaining 12 elements in the tuple contain the output of the corresponding hidden layer. E.g. the last hidden layer can be found at index 12, which is the 13th item in the tuple. The dimension of both the initial embedding output and the hidden states is [batch_size, sequence_length, hidden_size]. It is useful to compare the indexing of hidden_states bottom-up with this image from the BERT paper.

\n
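To make the indexing concrete without downloading a model, here is a sketch that mimics the hidden_states tuple of a 12-layer BERT-base with dummy zero tensors (real activations would have the same shapes):

```python
import torch

batch_size, seq_len, hidden_size, num_layers = 2, 8, 768, 12

# 1 embedding output + 12 layer outputs = 13 entries for BERT-base
hidden_states = tuple(
    torch.zeros(batch_size, seq_len, hidden_size) for _ in range(num_layers + 1)
)

embedding_output = hidden_states[0]  # initial embedding output
last_hidden = hidden_states[12]      # output of the last (12th) layer
print(len(hidden_states))            # 13
print(last_hidden.shape)             # torch.Size([2, 8, 768])
```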

\"enter

\n", "system": "" }, { "instruction": "what does padding_idx do in nn.embeddings()", "input": "", "output": "

padding_idx is indeed quite badly described in the documentation.

\n\n

Basically, it specifies which index passed during call will mean \"zero vector\" (which is quite often used in NLP in case some token is missing). By default no index will mean \"zero vector\", as you can see in the example below:

\n\n
import torch\n\nembedding = torch.nn.Embedding(10, 3)\ninput = torch.LongTensor([[0, 1, 0, 5]])\nprint(embedding(input))\n
\n\n

Will give you:

\n\n
tensor([[[ 0.1280, -1.1390, -2.5007],\n         [ 0.3617, -0.9280,  1.2894],\n         [ 0.1280, -1.1390, -2.5007],\n         [-1.3135, -0.0229,  0.2451]]], grad_fn=<EmbeddingBackward>)\n
\n\n

If you specify padding_idx=0 every input where the value is equal to 0 (so zero-th and second row) will be zero-ed out like this (code: embedding = torch.nn.Embedding(10, 3, padding_idx=0)):

\n\n
tensor([[[ 0.0000,  0.0000,  0.0000],\n         [-0.4448, -0.2076,  1.1575],\n         [ 0.0000,  0.0000,  0.0000],\n         [ 1.3602, -0.6299, -0.5809]]], grad_fn=<EmbeddingBackward>\n
\n\n

If you were to specify padding_idx=5, the last row would be full of zeros, etc.

\n", "system": "" }, { "instruction": "AttributeError: 'torch.return_types.max' object has no attribute 'dim' - Maxpooling Channel", "input": "", "output": "

The torch.max function called with dim returns a namedtuple, so:

\n
class ChannelPool(nn.Module):\n    def forward(self, input):\n        input_max, max_indices = torch.max(input, dim=1)\n        return input_max\n
\n

From the documentation of torch.max:

\n
\n

Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).

\n
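The named fields can also be accessed directly, which reads more clearly than positional unpacking:

```python
import torch

x = torch.tensor([[1.0, 5.0], [7.0, 3.0]])
result = torch.max(x, dim=1)  # namedtuple (values, indices)
print(result.values)   # tensor([5., 7.])
print(result.indices)  # tensor([1, 0])
```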
\n", "system": "" }, { "instruction": "Pytorch: RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered", "input": "", "output": "

This kind of error generally occurs when using NLLLoss or CrossEntropyLoss, and when your dataset has negative labels (or labels greater than the number of classes). That is also the exact error you are getting Assertion t >= 0 && t < n_classes failed.

\n\n

This won't occur for MSELoss, but OP mentions that there is a CrossEntropyLoss somewhere and thus the error occurs (the program crashes asynchronously on some other line). The solution is to clean the dataset and ensure that t >= 0 && t < n_classes is satisfied (where t represents the label).

\n\n

Also, ensure that your network output is in the range 0 to 1 in case you use NLLLoss or BCELoss (then you require softmax or sigmoid activation respectively). Note that this is not required for CrossEntropyLoss or BCEWithLogitsLoss because they implement the activation function inside the loss function. (Thanks to @PouyaB for pointing out).

\n", "system": "" }, { "instruction": "Make GPU available again after numba.cuda.close()?", "input": "", "output": "

Could you provide a more complete snippet? I am running

\n
from numba import cuda\nimport torch\n\ndevice = cuda.get_current_device()\ndevice.reset()\ncuda.close()\ntorch.cuda.is_available()\n
\n

which prints True, so I am not sure what your issue is.

\n", "system": "" }, { "instruction": "How to disable progress bar in Pytorch Lightning", "input": "", "output": "

F.Y.I. show_progress_bar=False has been deprecated since version 0.7.2, but you can use progress_bar_refresh_rate=0

\n
\n

update:

\n

progress_bar_refresh_rate has been deprecated in v1.5 and will be removed in v1.7. To disable the progress bar, set enable_progress_bar to False.

\n
progress_bar_refresh_rate: How often to refresh progress bar (in steps). Value ``0`` disables progress bar.\n    Ignored when a custom progress bar is passed to :paramref:`~Trainer.callbacks`. Default: None, means\n    a suitable value will be chosen based on the environment (terminal, Google COLAB, etc.).\n\n    .. deprecated:: v1.5\n        ``progress_bar_refresh_rate`` has been deprecated in v1.5 and will be removed in v1.7.\n        Please pass :class:`~pytorch_lightning.callbacks.progress.TQDMProgressBar` with ``refresh_rate``\n        directly to the Trainer's ``callbacks`` argument instead. To disable the progress bar,\n        pass ``enable_progress_bar = False`` to the Trainer.\n\nenable_progress_bar: Whether to enable to progress bar by default.\n
\n", "system": "" }, { "instruction": "Pinning memory is actually slower in PyTorch?", "input": "", "output": "

TL;DR

\n

Your code is slower because you allocate a new block of pinned memory each time you call the generator. Allocating new memory each time requires synchronization, making it much slower than non-pinned allocation. Most likely, you are measuring this overhead.

\n

Your code example in the edit fails in THCCachingHostAllocator.cpp. It's not the GPU running out of memory, but your host refusing to allocate 68 GB of pinned physical memory.

\n
\n
\n

Pinning memory is actually slower in PyTorch?

\n
\n

Creating or releasing pinned memory (cudaHostAlloc()/cudaFreeHost() via the CUDA Runtime) is much slower than malloc/free because it involves synchronization between the devices (GPU and host). Likely, what you are measuring is - to a large extent - this overhead, as you are incrementally allocating pinned memory.

\n
\n

Shouldn't pinning memory make data transfer asynchronous and therefore be faster? If that's not the case, why would we do pin memory?

\n
\n

It can, but not if you halt/join to synchronize before each transfer in order to allocate the memory.

\n

What pinning memory ultimately does is that it prevents the memory block from being swapped out by the OS; it is guaranteed to remain in RAM. This guarantee enables the GPU's DMA to operate on that block without going through the CPU (which has to check, among other things, if the data needs to be swapped back in). Thus, the CPU is free to do other stuff in the meantime.

\n

It is not a perfect analogy, but you could think about pinned memory as shared memory between the GPU and the host. Both parties can operate on it without informing the other party; a bit like multiple threads in a process. This can be much faster if you implement non-blocking code. However, it can also be much slower if parties end up joining all the time.

\n

Contrast this to the non-pinned approach, where the CPU loads the data from RAM (swapped in if necessary) and then sends it to the GPU. Not only is it slower (needs to go through the northbridge twice), but it also keeps the thread (and hence one CPU core) busy. Python also has the infamous GIL, so it could be that your entire application is waiting for that synchronous I/O.

\n

If you want to use pinned memory to shuffle batches of data into the GPU, then one way to do it is to use pinned memory as a (circular) buffer. The CPU can load the data from disk, apply preprocessing, and place the batch into the buffer. The GPU can then fetch batches from the buffer in its own time and do the inference. If the implementation is done well, then the GPU will not idle more than necessary, and there is no more need for synchronization between the host and the GPU.

\n
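A rough sketch of that idea: allocate the pinned staging buffer once and reuse it for every batch, instead of pinning fresh memory per batch. The shapes here are arbitrary, and the buffer falls back to ordinary memory when no GPU is present (pinning requires CUDA):

```python
import torch

use_cuda = torch.cuda.is_available()

# One-time allocation of the staging buffer (pinned only when CUDA exists)
buf = torch.empty(32, 3, 224, 224, pin_memory=use_cuda)

batch = torch.rand(32, 3, 224, 224)
buf.copy_(batch)  # CPU -> staging buffer, no new pinned allocation
if use_cuda:
    # Asynchronous DMA transfer; the CPU can prepare the next batch meanwhile
    gpu_batch = buf.to("cuda", non_blocking=True)
```

The synchronization cost of pinning is paid once, while every subsequent transfer can overlap with CPU work.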
\n

And if I do want to pin a small tensor, why don't I directly move the whole tensor into GPU memory in advance?

\n
\n

If you don't need to access the tensor from the CPU and it fits onto the GPU, then there is indeed no need to put it into pinned memory.

\n

In your example, you are opening a memory-mapped numpy array memmap, and then ask to transfer it to pinned memory. A memory-mapped file works very similarly to paged memory in that data that doesn't fit in RAM anymore is flushed to disk, and loaded back in when it is accessed again.

\n

This "swapping" cannot happen for pinned memory, because we need to guarantee that the entire block resides in RAM at all times. Hence, we need to first load the entire array into host memory - a contiguous block of 68 GB - likely creating a copy of the array in the process so as not to destroy the memmap object, and then we need to pin that memory block, telling the host to forfeit 68 GB of managed physical memory to our application. Either of these two steps can be denied by the OS and raise an OutOfMemory error.

\n

This is pretty much what you are seeing, as you fail in the THCCachingHostAllocator.cpp.

\n", "system": "" }, { "instruction": "Does pytorch apply softmax automatically in nn.Linear", "input": "", "output": "

Latching on to what @jodag was already saying in his comment, and extending it a bit to form a full answer:

\n\n

No, PyTorch does not automatically apply softmax, and you can at any point apply torch.nn.Softmax() as you want. But softmax has some issues with numerical stability, which we want to avoid as much as we can. One solution is to use log-softmax, but this tends to be slower than a direct computation.

\n\n

Especially when we are using Negative Log Likelihood as a loss function (in PyTorch, this is torch.nn.NLLLoss), we can utilize the fact that the derivative of (log-)softmax + NLL is actually mathematically quite nice and simple, which is why it makes sense to combine both into a single function/element. The result is then torch.nn.CrossEntropyLoss. Again, note that this only applies directly to the last layer of your network; any other computation is not affected by any of this.

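A quick sketch verifying this equivalence numerically:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 3)           # raw outputs of the last nn.Linear
targets = torch.tensor([0, 2, 1, 2])

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))       # True: CrossEntropyLoss = LogSoftmax + NLLLoss
```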
\n", "system": "" }, { "instruction": "Is there a pytorch method to check the number of cpus?", "input": "", "output": "

Just use the standard library:

\n\n
import os\n\nos.cpu_count()\n
\n", "system": "" }, { "instruction": "In Colaboratory, CUDA cannot be used for Pytorch", "input": "", "output": "

Click on Runtime and select Change runtime type.

\n

Now in Hardware Acceleration, select GPU and hit Save.

\n", "system": "" }, { "instruction": "Calling super's forward() method", "input": "", "output": "

TLDR;

\n

You can use super().forward(...) freely even with hooks and even with hooks registered in super() instance.

\n

Explanation

\n

As stated by this answer, __call__ is there so that registered hooks (e.g. register_forward_hook) will be run.

\n

If you inherit and want to reuse base class's forward, e.g. this:

\n
import torch\n\n\nclass Parent(torch.nn.Module):\n    def forward(self, tensor):\n        return tensor + 1\n\n\nclass Child(Parent):\n    def forward(self, tensor):\n        return super(Child, self).forward(tensor) + 1\n\n\nmodule = Child()\n# Increment output by 1 so we should get `4`\nmodule.register_forward_hook(lambda module, input, output: output + 1)\nprint(module(torch.tensor(1))) # and it is 4 indeed\nprint(module.forward(torch.tensor(1))) # here it is 3 still\n
\n

You are perfectly fine if you call the __call__ method; calling forward directly won't run the hook (so you get 3 as above).

\n

It is unlikely you would want to register_forward_hook on the instance of super, but let's consider such an example:

\n
def increment_by_one(module, input, output):\n    return output + 1\n\n\nclass Parent(torch.nn.Module):\n    def forward(self, tensor):\n        return tensor + 1\n\n\nclass Child(Parent):\n    def forward(self, tensor):\n        # Increment by `1` from Parent\n        super().register_forward_hook(increment_by_one)\n        return super().forward(tensor) + 1\n\n\nmodule = Child()\n# Increment output by 1 so we should get `5` in total\nmodule.register_forward_hook(increment_by_one)\nprint(module(torch.tensor(1)))  # and it is 5 indeed\nprint(module.forward(torch.tensor(1)))  # here is 3\n
\n

You are perfectly fine using super().forward(...) and even hooks will work correctly (and that is the main idea of using __call__ instead of forward).

\n

BTW. Calling super().__call__(...) would raise an infinite recursion error.

\n", "system": "" }, { "instruction": "Running LSTM with multiple GPUs gets "Input and hidden tensors are not at the same device"", "input": "", "output": "

When you call .cuda() on a tensor, PyTorch moves it to the current GPU device by default (GPU-0). So, due to data parallelism, your data lives on a different GPU while your model goes to another; this results in the runtime error you are facing.

\n\n

The correct way to implement data parallelism for recurrent neural networks is as follows:

\n\n
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n\nclass MyModule(nn.Module):\n    # ... __init__, other methods, etc.\n\n    # padded_input is of shape [B x T x *] (batch_first mode) and contains\n    # the sequences sorted by lengths\n    #   B is the batch size\n    #   T is max sequence length\n    def forward(self, padded_input, input_lengths):\n        total_length = padded_input.size(1)  # get the max sequence length\n        packed_input = pack_padded_sequence(padded_input, input_lengths,\n                                            batch_first=True)\n        packed_output, _ = self.my_lstm(packed_input)\n        output, _ = pad_packed_sequence(packed_output, batch_first=True,\n                                        total_length=total_length)\n        return output\n\nm = MyModule().cuda()\ndp_m = nn.DataParallel(m)\n
\n\n

You also need to set the CUDA_VISIBLE_DEVICES environment variable accordingly for a multi GPU setup.

\n\n
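For example, to expose only two particular GPUs to the process (the ids here are illustrative), set the variable before CUDA is initialized:

```python
import os

# Only GPUs 0 and 1 are visible; inside the process they become cuda:0 / cuda:1
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```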

References:

\n\n\n", "system": "" }, { "instruction": "Convert PyTorch CUDA tensor to NumPy array", "input": "", "output": "

Use .detach() to convert from GPU / CUDA Tensor to numpy array:

\n
tensor.detach().cpu().numpy()\n
\n", "system": "" }, { "instruction": "Check if PyTorch tensors are equal within epsilon", "input": "", "output": "

At the time of writing, this is an undocumented function in the latest stable release (0.4.1), but the documentation is in the master (unstable) branch.

\n\n

torch.allclose() will return a boolean indicating whether all element-wise differences are equal within a margin of error.

\n\n
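For example, with the default tolerances (rtol=1e-05, atol=1e-08) a tiny perturbation still counts as equal, while tightening the tolerances flips the result:

```python
import torch

a = torch.tensor([1.0, 2.0], dtype=torch.float64)
b = a + 1e-9  # tiny element-wise perturbation

close_default = torch.allclose(a, b)                       # True: within defaults
close_far = torch.allclose(a, a + 0.1)                     # False: difference too large
close_strict = torch.allclose(a, b, rtol=0.0, atol=1e-12)  # False: tighter tolerance
print(close_default, close_far, close_strict)
```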

Additionally, there's the undocumented isclose():

\n\n
>>> torch.isclose(torch.Tensor([1]), torch.Tensor([1.00000001]))\ntensor([1], dtype=torch.uint8)\n
\n", "system": "" }, { "instruction": "Pytorch 0.4.0: There are three ways to create tensors on CUDA device. Is there some difference between them?", "input": "", "output": "

All three methods worked for me.

\n\n

In 1 and 2, you create a tensor on CPU and then move it to GPU when you use .to(device) or .cuda(). They are the same here.

\n\n

However, when you use the .to(device) method you can explicitly tell torch to move to a specific GPU by setting device=torch.device(\"cuda:<id>\"). With .cuda() you have to do .cuda(<id>) to move to a particular GPU.

\n\n
\n\n

Why do these two methods exist then?

\n\n

.to(device) was introduced in 0.4 because it is easier to declare a device variable at the top of the code as

\n\n

device = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")

\n\n

and use .to(device) everywhere. This makes it quite easy to switch from CPU to GPU and vice-versa

\n\n

Before this, we had to use .cuda(), and your code would have an if check for cuda.is_available() everywhere, which made it cumbersome to switch between GPU/CPU.

\n\n
\n\n

The third method doesn't create a tensor on the CPU but directly copies data to the GPU, which is more efficient.

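A side-by-side sketch of the three spellings (this runs on CPU when no GPU is present):

```python
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

a = torch.ones(2, 3).to(device)      # 1: create on CPU, then move
b = torch.ones(2, 3).cuda() if torch.cuda.is_available() else torch.ones(2, 3)  # 2
c = torch.ones(2, 3, device=device)  # 3: create directly on the target device

print(a.device, b.device, c.device)  # all three live on the same device
```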
\n", "system": "" }, { "instruction": "What does flatten_parameters() do?", "input": "", "output": "

It may not be a full answer to your question. But if you take a look at flatten_parameters's source code, you will notice that it calls _cudnn_rnn_flatten_weight in

\n\n
...\nNoGradGuard no_grad;\ntorch::_cudnn_rnn_flatten_weight(...)\n...\n
\n\n

is the function that does the job. You will find that what it actually does is copy the model's weights into a vector<Tensor> (check the params_arr declaration) in:

\n\n
  // Slice off views into weight_buf\n  std::vector<Tensor> params_arr;\n  size_t params_stride0;\n  std::tie(params_arr, params_stride0) = get_parameters(handle, rnn, rnn_desc, x_desc, w_desc, weight_buf);\n\n  MatrixRef<Tensor> weight{weight_arr, static_cast<size_t>(weight_stride0)},\n                    params{params_arr, params_stride0};\n
\n\n

And the weights copying in

\n\n
  // Copy weights\n  _copyParams(weight, params);\n
\n\n

Also note that they update (or Reset as they explicitly say in docs) the original pointers of weights with the new pointers of params by doing an in-place operation .set_ (_ is their notation for the in-place operations) in orig_param.set_(new_param.view_as(orig_param));

\n\n
  // Update the storage\n  for (size_t i = 0; i < weight.size(0); i++) {\n    for (auto orig_param_it = weight[i].begin(), new_param_it = params[i].begin();\n         orig_param_it != weight[i].end() && new_param_it != params[i].end();\n         orig_param_it++, new_param_it++) {\n      auto orig_param = *orig_param_it, new_param = *new_param_it;\n      orig_param.set_(new_param.view_as(orig_param));\n    }\n  }\n
\n\n

And according to n2798 (draft of C++0x)

\n\n

\u00a9 ISO/IEC N3092

\n\n
\n

23.3.6 Class template vector

\n \n

A vector is a sequence container that supports random access iterators. In addition, it supports (amortized)constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector <T, Allocator> where T is some type other than bool, then it obeys the identity&v[n] == &v[0] + n for all 0 <= n < v.size().

\n
\n\n
\n\n

In some situations

\n\n
\n

UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greately increasing memory usage. To compact weights again call flatten_parameters().

\n
\n\n

They explicitly advise people in code warnings to have a contiguous chunk of memory.

\n", "system": "" }, { "instruction": "Understanding Bilinear Layers", "input": "", "output": "

The operation done by nn.Bilinear is B(x1, x2) = x1*A*x2 + b (c.f. doc) with:

- A \u2013 the learnable weight matrix, of shape (out_features, in1_features, in2_features)
- b \u2013 the learnable bias, of shape (out_features)
\n

If you take into account the (optional) bias, you should obtain the expected results.

\n\n
\n\n
import torch\nimport torch.nn as nn\n\ndef manual_bilinear(x1, x2, A, b):\n    return torch.mm(x1, torch.mm(A, x2)) + b\n\nx_ones = torch.ones(2)\nx_zeros = torch.zeros(2)\n\n# ---------------------------\n# With Bias:\n\nB = nn.Bilinear(2, 2, 1)\nA = B.weight\nprint(B.bias)\n# > tensor([-0.6748], requires_grad=True)\nb = B.bias\n\nprint(B(x_ones, x_zeros))\n# > tensor([-0.6748], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b))\n# > tensor([[-0.6748]], grad_fn=<ThAddBackward>)\n\nprint(B(x_ones, x_ones))\n# > tensor([-1.7684], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b))\n# > tensor([[-1.7684]], grad_fn=<ThAddBackward>)\n\n# ---------------------------\n# Without Bias:\n\nB = nn.Bilinear(2, 2, 1, bias=False)\nA = B.weight\nprint(B.bias)\n# None\nb = torch.zeros(1)\n\nprint(B(x_ones, x_zeros))\n# > tensor([0.], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b))\n# > tensor([0.], grad_fn=<ThAddBackward>)\n\nprint(B(x_ones, x_ones))\n# > tensor([-0.7897], grad_fn=<ThAddBackward>)\nprint(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b))\n# > tensor([[-0.7897]], grad_fn=<ThAddBackward>)\n
\n", "system": "" }, { "instruction": "Is there any pytorch function can combine the specific continuous dimensions of tensor into one?", "input": "", "output": "

There is a variant of flatten that takes start_dim and end_dim parameters. You can call it in the same way as your magic_combine (except that end_dim is inclusive).

\n
a = torch.zeros(1, 2, 3, 4, 5, 6)  \nb = a.flatten(2, 4) # combine dimension 2, 3, 4 \nprint(b.size()) # should be (1, 2, 60, 6)\n
\n

https://pytorch.org/docs/stable/generated/torch.flatten.html

\n

There is also a corresponding unflatten, in which you can specify a dimension to unflatten and a shape to unflatten it to.

\n", "system": "" }, { "instruction": "How are PyTorch's tensors implemented?", "input": "", "output": "

Contiguous array

\n\n

The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other.

\n\n

The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the \"storage\" would be 27 items long, but the interpretation of \"coordinates\" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension.

\n\n
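That coordinate-to-offset mapping takes only a few lines; for a row-major (C-order) layout it is a Horner-style fold over the dimensions:

```python
def flat_offset(coords, shape):
    """Row-major (C-order) offset of `coords` in a tensor of size `shape`."""
    offset = 0
    for coord, dim in zip(coords, shape):
        offset = offset * dim + coord
    return offset

# The same 27-element storage interpreted with different shapes:
print(flat_offset((2, 1, 1), (3, 3, 3)))  # 2*9 + 1*3 + 1 = 22
print(flat_offset((8, 1, 0), (9, 3, 1)))  # 8*3 + 1*1 + 0 = 25
```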

This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.

\n", "system": "" }, { "instruction": "PyTorch's dataloader "too many open files" error when no files should be open", "input": "", "output": "

I know how to fix the error, but I don't have a complete explanation for why it happens.

\n\n

First, the solution: you need to make sure that the image data is stored as numpy.arrays; when you call json.loads, it loads them as Python lists of floats. This causes the torch.utils.data.DataLoader to individually transform each float in the list into a torch.DoubleTensor.

\n\n

Have a look at default_collate in torch.utils.data.DataLoader - your __getitem__ returns a dict which is a mapping, so default_collate gets called again on each element of the dict. The first couple are ints, but then you get to the image data which is a list, i.e. a collections.Sequence - this is where things get funky as default_collate is called on each element of the list. This is clearly not what you intended. I don't know what the assumption in torch is about the contents of a list versus a numpy.array, but given the error it would appear that that assumption is being violated.

\n\n

The fix is pretty trivial, just make sure the two image bands are numpy.arrays, for instance in __init__

\n\n
def __init__(self,data,transform=None):\n    self.data=[]\n    for d in data:\n        d[self.BAND1] = np.asarray(d[self.BAND1])\n        d[self.BAND2] = np.asarray(d[self.BAND2])\n        self.data.append(d)\n    self.transform=transform\n
\n\n

or after you load the json, what ever - doesn't really matter where you do it, as long as you do it.

\n\n
\n\n

Why does the above result in too many open files?

\n\n

I don't know, but as the comments pointed out, it is likely to do with interprocess communication and lock files on the two queues data is taken from and added to.

\n\n

Footnote: the train.json was not available for download from Kaggle due to the competition still being open (??). I made a dummy json file that should have the same structure and tested the fix on that dummy file.

\n", "system": "" }, { "instruction": "Custom loss function in PyTorch", "input": "", "output": "

Let me have a go.

\n\n
    \n
  1. This depends on what you mean by \"non-differentiable\". The first definition that makes sense here is that PyTorch doesn't know how to compute gradients. If you try to compute gradients nevertheless, this will raise an error. The two possible scenarios are:

    \n\n

    a) You're using a custom PyTorch operation for which gradients have not been implemented, e.g. torch.svd(). In that case you will get a TypeError:

    \n\n
    import torch\nfrom torch.autograd import Function\nfrom torch.autograd import Variable\n\nA = Variable(torch.randn(10,10), requires_grad=True)\nu, s, v = torch.svd(A) # raises TypeError\n
    \n\n

    b) You have implemented your own operation, but did not define backward(). In this case, you will get a NotImplementedError:

    \n\n
    class my_function(Function): # forgot to define backward()\n\n    def forward(self, x):\n        return 2 * x\n\nA = Variable(torch.randn(10,10))\nB = my_function()(A)\nC = torch.sum(B)\nC.backward() # will raise NotImplementedError\n
    \n\n

    The second definition that makes sense is \"mathematically non-differentiable\". Clearly, an operation which is mathematically not differentiable should either not have a backward() method implemented or a sensible sub-gradient. Consider for example torch.abs() whose backward() method returns the subgradient 0 at 0:

    \n\n
    A = Variable(torch.Tensor([-1,0,1]),requires_grad=True)\nB = torch.abs(A)\nB.backward(torch.Tensor([1,1,1]))\nA.grad.data\n
    \n\n

    For these cases, you should refer to the PyTorch documentation and dig out the backward() method of the respective operation.

  2. \n
  3. It doesn't matter. The use of requires_grad is to avoid unnecessary computations of gradients for subgraphs. If there\u2019s a single input to an operation that requires gradient, its output will also require gradient. Conversely, only if all inputs don\u2019t require gradient, the output also won\u2019t require it. Backward computation is never performed in subgraphs where all Variables didn\u2019t require gradients.

    \n\n

    Since there are most likely some Variables that require gradients (for example, parameters of a subclass of nn.Module()), your loss Variable will also require gradients automatically. However, you should notice that, because of how requires_grad works (see above again), you can only change requires_grad for leaf variables of your graph anyway.
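    A minimal illustration of this propagation rule (written with the plain tensor API, since in current PyTorch Variable has been merged into Tensor):

```python
import torch

a = torch.randn(3, requires_grad=True)  # leaf that requires gradients
b = torch.randn(3)                      # leaf that does not
c = (a * b).sum()                       # one input requires grad...
print(c.requires_grad)                  # True: ...so the output does too

c.backward()
print(a.grad is not None)               # True: gradient flowed to a
print(b.grad is None)                   # True: no backward pass through b
```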

  4. \n
  5. All the custom PyTorch loss functions are subclasses of _Loss, which is a subclass of nn.Module. See here. If you'd like to stick to this convention, you should subclass _Loss when defining your custom loss function. Apart from consistency, one advantage is that your subclass will raise an AssertionError if you haven't marked your target variables as volatile or requires_grad = False. Another advantage is that you can nest your loss function in nn.Sequential(), because it's an nn.Module. I would recommend this approach for these reasons.

  6. \n
\n", "system": "" }, { "instruction": "What is recommended number of threads for pytorch based on available CPU cores?", "input": "", "output": "

PyTorch typically uses the number of physical CPU cores as the default number of threads. This means:\ntorch.get_num_threads() and torch.get_num_interop_threads() typically return the number of physical CPU cores.

\n\n

For example:

\n
import torch\n\n# Get current number of threads\nnum_threads = torch.get_num_threads()\nprint(f"Current number of threads: {num_threads}")\n\n# Set custom number of threads (e.g., equal to physical cores)\ntorch.set_num_threads(num_threads)\ntorch.set_num_interop_threads(num_threads)\n\n# Check new settings\nprint(f"New number of threads: {torch.get_num_threads()}")\nprint(f"New number of inter-op threads: {torch.get_num_interop_threads()}")\n
\n", "system": "" }, { "instruction": "Python: BERT Error - Some weights of the model checkpoint at were not used when initializing BertModel", "input": "", "output": "

As R. Marolahy suggests, if you don't want to see this every time (I know I don't), add the following:

\n
from transformers import logging\nlogging.set_verbosity_error()\n
\n", "system": "" }, { "instruction": "Why embed dimemsion must be divisible by num of heads in MultiheadAttention?", "input": "", "output": "

From what I understood, it is a simplification they have added to keep the implementation simple. Theoretically, we can implement the model like you proposed (similar to the original paper).\nIn the PyTorch documentation, they have briefly mentioned it.

\n
Note that `embed_dim` will be split across `num_heads` (i.e. each head will have dimension `embed_dim` // `num_heads`)\n
\n
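A quick check of this constraint (the head_dim attribute name comes from the PyTorch source; constructing with an indivisible pair fails):

```python
import torch

mha = torch.nn.MultiheadAttention(embed_dim=512, num_heads=8)
print(mha.head_dim)  # 64, i.e. embed_dim // num_heads

try:
    # 512 is not divisible by 7, so this constructor call fails
    torch.nn.MultiheadAttention(embed_dim=512, num_heads=7)
except Exception as e:
    print(e)
```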

Also, if you look at the PyTorch implementation, you can see it is a bit different (optimised, in my point of view) when compared to the originally proposed model. For example, they use MatMul instead of Linear and the Concat layer is ignored. Refer to the figure below, which shows the first encoder (with batch size 32, 10 words, 512 features).

\n

\"enter

\n

P.s:\nIf you need to see the model params (like the above image), this is the code I used.

\n
import torch\ntransformer_model = torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=1,num_decoder_layers=1,dim_feedforward=11)  # change params as necessary\ntgt = torch.rand((20, 32, 512))\nsrc = torch.rand((11, 32, 512))\ntorch.onnx.export(transformer_model, (src, tgt), "transformer_model.onnx")\n
\n", "system": "" }, { "instruction": "With BERT Text Classification, ValueError: too many dimensions 'str' error occuring", "input": "", "output": "

REASON

\n

The issue is that you are passing a list of strings (str) to torch.tensor(), which only accepts lists of numerical values (integer, float, etc.).

\n

SOLUTION

\n

So I suggest you convert your string labels into integer values before passing them to torch.tensor().

\n

IMPLEMENTATION

\n

The following code might help you:

\n
# a temporary list to store the string labels\ntemp_list = train_labels.tolist()\n\n# dictionary that maps each integer to its string label\nlabel_dict = {}\n# reverse mapping from string label to its integer\nstr_to_int = {}\n\n# list to store integer labels\nint_labels = []\n\nfor label in temp_list:\n    if label not in str_to_int:\n        # assign a new id to each distinct string label,\n        # so identical labels map to the same integer\n        str_to_int[label] = len(str_to_int)\n        label_dict[str_to_int[label]] = label\n    int_labels.append(str_to_int[label])\n
\n

Now pass this int_labels to the torch.tensor and use it as label.

\n
train_y = torch.tensor(int_labels)\n
\n

and whenever you want to see the respective string label of any integer, just use the label_dict dictionary.

\n", "system": "" }, { "instruction": "load pytorch dataloader into GPU", "input": "", "output": "

You can move your dataset's data to the GPU in advance:

\n
train_dataset.train_data = train_dataset.train_data.to(CTX)  # .to() is not in-place, so assign back; train_data is a Tensor (input data)\ntrain_dataset.train_labels = train_dataset.train_labels.to(CTX)\n
\n

for example of minst

\n
import torch\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets\nfrom torchvision import transforms\nbatch_size = 64\ntransform = transforms.Compose([\n    transforms.ToTensor(),\n    transforms.Normalize((0.1307,), (0.3081,))\n])\ntrain_data = datasets.MNIST(\n    root='./dataset/minst/',\n    train=True,\n    download=False,\n    transform=transform\n)\ntrain_loader = DataLoader(\n    dataset=train_data,\n    shuffle=True,\n    batch_size=batch_size\n)\ntrain_data.train_data = train_data.train_data.to(torch.device("cuda:0"))  # put data into GPU entirely\ntrain_data.train_labels = train_data.train_labels.to(torch.device("cuda:0"))\n# edit note for newer versions: use train_data.data and train_data.targets instead\n
\n

I found this solution by using the debugger...

\n", "system": "" }, { "instruction": "pytorch DataLoader extremely slow first epoch", "input": "", "output": "

Slavka,

\n

TLDR: This is a caching effect.

\n

I did not download the whole GLR2020 dataset but I was able to observe this effect on the image dataset that I had locally (80000 jpg images of approx 400x400 size).

\n

To find the reasons for the difference in performance I tried the following:

\n
    \n
  1. reducing the augmentation to just resizing
  2. \n
  3. testing just ImgDataset.__getitem__() function
  4. \n
  5. ImgDataset.__getitem__() without augmentation
  6. \n
  7. just loading the raw jpg image and passing it from the dataset without even numpy conversion.
  8. \n
\n

It turns out that the difference comes from the image-loading time. Python (or the OS itself) implements some kind of caching, which can be observed when loading the same image multiple times in the following test.

\n
for i in range(5):    \n    t0 = time.time()\n    data = cv2.imread(filename)\n    print (time.time() - t0)\n    \n0.03395271301269531\n0.0010004043579101562\n0.0010004043579101562\n0.0010008811950683594\n0.001001119613647461\n
\n

same is observed when just reading from file to variable

\n
for i in range(5):    \n    t0 = time.time()\n    with open(filename, mode='rb') as file: \n        data = file.read()\n    print (time.time() - t0)\n\n0.036234378814697266\n0.0028831958770751953\n0.0020024776458740234\n0.0031833648681640625\n0.0028734207153320312\n
\n

One way to reduce the loading time is to keep the data on a very fast local SSD. If size allows, try loading part of the dataset into RAM and writing a custom dataloader to feed from there...

\n

BTW, based on my findings this effect should be reproducible with any dataset - check whether you used different drives or whether caching explains the difference.

\n", "system": "" }, { "instruction": "How to convert a tensor of booleans to ints in PyTorch?", "input": "", "output": "

The solution is just a single line of code.

\n

To convert a tensor t with values [True, False, True, False] to an integer tensor, just do the following.

\n
t = torch.tensor([True, False, True, False])\nt_integer = t.long()\nprint(t_integer)\n# tensor([1, 0, 1, 0])\n
\n", "system": "" }, { "instruction": "Pytorch: AttributeError: 'function' object has no attribute 'copy'", "input": "", "output": "

I am guessing this is what you did by mistake.\nYou saved the function

\n\n

torch.save(model.state_dict, 'model_state.pth')

\n\n

instead of the state_dict()

\n\n

torch.save(model.state_dict(), 'model_state.pth')

\n\n

Otherwise, everything should work as expected. (I tested the following code on Colab)

\n\n

Replace model.state_dict() with model.state_dict to reproduce the error

\n\n
import copy\nmodel = TheModelClass()\ntorch.save(model.state_dict(), 'model_state.pth')\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel.load_state_dict(copy.deepcopy(torch.load(\"model_state.pth\",device)))\n
\n", "system": "" }, { "instruction": "How to index/slice the last dimension of a PyTorch tensor/numpy array of unknown dimensions", "input": "", "output": "

PyTorch supports NumPy-like indexing, so you can use the Ellipsis (...)

\n
>>> z[..., -1:]\n
\n

Example:

\n
>>> x                     # (2,2) tensor\ntensor([[0.5385, 0.9280],\n        [0.8937, 0.0423]])\n>>> x[..., -1:]\ntensor([[0.9280],\n        [0.0423]])\n
\n
>>> y                     # (2,2,2) tensor\ntensor([[[0.5610, 0.8542],\n         [0.2902, 0.2388]],\n\n        [[0.2440, 0.1063],\n         [0.7201, 0.1010]]])\n>>> y[..., -1:]\ntensor([[[0.8542],\n         [0.2388]],\n\n        [[0.1063],\n         [0.1010]]])\n
\n\n", "system": "" }, { "instruction": "Converting python list to pytorch tensor", "input": "", "output": "

You can directly convert a Python list to a PyTorch Tensor and then cast it to the dtype you need. For example,

\n
import torch\n\na_list = [3,23,53,32,53] \na_tensor = torch.Tensor(a_list)\nprint(a_tensor.int())\n\n>>> tensor([3,23,53,32,53])\n
\n", "system": "" }, { "instruction": "CNN Pytorch Error : Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same", "input": "", "output": "

Cast your input x_batch to float. Use x_batch = x_batch.float() before you pass it through your model.
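A minimal reproduction and fix (the module and shapes here are made up for illustration):

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3)  # float weights by default

# image-like batch stored as uint8 (a ByteTensor), as produced by raw pixel data
x_batch = torch.randint(0, 256, (1, 3, 8, 8), dtype=torch.uint8)

# conv(x_batch) would raise the dtype-mismatch RuntimeError; cast first:
x_batch = x_batch.float()
out = conv(x_batch)
print(out.dtype)  # torch.float32
```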

\n", "system": "" }, { "instruction": "Multi dimensional inputs in pytorch Linear method?", "input": "", "output": "

Newer versions of PyTorch allow nn.Linear to accept an N-D input tensor; the only constraint is that the last dimension of the input tensor must equal the in_features of the linear layer. The linear transformation is then applied to the last dimension of the tensor.
\nFor instance, if in_features=5 and out_features=10 and the input tensor x has dimensions 2-3-5, then the output tensor will have dimensions 2-3-10.
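A sketch of that instance:

```python
import torch

layer = torch.nn.Linear(in_features=5, out_features=10)
x = torch.randn(2, 3, 5)   # any leading dimensions are fine
out = layer(x)             # transformation applied to the last dimension
print(out.shape)           # torch.Size([2, 3, 10])
```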

\n", "system": "" }, { "instruction": "PyTorch Binary Classification - same network structure, 'simpler' data, but worse performance?", "input": "", "output": "

TL;DR

\n

Your input data is not normalized.

\n
    \n
  1. use x_data = (x_data - x_data.mean()) / x_data.std()
  2. \n
  3. increase the learning rate optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
  4. \n
\n

You'll get
\n\"enter

\n

convergence in only 1000 iterations.

\n

More details

\n

The key difference between the two examples you have is that the data x in the first example is centered around (0, 0) and has very low variance.
\nOn the other hand, the data in the second example is centered around 92 and has relatively large variance.

\n

This initial bias in the data is not taken into account when you randomly initialize the weights which is done based on the assumption that the inputs are roughly normally distributed around zero.
\nIt is almost impossible for the optimization process to compensate for this gross deviation - thus the model gets stuck in a sub-optimal solution.

\n

Once you normalize the inputs, by subtracting the mean and dividing by the std, the optimization process becomes stable again and rapidly converges to a good solution.

\n

For more details about input normalization and weights initialization, you can read section 2.2 in He et al Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (ICCV 2015).

\n

What if I cannot normalize the data?

\n

If, for some reason, you cannot compute mean and std data in advance, you can still use nn.BatchNorm1d to estimate and normalize the data as part of the training process. For example

\n
class Model(nn.Module):\n    def __init__(self, input_size, H1, output_size):\n        super().__init__()\n        self.bn = nn.BatchNorm1d(input_size)  # adding batchnorm\n        self.linear = nn.Linear(input_size, H1)\n        self.linear2 = nn.Linear(H1, output_size)\n    \n    def forward(self, x):\n        x = torch.sigmoid(self.linear(self.bn(x)))  # batchnorm the input x\n        x = torch.sigmoid(self.linear2(x))\n        return x\n
\n

This modification, without any change to the input data, yields similar convergence after only 1000 epochs:
\n\"enter

\n

A minor comment

\n

For numerical stability, it is better to use nn.BCEWithLogitsLoss instead of nn.BCELoss. To this end, you need to remove the torch.sigmoid from the forward() output; the sigmoid will be computed inside the loss.
\nSee, for example, this thread regarding the related sigmoid + cross entropy loss for binary predictions.

\n", "system": "" }, { "instruction": "Multivariate input LSTM in pytorch", "input": "", "output": "

I hope that problematic parts are commented to make sense:

\n

Data preparation

\n
import random\nimport numpy as np\nimport torch\n\n# multivariate data preparation\nfrom numpy import array\nfrom numpy import hstack\n \n# split a multivariate sequence into samples\ndef split_sequences(sequences, n_steps):\n    X, y = list(), list()\n    for i in range(len(sequences)):\n        # find the end of this pattern\n        end_ix = i + n_steps\n        # check if we are beyond the dataset\n        if end_ix > len(sequences):\n            break\n        # gather input and output parts of the pattern\n        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]\n        X.append(seq_x)\n        y.append(seq_y)\n    return array(X), array(y)\n \n# define input sequence\nin_seq1 = array([x for x in range(0,100,10)])\nin_seq2 = array([x for x in range(5,105,10)])\nout_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])\n# convert to [rows, columns] structure\nin_seq1 = in_seq1.reshape((len(in_seq1), 1))\nin_seq2 = in_seq2.reshape((len(in_seq2), 1))\nout_seq = out_seq.reshape((len(out_seq), 1))\n# horizontally stack columns\ndataset = hstack((in_seq1, in_seq2, out_seq))\n
\n

Multivariate LSTM Network

\n
class MV_LSTM(torch.nn.Module):\n    def __init__(self,n_features,seq_length):\n        super(MV_LSTM, self).__init__()\n        self.n_features = n_features\n        self.seq_len = seq_length\n        self.n_hidden = 20 # number of hidden states\n        self.n_layers = 1 # number of LSTM layers (stacked)\n    \n        self.l_lstm = torch.nn.LSTM(input_size = n_features, \n                                 hidden_size = self.n_hidden,\n                                 num_layers = self.n_layers, \n                                 batch_first = True)\n        # according to pytorch docs LSTM output is \n        # (batch_size,seq_len, num_directions * hidden_size)\n        # when considering batch_first = True\n        self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, 1)\n        \n    \n    def init_hidden(self, batch_size):\n        # even with batch_first = True this remains same as docs\n        hidden_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)\n        cell_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)\n        self.hidden = (hidden_state, cell_state)\n    \n    \n    def forward(self, x):        \n        batch_size, seq_len, _ = x.size()\n        \n        lstm_out, self.hidden = self.l_lstm(x,self.hidden)\n        # lstm_out(with batch_first = True) is \n        # (batch_size,seq_len,num_directions * hidden_size)\n        # for following linear layer we want to keep batch_size dimension and merge rest       \n        # .contiguous() -> solves tensor compatibility error\n        x = lstm_out.contiguous().view(batch_size,-1)\n        return self.l_linear(x)\n
\n

Initialization

\n
n_features = 2 # this is number of parallel inputs\nn_timesteps = 3 # this is number of timesteps\n\n# convert dataset into input/output\nX, y = split_sequences(dataset, n_timesteps)\nprint(X.shape, y.shape)\n\n# create NN\nmv_net = MV_LSTM(n_features,n_timesteps)\ncriterion = torch.nn.MSELoss() # reduction='sum' created huge loss value\noptimizer = torch.optim.Adam(mv_net.parameters(), lr=1e-1)\n\ntrain_episodes = 500\nbatch_size = 16\n
\n

Training

\n
mv_net.train()\nfor t in range(train_episodes):\n    for b in range(0,len(X),batch_size):\n        inpt = X[b:b+batch_size,:,:]\n        target = y[b:b+batch_size]    \n        \n        x_batch = torch.tensor(inpt,dtype=torch.float32)    \n        y_batch = torch.tensor(target,dtype=torch.float32)\n    \n        mv_net.init_hidden(x_batch.size(0))\n    #    lstm_out, _ = mv_net.l_lstm(x_batch,nnet.hidden)    \n    #    lstm_out.contiguous().view(x_batch.size(0),-1)\n        output = mv_net(x_batch) \n        loss = criterion(output.view(-1), y_batch)  \n        \n        loss.backward()\n        optimizer.step()        \n        optimizer.zero_grad() \n    print('step : ' , t , 'loss : ' , loss.item())\n
\n

Results

\n
step :  499 loss :  0.0010267728939652443 # probably overfitted due to 500 training episodes\n
\n", "system": "" }, { "instruction": "What does layout = torch.strided mean?", "input": "", "output": "

Stride is the number of steps (or jumps) needed to go from one element to the next, in a given dimension. In computer memory, the data is stored linearly in a contiguous block of memory. What we view is just a (re)presentation.

\n

Let's take an example tensor for understanding this:

\n
# a 2D tensor\nIn [62]: tensor = torch.arange(1, 16).reshape(3, 5) \n\nIn [63]: tensor  \nOut[63]: \ntensor([[ 1,  2,  3,  4,  5],\n        [ 6,  7,  8,  9, 10],\n        [11, 12, 13, 14, 15]])\n
\n

With this tensor in place, the strides are:

\n
# get the strides\nIn [64]: tensor.stride() \nOut[64]: (5, 1)\n
\n

What this resultant tuple (5, 1) says is: to move one position along dimension 0 (from one row to the next), we have to take 5 steps (or jumps) in memory; to move one position along dimension 1 (from one element to the next within a row), we need just 1 step.

\n\n

The order (or index) of 5 & 1 in the tuple represents the dimension/axis. You can also pass the dimension, for which you want the stride, as an argument:

\n
# get stride for axis 0\nIn [65]: tensor.stride(0) \nOut[65]: 5\n\n# get stride for axis 1\nIn [66]: tensor.stride(1) \nOut[66]: 1\n
\n
\n

With that understanding, we might have to ask why is this extra parameter needed when we create the tensors? The answer to that is for efficiency reasons. (How can we store/read/access the elements in the (sparse) tensor most efficiently?).

\n

With sparse tensors (tensors where most of the elements are just zeroes), we don't want to store the zero values; we only store the non-zero values and their indices. Given a desired shape, the rest of the values can then be filled with zeroes, yielding the desired sparse tensor.
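For illustration, a tiny COO sparse tensor storing only two non-zero values and their indices (a sketch; sparse-tensor support details vary across PyTorch versions):

```python
import torch

indices = torch.tensor([[0, 1],    # row index of each non-zero value
                        [2, 0]])   # column index of each non-zero value
values = torch.tensor([3., 4.])

# only the two non-zero entries and their positions are stored
s = torch.sparse_coo_tensor(indices, values, size=(2, 3))
print(s.layout)      # torch.sparse_coo
print(s.to_dense())
# tensor([[0., 0., 3.],
#         [4., 0., 0.]])
```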

\n
\n

For further reading on this, the following articles might be of help:

\n\n
\n

P.S: I guess there's a typo in the torch.layout documentation which says

\n
\n

Strides are a list of integers ...

\n
\n

The composite data type returned by tensor.stride() is a tuple, not a list.

\n", "system": "" }, { "instruction": "IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number", "input": "", "output": "

I found the solution in the github issues here

\n\n

Try to change

\n\n
i = order[0] # works for PyTorch 0.4.1.\n
\n\n

to

\n\n
i = order # works for PyTorch>=0.5.\n
\n", "system": "" }, { "instruction": "Conv1D with kernel_size=1 vs Linear layer", "input": "", "output": "

nn.Conv1d with a kernel size of 1 and nn.Linear give essentially the same results. The only differences are the initialization procedure and how the operations are applied (which has some effect on the speed). Note that using a linear layer should be faster as it is implemented as a simple matrix multiplication (+ adding a broadcasted bias vector)

\n

@RobinFrcd your answers are either different due to MaxPool1d or due to the different initialization procedure.

\n

Here are a few experiments to prove my claims:

\n
def count_parameters(model):\n    """Count the number of parameters in a model."""\n    return sum([p.numel() for p in model.parameters()])\n\nconv = torch.nn.Conv1d(8,32,1)\nprint(count_parameters(conv))\n# 288\n\nlinear = torch.nn.Linear(8,32)\nprint(count_parameters(linear))\n# 288\n\nprint(conv.weight.shape)\n# torch.Size([32, 8, 1])\nprint(linear.weight.shape)\n# torch.Size([32, 8])\n\n# use same initialization\nlinear.weight = torch.nn.Parameter(conv.weight.squeeze(2))\nlinear.bias = torch.nn.Parameter(conv.bias)\n\ntensor = torch.randn(128,256,8)\npermuted_tensor = tensor.permute(0,2,1).clone().contiguous()\n\nout_linear = linear(tensor)\nprint(out_linear.mean())\n# tensor(0.0067, grad_fn=<MeanBackward0>)\n\nout_conv = conv(permuted_tensor)\nprint(out_conv.mean())\n# tensor(0.0067, grad_fn=<MeanBackward0>)\n
\n

Speed test:

\n
%%timeit\n_ = linear(tensor)\n# 151 \u00b5s \u00b1 297 ns per loop\n\n%%timeit\n_ = conv(permuted_tensor)\n# 1.43 ms \u00b1 6.33 \u00b5s per loop\n
\n

As Hanchen's answer shows, the results can differ very slightly due to numerical precision.

\n", "system": "" }, { "instruction": "Finding non-intersection of two pytorch tensors", "input": "", "output": "

I came across the same problem but the proposed solutions were far too slow when using larger arrays. The following simple solution works on CPU and GPU and is significantly faster than the other proposed solutions:

\n\n
combined = torch.cat((t1, t2))\nuniques, counts = combined.unique(return_counts=True)\ndifference = uniques[counts == 1]\nintersection = uniques[counts > 1]\n
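For example (note this trick assumes each input tensor contains no internal duplicates; a value repeated within one tensor alone would be miscounted as an intersection):

```python
import torch

t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([3, 4, 5])

combined = torch.cat((t1, t2))
uniques, counts = combined.unique(return_counts=True)
difference = uniques[counts == 1]   # elements appearing in exactly one tensor
intersection = uniques[counts > 1]  # elements appearing in both

print(difference)    # tensor([1, 2, 5])
print(intersection)  # tensor([3, 4])
```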
\n", "system": "" }, { "instruction": "Is it possible to freeze only certain embedding weights in the embedding layer in pytorch?", "input": "", "output": "

1. Divide embeddings into two separate objects

\n

One approach would be to use two separate embeddings one for pretrained, another for the one to be trained.

\n

The GloVe one should be frozen, while the one for which there is no pretrained representation would be taken from the trainable layer.

\n

This can be done if you arrange your data so that tokens with pretrained representations have smaller indices than tokens without a GloVe representation. Let's say your pretrained indices are in the range [0, 300], while those without representation are [301, 500]. I would go with something along those lines:

\n
import numpy as np\nimport torch\n\n\nclass YourNetwork(torch.nn.Module):\n    def __init__(self, glove_embeddings: np.array, how_many_tokens_not_present: int):\n        super().__init__()\n        self.pretrained_embedding = torch.nn.Embedding.from_pretrained(glove_embeddings)\n        self.trainable_embedding = torch.nn.Embedding(\n            how_many_tokens_not_present, glove_embeddings.shape[1]\n        )\n        # Rest of your network setup\n\n    def forward(self, batch):\n        # Tokens in batch without a representation should have indices greater than\n        # or equal to num_embeddings, adjust your data creating function accordingly\n        mask = batch >= self.pretrained_embedding.num_embeddings\n\n        # You may want to optimize it, you could probably get away without the clone,\n        # though I'm not currently sure how\n        pretrained_batch = batch.clone()\n        pretrained_batch[mask] = 0\n\n        embedded_batch = self.pretrained_embedding(pretrained_batch)\n\n        # Every token without representation has to be brought into appropriate range\n        batch -= self.pretrained_embedding.num_embeddings\n        # Zero out the ones which already have pretrained embedding\n        batch[~mask] = 0\n        non_pretrained_embedded_batch = self.trainable_embedding(batch)\n\n        # And finally change appropriate tokens from placeholder embedding created by\n        # pretrained into trainable embeddings.\n        embedded_batch[mask] = non_pretrained_embedded_batch[mask]\n\n        # Rest of your code\n        ...
\n


\n

2. Zero gradients for specified tokens.

\n

This one is a bit tricky, but I think it's pretty concise and easy to implement. So, if you obtain the indices of tokens which got no GloVe representation, you can explicitly zero their gradient after backprop, so those rows will not get updated.

\n
import torch\n\nembedding = torch.nn.Embedding(10, 3)\nX = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n\nvalues = embedding(X)\nloss = values.mean()\n\n# Use whatever loss you want\nloss.backward()\n\n# Let's say those indices in your embedding are pretrained (have GloVe representation)\nindices = torch.LongTensor([2, 4, 5])\n\nprint("Before zeroing out gradient")\nprint(embedding.weight.grad)\n\nprint("After zeroing out gradient")\nembedding.weight.grad[indices] = 0\nprint(embedding.weight.grad)\n
\n

And the output of the second approach:

\n
Before zeroing out gradient\ntensor([[0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0833, 0.0833, 0.0833],\n        [0.0417, 0.0417, 0.0417],\n        [0.0833, 0.0833, 0.0833],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417]])\nAfter zeroing out gradient\ntensor([[0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0000, 0.0000, 0.0000],\n        [0.0417, 0.0417, 0.0417]])\n
\n", "system": "" }, { "instruction": "RuntimeError: Error(s) in loading state_dict for ResNet:", "input": "", "output": "

I was using PyTorch 0.4.1, but the Jupyter Notebook I loaded uses 0.4.0. So I added the strict=False argument to load_state_dict().

\n
model.load_state_dict(checkpoint['state_dict'], strict=False)\n
\n", "system": "" }, { "instruction": "Problem with missing and unexpected keys while loading my model in Pytorch", "input": "", "output": "

So your Network is essentially the classifier part of AlexNet and you're looking to load pretrained AlexNet weights into it. The problem is that the keys in state_dict are \"fully qualified\", which means that if you look at your network as a tree of nested modules, a key is just a list of modules in each branch, joined with dots like grandparent.parent.child. You want to

\n\n
    \n
  1. Keep only the tensors with name starting with \"classifier.\"
  2. \n
  3. Remove the \"classifier.\" part of keys
  4. \n
\n\n

so try

\n\n
model = Network()\nloaded_dict = checkpoint['model_state_dict']\nprefix = 'classifier.'\nn_clip = len(prefix)\nadapted_dict = {k[n_clip:]: v for k, v in loaded_dict.items()\n                if k.startswith(prefix)}\nmodel.load_state_dict(adapted_dict)\n
\n", "system": "" }, { "instruction": "Size mismatch for fc.bias and fc.weight in PyTorch", "input": "", "output": "

Cause:

\n\n

You trained a model derived from resnet18 in this way:

\n\n
model_ft = models.resnet18(pretrained=True)\nnum_ftrs = model_ft.fc.in_features\nmodel_ft.fc = nn.Linear(num_ftrs, 4)\n
\n\n

That is, you changed the last nn.Linear layer to output a 4-dim prediction instead of the default 1000.
\nWhen you try and load the model for prediction, your code is:

\n\n
model = resnet18(pretrained=True)    \nmodel.load_state_dict(checkpoint)\n
\n\n

You did not apply the same change of the last nn.Linear layer to model therefore the checkpoint you are trying to load does not fit.

\n\n

Fix:

\n\n

(1) Apply the same change before loading the checkpoint:

\n\n
model = resnet18(pretrained=True)\nnum_ftrs = model.fc.in_features\nmodel.fc = nn.Linear(num_ftrs, 4)  # make the change\nmodel.load_state_dict(checkpoint)  # load\n
\n\n

(2) Even better, use num_classes argument to construct resnet with the desired number of outputs to begin with:

\n\n
model = resnet18(num_classes=4)  # pretrained weights are unnecessary here;\n                                 # the checkpoint loaded next replaces them anyway\nmodel.load_state_dict(checkpoint)  # load\n
\n", "system": "" }, { "instruction": "RuntimeError: "exp" not implemented for 'torch.LongTensor'", "input": "", "output": "

I happened to follow this tutorial too.

\n\n

For me, I just made torch.arange generate a float-type tensor,

\n\n

from

\n\n
position = torch.arange(0, max_len).unsqueeze(1)\ndiv_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n
\n\n

to

\n\n
position = torch.arange(0., max_len).unsqueeze(1)\ndiv_term = torch.exp(torch.arange(0., d_model, 2) * -(math.log(10000.0) / d_model))\n
\n\n

Just a simple fix. But now it works for me. It is possible that torch.exp and torch.sin previously supported LongTensor but no longer do (I'm not very sure about it).

\n", "system": "" }, { "instruction": "Is there any way I can download the pre-trained models available in PyTorch to a specific path?", "input": "", "output": "

As, @dennlinger mentioned in his answer : torch.utils.model_zoo, is being internally called when you load a pre-trained model.

\n\n

More specifically, the method: torch.utils.model_zoo.load_url() is being called every time a pre-trained model is loaded. The documentation for the same, mentions:

\n\n
\n

The default value of model_dir is $TORCH_HOME/models where\n $TORCH_HOME defaults to ~/.torch.

\n \n

The default directory can be overridden with the $TORCH_HOME\n environment variable.

\n
\n\n

This can be done as follows:

\n\n
import torch \nimport torchvision\nimport os\n\n# Suppose you are trying to load pre-trained resnet model in directory- models\\resnet\n\nos.environ['TORCH_HOME'] = 'models\\\\resnet' #setting the environment variable\nresnet = torchvision.models.resnet18(pretrained=True)\n
\n\n

I came across the above solution by raising an issue in the PyTorch's GitHub repository:\nhttps://github.com/pytorch/vision/issues/616

\n\n

This led to an improvement in the documentation i.e. the solution mentioned above.

\n", "system": "" }, { "instruction": "PyTorch autograd -- grad can be implicitly created only for scalar outputs", "input": "", "output": "

I changed my basic_fun to the following, which resolved my problem:

\n\n
def basic_fun(x_cloned):\n    res = torch.FloatTensor([0])\n    for i in range(len(x_cloned)):\n        res += x_cloned[i] * x_cloned[i]\n    return res\n
\n\n

This version returns a scalar value.

\n", "system": "" }, { "instruction": "TypeError: tensor is not a torch image", "input": "", "output": "

The problem is with the order of the transforms. The ToTensor transform should come before the Normalize transform, since the latter expects a tensor, but the Resize transform returns an image. Correct code with the faulty lines changed:

\n\n
train_transforms = transforms.Compose([\n    transforms.Resize((224,224)), \n    transforms.ToTensor(), \n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])\ntest_transforms = transforms.Compose([\n    transforms.Resize((224,224)), \n    transforms.ToTensor(), \n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])\n
\n", "system": "" }, { "instruction": "If I'm not specifying to use CPU/GPU, which one is my script using?", "input": "", "output": "

PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors.
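A quick way to confirm which device your tensors live on (a minimal sketch):

```python
import torch

x = torch.randn(2, 2)
print(x.device)                   # cpu: the default when no .cuda() calls are made

if torch.cuda.is_available():     # only move to GPU if one is present
    x = x.cuda()
    print(x.device)               # e.g. cuda:0
```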

\n", "system": "" }, { "instruction": "Pytorch What's the difference between define layer in __init__() and directly use in forward()?", "input": "", "output": "

Everything that contains weights you want to be trained during the training process should be defined in your __init__ method.

\n\n

You don't need to define activation functions like softmax, ReLU or sigmoid in your __init__; you can just call them in forward.

\n\n

Dropout layers, for example, also don't need to be defined in __init__; they can just be called in forward too. However, defining them in your __init__ has the advantage that they can be switched off more easily during evaluation (by calling eval() on your model). You can see an example of both versions here.

\n\n

Hope this is clear. Just ask if you have any further questions.

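To illustrate the dropout point, here is a minimal sketch (my own toy module): the nn.Dropout registered in __init__ is switched off automatically by eval(), so in eval mode the forward pass reduces to the plain linear + ReLU.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        self.drop = nn.Dropout(p=0.5)  # registered module: obeys train()/eval()

    def forward(self, x):
        x = F.relu(self.fc(x))  # stateless activation: fine to call functionally
        return self.drop(x)

net = Net()
net.eval()  # switches self.drop off automatically
x = torch.ones(1, 4)
with torch.no_grad():
    # In eval mode dropout is the identity, so both expressions match.
    print(torch.equal(net(x), F.relu(net.fc(x))))  # True
```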
\n", "system": "" }, { "instruction": "what is the default weight initializer for conv in pytorch?", "input": "", "output": "

Each pytorch layer implements the method reset_parameters which is called at the end of the layer initialization to initialize the weights.\nYou can find the implementation of the layers here.

\n\n

For the dense layer, for example (which in pytorch is called linear), weights are initialized uniformly:

\n\n
stdv = 1. / math.sqrt(self.weight.size(1))\nself.weight.data.uniform_(-stdv, stdv)\n
\n\n

where self.weight.size(1) is the number of inputs. This is done to keep the variance of the distributions of each layer relatively similar at the beginning of training by normalizing it to one. You can read a more detailed explanation here.

\n\n

For the convolutional layer the initialization is basically the same. You just compute the number of inputs by multiplying the number of channels with the kernel size.

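A small sketch of that bound (note: newer PyTorch versions default to kaiming_uniform_ with a=sqrt(5), which works out to the same ±1/sqrt(fan_in) interval):

```python
import math
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
fan_in = 3 * 3 * 3  # in_channels * kernel_height * kernel_width = 27
bound = 1.0 / math.sqrt(fan_in)

# All default-initialized weights fall inside [-1/sqrt(fan_in), 1/sqrt(fan_in)].
print(conv.weight.abs().max().item() <= bound)  # True
```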
\n", "system": "" }, { "instruction": "pytorch error: multi-target not supported in CrossEntropyLoss()", "input": "", "output": "

OK, so I reproduced your problem, and after some searching and reading the API of CrossEntropyLoss(), I found it's because you have the wrong label dimension.

\n\n

Official docs of CrossEntropyLoss are here, and you can see:

\n\n
\n

Input: (N,C) where C = number of classes
\n Target: (N) where each value is 0\u2264targets[i]\u2264C\u22121

\n
\n\n

Here, in your criterion() call, you have a batchSize x 7 input and a batchSize x 1 label. The confusing point is that, say your batchSize is 10, a 10x1 tensor cannot be regarded as a size-10 tensor, which is what the loss function expects. You must explicitly do the size conversion.

\n\n

Solution:
\nAdd labels = labels.squeeze_() before you call loss = criterion(y_pred, labels) and do the same thing in your test code. The squeeze_() function removes size-1 dimensions in place. So you have a batchSize-sized label now.

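A minimal sketch of the fix with random data (shapes chosen to match the example above):

```python
import torch

criterion = torch.nn.CrossEntropyLoss()
y_pred = torch.randn(10, 7)             # batchSize x numClasses
labels = torch.randint(0, 7, (10, 1))   # batchSize x 1 -- the problematic shape

labels = labels.squeeze_()              # in place: (10, 1) -> (10,)
loss = criterion(y_pred, labels)        # now works; returns a scalar
print(labels.shape, loss.shape)
```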
\n", "system": "" }, { "instruction": "How to convert caffe prototxt to pytorch model?", "input": "", "output": "

You can use one of the following libraries:

\n\n
\n

Usage

\n

Conversion

\n
python caffe2pth_convertor.py \\\n--prototxt=YOUR_PROTOTXT_PATH \\\n--caffemodel=YOUR_CAFFEMODEL_PATH \\\n--pthmodel=OUTPUT_PTHMODEL_PATH\n
\n

Use the model in Pytorch

\n
from caffe2pth.caffenet import *\n\nnet = CaffeNet(YOUR_PROTOTXT_PATH)\nnet.load_state_dict(torch.load(OUTPUT_PTHMODEL_PATH))\n
\n
\n", "system": "" }, { "instruction": "Tensorflow: Hierarchical Softmax Implementation", "input": "", "output": "

Practically, if your total number of categories is in the range of hundreds to thousands (less than 50K), you don't need to consider using hierarchical softmax, which is designed to speed up training when classifying into millions of categories (for example, the number of words in a vocabulary).

\n\n

In my experience (with Naive Bayes and neural networks), utilizing the hierarchical structure at training time does not necessarily improve your classification quality.

\n\n

However, if you are interested to implement Hierarchical Softmax anyway, that's another story.

\n", "system": "" }, { "instruction": "FBGEMM load error trying to use PyTorch on Windows", "input": "", "output": "

I uninstalled everything:

\n
pip uninstall torch torchvision torchaudio\n
\n

Then installed the one-step older version:

\n
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0\n
\n

Now, it's working fine.

\n

Initially, I got this problem after the installation of torchvision, torch was working fine before that.

\n", "system": "" }, { "instruction": "CUDA Out of memory when there is plenty available", "input": "", "output": "

You need to empty the torch CUDA cache after some operations (before the point where the error occurs):

\n
torch.cuda.empty_cache()\n
\n", "system": "" }, { "instruction": "How to install pytorch with CUDA support with pip in Visual Studio", "input": "", "output": "

I followed the main instructions for installing on pytorch's site but still had no success. For my setup this resulted in pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117.

\n

I had to add --upgrade --force-reinstall which finally fixed the problem.

\n", "system": "" }, { "instruction": "What is the difference between cuda.amp and model.half()?", "input": "", "output": "

If you convert the entire model to fp16, there is a chance that some of the activation functions and batchnorm layers will cause the fp16 weights to underflow, i.e., become zero. So it is always recommended to use autocast, which internally keeps problematic ops in fp32.

\n

model.half() will store the weights in fp16, whereas with autocast the weights remain in fp32. Training in pure fp16 will be faster than autocast, but with a higher chance of instability if you are not careful.\nWhile using autocast you also need to scale the gradients during backpropagation.

\n
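This difference is easy to observe even on CPU, where autocast uses bfloat16 (a sketch; exact op coverage depends on your PyTorch version):

```python
import torch

model = torch.nn.Linear(8, 8)  # weights created in fp32

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(torch.randn(4, 8))

# autocast casts per-op; the stored weights are untouched.
print(model.weight.dtype)  # torch.float32
print(out.dtype)           # torch.bfloat16

half_model = torch.nn.Linear(8, 8).half()  # .half() converts the stored weights
print(half_model.weight.dtype)  # torch.float16
```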

If the fp16 requirement is on the inference side, I recommend using autocast and then converting to fp16 using ONNX and TensorRT.

\n", "system": "" }, { "instruction": "GPU is not available for Pytorch", "input": "", "output": "\n", "system": "" }, { "instruction": "How to select half precision (BFLOAT16 vs FLOAT16) for your trained model?", "input": "", "output": "

bfloat16 is generally easier to use, because it works as a drop-in replacement for float32. If your code doesn't create nan/inf numbers or turn a non-0 into a 0 with float32, then it shouldn't do it with bfloat16 either, roughly speaking. So, if your hardware supports it, I'd pick that.

\n
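A small demonstration of why bfloat16 is the safer drop-in: it keeps float32's exponent range, so values that overflow float16 survive (with reduced precision):

```python
import torch

big = torch.tensor([70000.0])  # fits easily in float32

# float16 tops out around 65504, so this overflows to inf.
print(big.to(torch.float16))

# bfloat16 keeps the float32 exponent range: finite, but rounded
# (only ~8 bits of mantissa), so the value lands on 70144.
print(big.to(torch.bfloat16))
```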

Check out AMP if you choose float16.

\n", "system": "" }, { "instruction": "Using weights in CrossEntropyLoss and BCELoss (PyTorch)", "input": "", "output": "

Another way you could accomplish your goal is to use reduction='none' when initializing the loss and then multiply the resulting tensor by your weights before computing the mean.\ne.g.

\n
loss = torch.nn.BCELoss(reduction='none')\nmodel = torch.sigmoid\n\nweights = torch.rand(10,1)\ninputs = torch.rand(10,1)\ntargets = torch.rand(10,1)\n\nintermediate_losses = loss(model(inputs), targets)\nfinal_loss = torch.mean(weights*intermediate_losses)\n
\n

Of course for your scenario you still would need to calculate the weights tensor. But hopefully this helps!

\n", "system": "" }, { "instruction": "How to conda install CUDA enabled PyTorch in a Docker container?", "input": "", "output": "

I got it working after many, many tries. Posting the answer here in case it helps anyone.

\n

Basically, I installed pytorch and torchvision through pip (from within the conda environment) and the rest of the dependencies through conda as usual.

\n

This is how the final Dockerfile looks:

\n
# Use nvidia/cuda image\nFROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04\n\n# set bash as current shell\nRUN chsh -s /bin/bash\nSHELL ["/bin/bash", "-c"]\n\n# install anaconda\nRUN apt-get update\nRUN apt-get install -y wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion && \\\n        apt-get clean\nRUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh -O ~/anaconda.sh && \\\n        /bin/bash ~/anaconda.sh -b -p /opt/conda && \\\n        rm ~/anaconda.sh && \\\n        ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \\\n        echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \\\n        find /opt/conda/ -follow -type f -name '*.a' -delete && \\\n        find /opt/conda/ -follow -type f -name '*.js.map' -delete && \\\n        /opt/conda/bin/conda clean -afy\n\n# set path to conda\nENV PATH /opt/conda/bin:$PATH\n\n\n# setup conda virtual environment\nCOPY ./requirements.yaml /tmp/requirements.yaml\nRUN conda update conda \\\n    && conda env create --name camera-seg -f /tmp/requirements.yaml\n\nRUN echo "conda activate camera-seg" >> ~/.bashrc\nENV PATH /opt/conda/envs/camera-seg/bin:$PATH\nENV CONDA_DEFAULT_ENV $camera-seg\n
\n

And this is what the requirements.yaml looks like:

\n
name: camera-seg\nchannels:\n  - defaults\n  - conda-forge\ndependencies:\n  - python=3.6\n  - pip\n  - numpy\n  - pillow\n  - yaml\n  - pyyaml\n  - matplotlib\n  - jupyter\n  - notebook\n  - tensorboardx\n  - tensorboard\n  - protobuf\n  - tqdm\n  - pip:\n    - torch\n    - torchvision\n
\n

Then I built the container using the command docker build -t camera-seg ., and PyTorch is now able to recognize CUDA.

\n", "system": "" }, { "instruction": "What is the difference between an Embedding Layer with a bias immediately afterwards and a Linear Layer in PyTorch", "input": "", "output": "

Embedding

\n
\n

[...] what Embedding does differently than Linear without a bias.

\n
\n

Essentially everything. torch.nn.Embedding is a lookup table; it works the same as torch.Tensor but with a few twists (like possibility to use sparse embedding or default value at specified index).

\n

For example:

\n
import torch\n\nembedding = torch.nn.Embedding(3, 4)\n\nprint(embedding.weight)\n\nprint(embedding(torch.tensor([1])))\n
\n

Would output:

\n
Parameter containing:\ntensor([[ 0.1420, -0.1886,  0.6524,  0.3079],\n        [ 0.2620,  0.4661,  0.7936, -1.6946],\n        [ 0.0931,  0.3512,  0.3210, -0.5828]], requires_grad=True)\ntensor([[ 0.2620,  0.4661,  0.7936, -1.6946]], grad_fn=<EmbeddingBackward>)\n
\n

So we took the first row of the embedding. It does nothing more than that.

\n
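That row-selection behaviour is easy to verify directly:

```python
import torch

embedding = torch.nn.Embedding(3, 4)

# The lookup returns exactly the corresponding rows of the weight table.
idx = torch.tensor([1, 2])
print(torch.equal(embedding(idx), embedding.weight[idx]))  # True
```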

Where is it used?

\n

Usually when we want to encode some meaning (like word2vec) for each row (e.g. words being close semantically are close in euclidean space) and possibly train them.

\n

Linear

\n

torch.nn.Linear (without bias) also holds a torch.Tensor (the weight), but it performs an operation on it (and on the input), which is essentially:

\n
output = input.matmul(weight.t())\n
\n

every time you call the layer (see source code and functional definition of this layer).

\n

Code snippet

\n

The layer in your code snippet does this:

\n\n

This is probably used to train both representations (of users and movies) for some recommender-like system.

\n

Other stuff

\n
\n

I know it does some faster computational version of a dot product\nwhere one of the matrices is a one-hot encoded matrix and the other is\nthe embedding matrix.

\n
\n

No, it doesn't. torch.nn.Embedding can be one hot encoded and might also be sparse, but depending on the algorithms (and whether those support sparsity) there might be performance boost or not.

\n", "system": "" }, { "instruction": "How can I invert a MelSpectrogram with torchaudio and get an audio waveform?", "input": "", "output": "

Just for history, full code:

\n
import torch\nimport torchaudio\nimport IPython\n\nwaveform, sample_rate = torchaudio.load("wavs/LJ030-0196.wav", normalize=True)\n\nn_fft = 256\n\nn_stft = n_fft // 2 + 1\ntransform = torchaudio.transforms.MelSpectrogram(sample_rate, n_fft=n_fft)\ninverse_transform = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=n_stft)\ngrifflim_transform = torchaudio.transforms.GriffinLim(n_fft=n_fft)\n\nmel_specgram = transform(waveform)\ninverse_waveform = inverse_transform(mel_specgram)\npseudo_waveform = grifflim_transform(inverse_waveform)\n
\n

And

\n
IPython.display.Audio(waveform.numpy(), rate=sample_rate)\n
\n
IPython.display.Audio(pseudo_waveform.numpy(), rate=sample_rate)\n
\n", "system": "" }, { "instruction": "Pytorch RuntimeError: expected scalar type Float but found Byte", "input": "", "output": "

This line is the cause of your error:

\n
images = self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))\n
\n

images are uint8 (byte) while the neural network needs inputs as floating point in order to calculate gradients (you can't calculate gradients for backprop using integers as those are not continuous and non-differentiable).

\n

You can use torchvision.transforms.functional.to_tensor to convert the image into float and into [0, 1] like this:

\n
import torchvision\n\nimages = torchvision.transforms.functional.to_tensor(\n    self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))\n)\n
\n

or simply divide by 255 to get values into [0, 1].

\n", "system": "" }, { "instruction": "PyTorch multiprocessing error with Hogwild", "input": "", "output": "

If you modify your code to create new processes like this:

\n
processes = []\nctx = mp.get_context('spawn')\nfor rank in range(num_processes):\n    p = ctx.Process(target=train, args=(model,))\n
\n

it seems to run fine (rest of code same as yours, tested on pytorch 1.5.0 / python 3.6 / NVIDIA T4 GPU).

\n

I'm not completely sure what is carried over from the non-parallel run to the parallel run; I tried creating a completely new model for the two runs (with its own class), and/or deleting anything from the original, and/or making sure to delete any tensors and free up memory, and none of that made any difference.

\n

What did make a difference was making sure that .backward() never got called outside of mp.Process() before it was called by a function within mp.Process(). I think what may be carried over is an autograd thread; if the thread exists before multiprocessing with the default fork method it fails, if the thread is created after fork it seems to work okay, and if using spawn it also works okay.

\n

Btw: That's a really interesting question - thank you especially for digesting it to a minimal example!

\n", "system": "" }, { "instruction": "What is the param `last_epoch` on Pytorch Optimizer's Schedulers is for?", "input": "", "output": "

The last_epoch parameter is used when resuming training and you want to start the scheduler where it left off earlier. Its value is increased every time you call the scheduler's .step(). The default value of -1 indicates that the scheduler starts from the beginning.

\n

From the docs:

\n
\n

Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning.

\n
\n

For example,

\n
>>> import torch\n>>> cc = torch.nn.Conv2d(10,10,3)\n>>> myoptimizer = torch.optim.Adam(cc.parameters(), lr=0.1)\n>>> myscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer,step_size=1, gamma=0.1)\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(0, [0.1])\n>>> myscheduler.step()\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(1, [0.001])\n>>> myscheduler.step()\n>>> myscheduler.last_epoch, myscheduler.get_lr()\n(2, [0.0001])\n
\n

Now, if you decide to stop the training in the middle and then resume it, you can provide the last_epoch parameter to the scheduler so that it starts from where it left off, not from the beginning again.

\n
>>> mynewscheduler = torch.optim.lr_scheduler.StepLR(myoptimizer,step_size=1, gamma=0.1, last_epoch=myscheduler.last_epoch)\n>>> mynewscheduler.last_epoch, mynewscheduler.get_lr()\n(3, [1.0000000000000004e-05])\n
\n", "system": "" }, { "instruction": "Which PyTorch version is CUDA compute capability 3.0 compatible?", "input": "", "output": "

PyTorch 0.3.0 was the last version to support CUDA compute capability 3.0.

\n

Alternatively, you could compile PyTorch from source (which may still support this, I haven't verified).

\n", "system": "" }, { "instruction": "Understanding input shape to PyTorch conv1D?", "input": "", "output": "

In pytorch your input shape of [6, 512, 768] should actually be [6, 768, 512] where the feature length is represented by the channel dimension and sequence length is the length dimension. Then you can define your conv1d with in/out channels of 768 and 100 respectively to get an output of [6, 100, 511].

\n\n

Given an input of shape [6, 512, 768] you can convert it to the correct shape with Tensor.transpose.

\n\n
input = input.transpose(1, 2).contiguous()\n
\n\n

The .contiguous() ensures the memory of the tensor is stored contiguously which helps avoid potential issues during processing.

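Putting it together (the kernel size of 2 here is what produces the 511-length output mentioned above):

```python
import torch
import torch.nn as nn

x = torch.randn(6, 512, 768)        # (batch, seq_len, features)
x = x.transpose(1, 2).contiguous()  # -> (6, 768, 512): channels first

conv = nn.Conv1d(in_channels=768, out_channels=100, kernel_size=2)
out = conv(x)
print(out.shape)  # torch.Size([6, 100, 511])
```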
\n", "system": "" }, { "instruction": "What are saved in optimizer's state_dict? what "state","param_groups" stands for?", "input": "", "output": "

In contrast to model's state_dict, which saves learnable parameters, the optimizer's state_dict contains information about the optimizer\u2019s state (parameters to be optimized), as well as the hyperparameters used.

\n\n
\n\n

All optimizers in PyTorch need to inherit from the base class torch.optim.Optimizer. It requires two entries:

\n\n\n\n

In addition to that, optimizers also support specifying per-parameter options.

\n\n
\n

To do this, instead of passing an iterable of Tensors, pass in an iterable of dicts. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it.

\n
\n\n

Consider an example,

\n\n
optim.SGD([\n                {'params': model.base.parameters()},\n                {'params': model.classifier.parameters(), 'lr': 1e-3}\n            ], lr=1e-2, momentum=0.9)\n
\n\n

Here, we have provided a) the params, b) the default hyperparameters lr and momentum, and c) a parameter group. In this case, model.base's parameters will use the default learning rate of 1e-2, model.classifier's parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.

\n\n
\n\n

The step (optimizer.step()) performs a single optimization step (parameter update), which changes the state of the optimizer.

\n\n
\n\n

Now, coming to optimizer's state_dict, it returns the state of the optimizer as a dict. It contains two entries:

\n\n\n\n
\n\n
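Both entries are easy to inspect on a toy model (a minimal sketch; note that state is populated lazily, only after the first step):

```python
import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

sd = optimizer.state_dict()
print(sorted(sd.keys()))            # ['param_groups', 'state']
print(sd['param_groups'][0]['lr'])  # 0.1 -- hyperparameters live here

# state starts out empty; after a step, the momentum buffers appear.
model(torch.randn(1, 2)).sum().backward()
optimizer.step()
print(len(optimizer.state_dict()['state']) > 0)  # True
```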

Some of the hyperparameters are specific to the optimizer used, e.g. (for Adam):

\n\n\n", "system": "" }, { "instruction": "Implementation of the Dense Synthesizer", "input": "", "output": "

Is the implementation and understanding of the dense synthesizer correct?

\n

Not exactly, linear1 = nn.Linear(d,d) according to the paper and not (d,l).\nOf course this does not work if X.shape = (l,d) according to matrix multiplication rules.

\n

This is because :

\n

\"enter\n\"enter

\n

So F is applied to each Xi in X for i in [1,l]

\n

The resulting matrix B is then passed to the softmax function and multiplied by G(x).\nSo you'd have to modify your code to sequentially process the input then use the returned matrix to compute Y.

\n
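Here is a minimal single-head sketch of those shapes (my own naming for F and G, batch dimension omitted for clarity):

```python
import torch
import torch.nn as nn

class DenseSynthesizer(nn.Module):
    """Sketch of a single-head Dense Synthesizer (Tay et al., 2020)."""
    def __init__(self, d, l):
        super().__init__()
        # F: applied to each X_i in R^d, producing an l-dim row of scores
        self.f = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, l))
        self.g = nn.Linear(d, d)  # value projection G

    def forward(self, x):         # x: (l, d)
        b = self.f(x)             # B: (l, l) synthesized attention scores
        return torch.softmax(b, dim=-1) @ self.g(x)  # Y: (l, d)

y = DenseSynthesizer(d=16, l=10)(torch.randn(10, 16))
print(y.shape)  # torch.Size([10, 16])
```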

how is that different from a multi-layered perceptron that takes in two different inputs and makes uses of it at different point in the forward propagation?

\n

To understand, we need to put things into context, the idea of introducing attention mechanism was first described here in the context of Encoder - Decoder : https://arxiv.org/pdf/1409.0473.pdf

\n

The core idea is to allow the model to have control over how the context vector from the encoder is retrieved using a neural network instead of relying solely on the last encoded state :

\n

\"enter

\n

see this post for more detail.

\n

The Transformers introduced the idea of using "Multi-Head Attention" (see graph below) to reduce the computational burden and focus solely on the attention mechanism itself. post

\n

https://arxiv.org/pdf/1706.03762.pdf

\n

\"enter

\n

\"enter

\n

So where does the Dense synthesizer fits into all of that ?

\n

It simply replaces the Dot product (as illustrated in the first pictures in your post) by F(.). If you replace what's inside the softmax by F you get the equation for Y

\n

\"enter

\n

Conclusion

\n

This is an MLP, but applied stepwise to the input in the context of sequence processing.

\n

Thank you

\n", "system": "" }, { "instruction": "PyTorch slice matrix with vector", "input": "", "output": "

You can specify the corresponding row index as:

\n\n
import torch\nx = torch.tensor([[1, 2, 3],\n                  [4, 5, 6],\n                  [7, 8, 9]])\n\ny = torch.tensor([0, 2, 1])\n\nx[range(x.shape[0]), y]\ntensor([1, 6, 8])\n
\n", "system": "" }, { "instruction": "Pytorch "NCCL error": unhandled system error, NCCL version 2.4.8"", "input": "", "output": "

A number of things can cause this issue, see for example 1, 2. Adding the line

\n
import os\nos.environ["NCCL_DEBUG"] = "INFO"\n
\n

to your script will log more specific debug info leading up to the error, giving you a more helpful error message to google.

\n", "system": "" }, { "instruction": "Pytorch - Concatenating Datasets before using Dataloader", "input": "", "output": "

If I got your question right, you have train and dev sets (and their corresponding loaders) as follows:

\n
train_set = CustomDataset(...)\ntrain_loader = DataLoader(dataset=train_set, ...)\ndev_set = CustomDataset(...)\ndev_loader = DataLoader(dataset=dev_set, ...)\n
\n

And you want to concatenate them in order to use train+dev as the training data, right? If so, you just simply call:

\n
train_dev_sets = torch.utils.data.ConcatDataset([train_set, dev_set])\ntrain_dev_loader = DataLoader(dataset=train_dev_sets, ...)\n
\n

The train_dev_loader is the loader containing data from both sets.

\n

Now, be sure your data has the same shapes and the same types, that is, the same number of features, or the same categories/numbers, etc.

\n", "system": "" }, { "instruction": "How do I add some Gaussian noise to a tensor in PyTorch?", "input": "", "output": "

The function torch.randn produces a tensor with elements drawn from a Gaussian distribution of zero mean and unit variance. Multiply by sqrt(0.1) to have the desired variance.

\n\n
x = torch.zeros(5, 10, 20, dtype=torch.float64)\nx = x + (0.1**0.5)*torch.randn(5, 10, 20)\n
\n", "system": "" }, { "instruction": "How to resize a PyTorch tensor?", "input": "", "output": "

It seems like you are looking for interpolate (a function in nn.functional):

\n
import torch.nn.functional as nnf\n\nx = torch.rand(5, 1, 44, 44)\nout = nnf.interpolate(x, size=(224, 224), mode='bicubic', align_corners=False)\n
\n

If you really care about the accuracy of the interpolation, you should have a look at ResizeRight: a pytorch/numpy package that accurately deals with all sorts of "edge cases" when resizing images. This can have an effect when directly merging features of different scales: inaccurate interpolation may result in misalignments.

\n", "system": "" }, { "instruction": "PyTorch: Dataloader for time series task", "input": "", "output": "

I ended up writing custom dataset as well, though it's a bit different from the answer above:

\n
class TimeseriesDataset(torch.utils.data.Dataset):   \n    def __init__(self, X, y, seq_len=1):\n        self.X = X\n        self.y = y\n        self.seq_len = seq_len\n\n    def __len__(self):\n        return self.X.__len__() - (self.seq_len-1)\n\n    def __getitem__(self, index):\n        return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])\n
\n

And the usage looks like that:

\n
train_dataset = TimeseriesDataset(X_lstm, y_lstm, seq_len=4)\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 3, shuffle = False)\n\nfor i, d in enumerate(train_loader):\n    print(i, d[0].shape, d[1].shape)\n\n>>>\n# shape: tuple((batch_size, seq_len, n_features), (batch_size))\n0 torch.Size([3, 4, 2]) torch.Size([3])\n
\n", "system": "" }, { "instruction": "How to get the filename of a sample from a DataLoader?", "input": "", "output": "

Well, it depends on how your Dataset is implemented. For instance, in the torchvision.datasets.MNIST(...) case, you cannot retrieve the filename simply because there is no such thing as the filename of a single sample (MNIST samples are loaded in a different way).

\n\n

As you did not show your Dataset implementation, I'll tell you how this could be done with the torchvision.datasets.ImageFolder(...) (or any torchvision.datasets.DatasetFolder(...)):

\n\n
f = open(\"test_y\", \"w\")\nwith torch.no_grad():\n    for i, (images, labels) in enumerate(test_loader, 0):\n        outputs = model(images)\n        _, predicted = torch.max(outputs.data, 1)\n        sample_fname, _ = test_loader.dataset.samples[i]\n        f.write(\"{}, {}\\n\".format(sample_fname, predicted.item()))\nf.close()\n
\n\n

You can see that the path of the file is retrieved during the __getitem__(self, index), specifically here.

\n\n

If you implemented your own Dataset (and perhaps would like to support shuffle and batch_size > 1), then I would return the sample_fname on the __getitem__(...) call and do something like this:

\n\n
for i, (images, labels, sample_fname) in enumerate(test_loader, 0):\n    # [...]\n
\n\n

This way you wouldn't need to care about shuffle. And if the batch_size is greater than 1, you would need to change the content of the loop to something more generic, e.g.:

\n\n
f = open(\"test_y\", \"w\")\nfor i, (images, labels, samples_fname) in enumerate(test_loader, 0):\n    outputs = model(images)\n    pred = torch.max(outputs, 1)[1]\n    f.write(\"\\n\".join([\n        \", \".join(x)\n        for x in zip(map(str, pred.cpu().tolist()), samples_fname)\n    ]) + \"\\n\")\nf.close()\n
\n", "system": "" }, { "instruction": "Pytorch custom activation functions?", "input": "", "output": "

There are four possibilities depending on what you are looking for. You will need to ask yourself two questions:

\n

Q1) Will your activation function have learnable parameters?

\n

If yes, you have no choice but to create your activation function as an nn.Module class because you need to store those weights.

\n

If no, you are free to simply create a normal function, or a class, depending on what is convenient for you.

\n

Q2) Can your activation function be expressed as a combination of existing PyTorch functions?

\n

If yes, you can simply write it as a combination of existing PyTorch function and won't need to create a backward function which defines the gradient.

\n

If no you will need to write the gradient by hand.

\n

Example 1: SiLU function

\n

The SiLU function f(x) = x * sigmoid(x) does not have any learned weights and can be written entirely with existing PyTorch functions, thus you can simply define it as a function:

\n
def silu(x):\n    return x * torch.sigmoid(x)\n
\n

and then simply use it as you would have torch.relu or any other activation function.

\n

Example 2: SiLU with learned slope

\n

In this case you have one learned parameter, the slope, thus you need to make a class of it.

\n
class LearnedSiLU(nn.Module):\n    def __init__(self, slope = 1):\n        super().__init__()\n        # wrap the slope in a Parameter so it is registered and trained\n        self.slope = torch.nn.Parameter(slope * torch.ones(1))\n\n    def forward(self, x):\n        return self.slope * x * torch.sigmoid(x)\n
\n

Example 3: with backward

\n

If you have something for which you need to create your own gradient function, you can look at this example: Pytorch: define custom function

\n", "system": "" }, { "instruction": "Why does dim=1 return row indices in torch.argmax?", "input": "", "output": "

It's time to correctly understand how the axis or dim argument work in PyTorch:

\n\n

\"tensor

\n\n
\n\n

The following example should make sense once you comprehend the above picture:

\n\n
\n
       +------> dim-1 (columns)\n       |\n dim-0 |  [[-1.7739,  0.8073,  0.0472, -0.4084],\n (rows)|   [ 0.6378,  0.6575, -1.2970, -0.0625],\n       |   [ 1.7970, -1.3463,  0.9011, -0.8704],\n       v   [ 1.5639,  0.7123,  0.0385,  1.8410]]\n
\n
\n\n
# argmax (indices where max values are present) along dimension-1\nIn [215]: torch.argmax(a, dim=1)\nOut[215]: tensor([1, 1, 0, 3])\n
\n\n
\n\n

Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.

\n", "system": "" }, { "instruction": "torch.nn.sequential vs. combination of multiple torch.nn.linear", "input": "", "output": "

Yes, these two pieces of code create the same network.\nOne way to convince yourself that this is true is to save both models to ONNX.

\n\n
import torch.nn as nn\n\nclass TestModel(nn.Module):\n    def __init__(self, input_dim, hidden_dim, output_dim):\n        super(TestModel, self).__init__()\n        self.fc1 = nn.Linear(input_dim,hidden_dim)\n        self.fc2 = nn.Linear(hidden_dim,output_dim)\n\n    def forward(self, x):\n        x = nn.functional.relu(self.fc1(x))\n        x = nn.functional.softmax(self.fc2(x))\n        return x   \n\n\nclass TestModel2(nn.Module):\n    def __init__(self, input_dim, hidden_dim, output_dim):\n        super(TestModel2, self).__init__()\n        self.seq = nn.Sequential(\n                nn.Linear(input_dim, hidden_dim),\n                nn.ReLU(),\n                nn.Linear(hidden_dim, output_dim),\n                nn.Softmax()\n                )\n\n    def forward(self, x):\n        return self.seq(x)\n\nm = TestModel(1, 2, 3)\nm2 = TestModel2(1, 2, 3)\n\ntorch.onnx.export(m, torch.Tensor([0]), \"test.onnx\", verbose=True)\n/opt/anaconda3/envs/py36/bin/ipython:9: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\ngraph(%0 : Float(1)\n      %1 : Float(2, 1)\n      %2 : Float(2)\n      %3 : Float(3, 2)\n      %4 : Float(3)) {\n  %5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel/Linear[fc1]\n  %6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel/Linear[fc1]\n  %7 : Float(2) = onnx::Add(%6, %2), scope: TestModel/Linear[fc1]\n  %8 : Float(2) = onnx::Relu(%7), scope: TestModel\n  %9 : Float(2!, 3!) 
= onnx::Transpose[perm=[1, 0]](%3), scope: TestModel/Linear[fc2]\n  %10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel/Linear[fc2]\n  %11 : Float(3) = onnx::Add(%10, %4), scope: TestModel/Linear[fc2]\n  %12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel\n  return (%12);\n}\n\ntorch.onnx.export(m2, torch.Tensor([0]), \"test.onnx\", verbose=True)\n/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py:475: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n  result = self._slow_forward(*input, **kwargs)\ngraph(%0 : Float(1)\n      %1 : Float(2, 1)\n      %2 : Float(2)\n      %3 : Float(3, 2)\n      %4 : Float(3)) {\n  %5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel2/Sequential[seq]/Linear[0]\n  %6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel2/Sequential[seq]/Linear[0]\n  %7 : Float(2) = onnx::Add(%6, %2), scope: TestModel2/Sequential[seq]/Linear[0]\n  %8 : Float(2) = onnx::Relu(%7), scope: TestModel2/Sequential[seq]/ReLU[1]\n  %9 : Float(2!, 3!) = onnx::Transpose[perm=[1, 0]](%3), scope: TestModel2/Sequential[seq]/Linear[2]\n  %10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel2/Sequential[seq]/Linear[2]\n  %11 : Float(3) = onnx::Add(%10, %4), scope: TestModel2/Sequential[seq]/Linear[2]\n  %12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel2/Sequential[seq]/Softmax[3]\n  return (%12);\n}\n
\n\n

So both models result in the same ONNX graph with the same operations.

\n", "system": "" }, { "instruction": "How can I make a neural network that has multiple outputs using pytorch?", "input": "", "output": "

Multiple outputs can be trivially achieved with pytorch.

\n\n

Here is one such network.

\n\n
import torch.nn as nn\n\nclass NeuralNetwork(nn.Module):\n  def __init__(self):\n    super(NeuralNetwork, self).__init__()\n    self.linear1 = nn.Linear(in_features = 3, out_features = 1)\n    self.linear2 = nn.Linear(in_features = 3,out_features = 2)\n\n  def forward(self, x):\n    output1 = self.linear1(x)\n    output2 = self.linear2(x)\n    return output1, output2\n
\n", "system": "" }, { "instruction": "Accessing PyTorch GPU matrix from TensorFlow directly", "input": "", "output": "

I am not familiar with TensorFlow, but you can use PyTorch to expose the \"internals\" of a tensor.
\nYou can access the underlying storage of a tensor

\n\n
a.storage()\n
\n\n

Once you have the storage, you can get a pointer to the memory (either CPU or GPU):

\n\n
a.storage().data_ptr()\n
\n\n

You can check if it is pinned or not

\n\n
a.storage().is_pinned()\n
\n\n

And you can pin it

\n\n
a.storage().pin_memory()\n
\n\n

I am not familiar with the interfaces between PyTorch and TensorFlow, but I came across an example of a package (FAISS) directly accessing PyTorch tensors on the GPU.

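Putting those calls together (a CPU sketch; pinning requires a CUDA-capable build, so it is guarded here):

```python
import torch

a = torch.randn(3, 4)                 # ordinary CPU tensor for illustration

print(a.storage().size())             # 12 -- flat view of the 3x4 data
print(hex(a.storage().data_ptr()))    # raw memory address of the buffer
print(a.storage().is_pinned())        # False for a regular CPU tensor

if torch.cuda.is_available():
    # pin_memory() returns a new pinned storage; it does not pin in place
    pinned = a.storage().pin_memory()
    print(pinned.is_pinned())         # True
```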
\n", "system": "" }, { "instruction": "AssertionError: Torch not compiled with CUDA enabled", "input": "", "output": "

To summarize and expand on the comments:

\n\n\n\n

This PyTorch GitHub issue mentions that very few Macs have Nvidia GPUs: https://github.com/pytorch/pytorch/issues/30664

\n\n

IF your Mac does have a CUDA-capable GPU, then to use CUDA commands on macOS you'll need to recompile PyTorch from source with the correct command-line options.

\n", "system": "" }, { "instruction": "Pytorch speed comparison - GPU slower than CPU", "input": "", "output": "

GPU acceleration works by heavy parallelization of computation. A GPU has a huge number of cores; each one is not very powerful, but the sheer number of cores is what matters here.

\n\n

Frameworks like PyTorch do their best to compute as much as possible in parallel. In general, matrix operations are very well suited for parallelization, but it still isn't always possible to parallelize a computation!

\n\n

In your example you have a loop:

\n\n
b = torch.ones(4,4).cuda()\nfor _ in range(1000000):\n    b += b\n
\n\n

You have 1000000 operations, but due to the structure of the code it is impossible to parallelize much of this computation. If you think about it, to compute the next b you need to know the value of the previous (or current) b.

\n\n

So you have 1000000 operations, but each of these has to be computed one after another. Possible parallelization is limited to the size of your tensor. This size though is not very large in your example:

\n\n
torch.ones(4,4)\n
\n\n

So you can only parallelize 16 operations (additions) per iteration.\nAs the CPU has fewer, but much more powerful, cores, it is just much faster for the given example!

\n\n

But things change if you increase the size of the tensor; then PyTorch is able to parallelize much more of the overall computation. I changed the iterations to 1000 (because I did not want to wait so long :), but you can put in any value you like; the relation between CPU and GPU should stay the same.

\n\n

Here are the results for different tensor sizes:

\n\n
#torch.ones(4,4)       - the size you used\nCPU time =  0.00926661491394043\nGPU time =  0.0431208610534668\n\n#torch.ones(40,40)     - CPU gets slower, but still faster than GPU\nCPU time =  0.014729976654052734\nGPU time =  0.04474186897277832\n\n#torch.ones(400,400)   - CPU now much slower than GPU\nCPU time =  0.9702610969543457\nGPU time =  0.04415607452392578\n\n#torch.ones(4000,4000) - GPU much faster then CPU \nCPU time =  38.088677167892456\nGPU time =  0.044649362564086914\n
\n\n

So as you see, where it is possible to parallelize (here, the addition of the tensor elements), the GPU becomes very powerful.
GPU time hardly changes for these calculations; the GPU can handle much more!
(as long as it doesn't run out of memory :)

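One caveat when reproducing these timings: CUDA kernels launch asynchronously, so a fair comparison should only stop the clock after torch.cuda.synchronize(). A sketch of such a benchmark (sizes and iteration counts are illustrative):

```python
import time
import torch

def bench(device, size, iters=1000):
    b = torch.ones(size, size, device=device)
    if device == 'cuda':
        torch.cuda.synchronize()      # wait for setup before starting the clock
    start = time.time()
    for _ in range(iters):
        b += b
    if device == 'cuda':
        torch.cuda.synchronize()      # wait for all queued kernels to finish
    return time.time() - start

print('CPU time =', bench('cpu', 400))
if torch.cuda.is_available():
    print('GPU time =', bench('cuda', 400))
```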
\n", "system": "" }, { "instruction": "How do I load custom image based datasets into Pytorch for use with a CNN?", "input": "", "output": "

Looking at the data from Kaggle and your code, there are problems in your data loading.

\n\n

The data should be in a different folder per class label for PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one train set. You can correct this by using a folder structure like train/daisy, train/dandelion, test/daisy, test/dandelion and then passing the train and the test folder to the train and test ImageFolder respectively. Just change the folder structure and you should be good. Take a look at the official documentation of torchvision.datasets.ImageFolder, which has a similar example.

\n\n
\n\n

As you said, these images are already divided into folders in /images. PyTorch ImageFolder assumes that images are organized in the following way, but this folder structure is only correct if you are using all the images for the train set:

\n\n
```\n/images/daisy/100080576_f52e8ee070_n.jpg\n/images/daisy/10140303196_b88d3d6cec.jpg\n.\n.\n.\n/images/dandelion/10043234166_e6dd915111_n.jpg\n/images/dandelion/10200780773_c6051a7d71_n.jpg\n```\n
\n\n

where 'daisy', 'dandelion' etc. are class labels.

\n\n

The correct folder structure in your case, if you want to split the dataset into train and test sets (note that I know you want to split the dataset into train, validation, and test sets, but that doesn't matter, as this is just an example to get the idea across):

\n\n
```\n/images/train/daisy/100080576_f52e8ee070_n.jpg\n/images/train/daisy/10140303196_b88d3d6cec.jpg\n.\n.\n/images/train/dandelion/10043234166_e6dd915111_n.jpg\n/images/train/dandelion/10200780773_c6051a7d71_n.jpg\n.\n.\n/images/test/daisy/300080576_f52e8ee070_n.jpg\n/images/test/daisy/95140303196_b88d3d6cec.jpg\n.\n.\n/images/test/dandelion/32143234166_e6dd915111_n.jpg\n/images/test/dandelion/65200780773_c6051a7d71_n.jpg\n```\n
\n\n
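If your images currently sit in one folder per class directly under /images, a small script can create that split (a sketch; the class names and the 80/20 ratio are assumptions):

```python
import os
import random
import shutil

def split_dataset(src, classes, train_frac=0.8):
    """Move files from src/<cls>/ into src/train/<cls>/ and src/test/<cls>/."""
    for cls in classes:
        files = sorted(os.listdir(os.path.join(src, cls)))
        random.shuffle(files)
        n_train = int(len(files) * train_frac)
        for subset, names in (('train', files[:n_train]), ('test', files[n_train:])):
            dst = os.path.join(src, subset, cls)
            os.makedirs(dst, exist_ok=True)
            for name in names:
                shutil.move(os.path.join(src, cls, name), os.path.join(dst, name))

# split_dataset('./images', ['daisy', 'dandelion'])
```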

Then, you can refer to the following full code example on how to write a dataloader:

\n\n
import os\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport torch.utils.data as data\nimport torchvision\nfrom torchvision import transforms\n\nEPOCHS = 2\nBATCH_SIZE = 10\nLEARNING_RATE = 0.003\nTRAIN_DATA_PATH = \"./images/train/\"\nTEST_DATA_PATH = \"./images/test/\"\nTRANSFORM_IMG = transforms.Compose([\n    transforms.Resize(256),\n    transforms.CenterCrop(256),\n    transforms.ToTensor(),\n    transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                         std=[0.229, 0.224, 0.225])\n    ])\n\ntrain_data = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG)\ntrain_data_loader = data.DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\ntest_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG)\ntest_data_loader = data.DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\n\nclass CNN(nn.Module):\n    # omitted...\n\nif __name__ == '__main__':\n\n    print(\"Number of train samples: \", len(train_data))\n    print(\"Number of test samples: \", len(test_data))\n    print(\"Detected Classes are: \", train_data.class_to_idx) # classes are detected by folder structure\n\n    model = CNN()\n    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)\n    loss_func = nn.CrossEntropyLoss()\n\n    # Training and Testing\n    for epoch in range(EPOCHS):\n        for step, (x, y) in enumerate(train_data_loader):\n            b_x = Variable(x)   # batch x (image)\n            b_y = Variable(y)   # batch y (target)\n            output = model(b_x)[0]\n            loss = loss_func(output, b_y)\n            optimizer.zero_grad()\n            loss.backward()\n            optimizer.step()\n\n            if step % 50 == 0:\n                # fetch one batch from the test loader (a DataLoader itself cannot be wrapped in Variable)\n                test_x, test_y = next(iter(test_data_loader))\n                test_output, last_layer = model(test_x)\n                pred_y = torch.max(test_output, 1)[1].data.squeeze()\n                accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0))\n                print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)\n
\n", "system": "" }, { "instruction": "Concatenating two tensors with different dimensions in Pytorch", "input": "", "output": "

You could do the broadcasting manually (using Tensor.expand()) before the concatenation (using torch.cat()):

\n
import torch\n\na = torch.randn(15, 200, 2048)\nb = torch.randn(1, 200, 2048)\n\nrepeat_vals = [a.shape[0] // b.shape[0]] + [-1] * (len(b.shape) - 1)\n# or directly repeat_vals = (15, -1, -1) or (15, 200, 2048) if shapes are known and fixed...\nres = torch.cat((a, b.expand(*repeat_vals)), dim=-1)\nprint(res.shape)\n# torch.Size([15, 200, 4096])\n
\n", "system": "" }, { "instruction": "Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small", "input": "", "output": "

Your problem is that before Pool4 your image has already been reduced to a 1x1 pixel size. So you need to either feed a much larger image, at least around double that size (~134x134), or remove a pooling layer from your network.

\n", "system": "" }, { "instruction": "Why detach needs to be called on variable in this example?", "input": "", "output": "

ORIGINAL ANSWER (WRONG / INCOMPLETE)

\n\n

You're right, optimizerD only updates netD and the gradients on netG are not used before netG.zero_grad() is called, so detaching is not necessary, it just saves time, because you're not computing gradients for the generator.

\n\n

You're basically also answering your other question yourself, you don't detach fake in the second block because you specifically want to compute gradients on netG to be able to update its parameters.

\n\n

Note how in the second block real_label is used as the corresponding label for fake, so if the discriminator finds the fake input to be real, the final loss is small, and vice versa, which is precisely what you want for the generator. Not sure if that's what confused you, but it's really the only difference compared to training the discriminator on fake inputs.

\n\n

EDIT

\n\n

Please see FatPanda's comment! My original answer is in fact incorrect. Pytorch destroys (parts of) the compute graph when .backward() is called. Without detaching before errD_fake.backward() the errG.backward() call later would not be able to backprop into the generator because the required graph is no longer available (unless you specify retain_graph=True). I'm relieved Soumith made the same mistake :D

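A minimal runnable sketch of that pattern with toy networks (the shapes, modules, and losses here are illustrative, not the DCGAN ones):

```python
import torch
import torch.nn as nn

netG = nn.Linear(4, 4)                                # toy generator
netD = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())   # toy discriminator
criterion = nn.BCELoss()
optimizerD = torch.optim.SGD(netD.parameters(), lr=0.1)
optimizerG = torch.optim.SGD(netG.parameters(), lr=0.1)

noise = torch.randn(8, 4)
fake = netG(noise)

# Discriminator step: detach, so errD_fake.backward() neither computes
# generator gradients nor frees the generator's part of the graph.
errD_fake = criterion(netD(fake.detach()), torch.zeros(8, 1))
errD_fake.backward()
optimizerD.step()

# Generator step: no detach here -- gradients must reach netG, and the
# graph through netG is still intact because it was never backprop'ed.
errG = criterion(netD(fake), torch.ones(8, 1))
errG.backward()
optimizerG.step()
```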
\n", "system": "" }, { "instruction": "PyTorch: Extract learned weights correctly", "input": "", "output": "

Use model.parameters() to get the trainable weights for any model or layer. Remember to put it inside list(), or you cannot print it out (it is a generator).

\n\n

The following code snippet works:

\n\n
>>> import torch\n>>> import torch.nn as nn\n>>> l = nn.Linear(3,5)\n>>> w = list(l.parameters())\n>>> w\n
\n", "system": "" }, { "instruction": "Accuracy score in pyTorch LSTM", "input": "", "output": "

I would use NumPy in order not to iterate the list in pure Python.

\n\n

The results are the same, but it runs much faster

\n\n
def accuracy_score(y_true, y_pred):\n    y_pred = np.concatenate(tuple(y_pred))\n    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)\n    return (y_true == y_pred).sum() / float(len(y_true))\n
\n\n

And this is how to use it:

\n\n
#original code:\ny_pred = list(predict([s for s, t in training_data]))\ny_true = [t for s, t in training_data]\n#numpy accuracy score\nprint(accuracy_score(y_true, y_pred))\n
\n", "system": "" }, { "instruction": "PyTorch: Dataloader for time series task", "input": "", "output": "

I ended up writing custom dataset as well, though it's a bit different from the answer above:

\n
class TimeseriesDataset(torch.utils.data.Dataset):   \n    def __init__(self, X, y, seq_len=1):\n        self.X = X\n        self.y = y\n        self.seq_len = seq_len\n\n    def __len__(self):\n        return self.X.__len__() - (self.seq_len-1)\n\n    def __getitem__(self, index):\n        return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])\n
\n

And the usage looks like that:

\n
train_dataset = TimeseriesDataset(X_lstm, y_lstm, seq_len=4)\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 3, shuffle = False)\n\nfor i, d in enumerate(train_loader):\n    print(i, d[0].shape, d[1].shape)\n\n>>>\n# shape: tuple((batch_size, seq_len, n_features), (batch_size))\n0 torch.Size([3, 4, 2]) torch.Size([3])\n
\n", "system": "" }, { "instruction": "What's the workaround for "ragged/jagged tensors" in PyTorch?", "input": "", "output": "

PyTorch is implementing something called NestedTensors which seems to have pretty much the same purpose as RaggedTensors in Tensorflow. You can follow the RFC and progress here.

\n", "system": "" }, { "instruction": "Suboptimal convergence in PyTorch compared to TensorFlow when using Adam optimizer", "input": "", "output": "

The default epsilon in TF is 1e-7 not 1e-8. See here and here.

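So when porting a model, passing TF's epsilon explicitly in PyTorch removes this difference (a sketch; the model and learning rate are illustrative):

```python
import torch

model = torch.nn.Linear(10, 1)

# PyTorch's Adam defaults to eps=1e-8; match TensorFlow's default of 1e-7
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, eps=1e-7)
print(optimizer.defaults['eps'])   # 1e-07
```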
\n", "system": "" }, { "instruction": "_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv", "input": "", "output": "

This error usually shows up when there's a compatibility issue between the installed PyTorch version and the detector library version (Detectron2 or mmdet).

\n

Both the detector library and PyTorch have to be built with the same CUDA version; otherwise some packages will conflict when training your model.

\n

There's a possibility that the Pytorch (1.8.1) + CUDA version (11.1) you have is incompatible with detectron2 v(0.4).

\n

From the repo here, detectron2 v0.4 is built with torch 1.8 + cuda 11.1. It might help if you use torch 1.8.0 instead of 1.8.1

\n", "system": "" }, { "instruction": "What output_padding does in nn.ConvTranspose2d?", "input": "", "output": "

According to the documentation here: https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html, when applying a Conv2d operation with stride > 1 you can get the same output dimensions from different inputs. For example, 7x7 and 8x8 inputs would both return a 3x3 output with stride=2:

\n
import torch\n\nconv_inp1 = torch.rand(1,1,7,7)\nconv_inp2 = torch.rand(1,1,8,8)\n\nconv1 = torch.nn.Conv2d(1, 1, kernel_size = 3, stride = 2)\n\nout1 = conv1(conv_inp1)     \nout2 = conv1(conv_inp2)\nprint(out1.shape)         # torch.Size([1, 1, 3, 3])\nprint(out2.shape)         # torch.Size([1, 1, 3, 3])\n
\n

And when applying the transpose convolution, it is ambiguous which output shape to return, 7x7 or 8x8, for a stride=2 transpose convolution. The output_padding parameter lets PyTorch resolve this ambiguity. Note that it doesn't pad the output with zeros or anything; it is just a way to determine the output shape and apply the transpose convolution accordingly.

\n
conv_t1 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2)\nconv_t2 = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, output_padding=1)\ntransposed1 = conv_t1(out1)\ntransposed2 = conv_t2(out2)\n\nprint(transposed1.shape)      # torch.Size([1, 1, 7, 7])\nprint(transposed2.shape)      # torch.Size([1, 1, 8, 8])\n
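These shapes follow from the ConvTranspose2d output size formula (assuming dilation=1): H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding. A quick check for the two cases above:

```python
def conv_transpose_out(h_in, kernel_size, stride=1, padding=0, output_padding=0):
    # ConvTranspose2d output size formula, assuming dilation=1
    return (h_in - 1) * stride - 2 * padding + kernel_size + output_padding

print(conv_transpose_out(3, kernel_size=3, stride=2))                    # 7
print(conv_transpose_out(3, kernel_size=3, stride=2, output_padding=1))  # 8
```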
\n", "system": "" }, { "instruction": "Transformer Model Output Nan Values in Pytorch", "input": "", "output": "

For what it's worth, I had this problem and it turned out that I had forgotten to initialize an embedding vector, so it was just whatever torch.empty() happened to come upon (likely a lot of zeros).

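For context: torch.empty() returns uninitialized memory, so any parameter built from it must be initialized explicitly; nn.Embedding does this on construction. A sketch of the difference (sizes are illustrative):

```python
import torch
import torch.nn as nn

# Risky: torch.empty() is uninitialized memory -- the values are whatever
# happens to be in that buffer, which can silently produce NaNs downstream
weights = nn.Parameter(torch.empty(100, 32))
nn.init.normal_(weights)                # an explicit init fixes it

# Safer: nn.Embedding initializes its weight (N(0, 1)) when constructed
emb = nn.Embedding(num_embeddings=100, embedding_dim=32)
ids = torch.tensor([0, 5, 99])
vectors = emb(ids)
print(vectors.shape)                    # torch.Size([3, 32])
print(torch.isnan(vectors).any())       # tensor(False)
```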
\n", "system": "" }, { "instruction": "How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch", "input": "", "output": "
\n

How can I decrease Dedicated GPU memory usage and use Shared GPU memory for CUDA and Pytorch

\n
\n

Short answer: you cannot.

\n

Details: I believe this answer covers all the information that you need.

\n

You can reduce the memory usage by lowering the batch size, as @John Stud commented, or by using automatic mixed precision, as @Dwight Foster suggested.

\n

While training, you can implement gradient accumulation to reduce the batch size without affecting performance.

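A minimal sketch of gradient accumulation (here a physical batch of 8 with 4 accumulation steps behaves like an effective batch of 32; the model and random data are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
accum_steps = 4                                  # effective batch = 8 * 4 = 32

optimizer.zero_grad()
for step in range(16):
    x = torch.randn(8, 10)                       # small physical batch that fits in memory
    y = torch.randint(0, 2, (8,))
    loss = criterion(model(x), y) / accum_steps  # scale so gradients average correctly
    loss.backward()                              # gradients accumulate in .grad buffers
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # one update per effective batch
        optimizer.zero_grad()
```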
\n", "system": "" }, { "instruction": "PyTorch: while loading batched data using Dataloader, how to transfer the data to GPU automatically", "input": "", "output": "

You can modify the collate_fn to handle several items at once:

\n
from torch.utils.data.dataloader import default_collate\n\ndevice = torch.device('cuda:0')  # or whatever device/cpu you like\n\n# the new collate function is quite generic\nloader = DataLoader(demo, batch_size=50, shuffle=True, \n                    collate_fn=lambda x: tuple(x_.to(device) for x_ in default_collate(x)))\n
\n

Note that if you want to have multiple workers for the dataloader, you'll need to add

\n
torch.multiprocessing.set_start_method('spawn')\n
\n

after your if __name__ == '__main__' (see this issue).

\n

Having said that, it seems like using pin_memory=True in your DataLoader would be much more efficient. Have you tried this option?
\nSee memory pinning for more information.

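For completeness, pinned batches also allow asynchronous host-to-device copies via non_blocking=True (a sketch; the dataset is illustrative and it falls back gracefully on CPU-only machines):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
use_cuda = torch.cuda.is_available()
loader = DataLoader(dataset, batch_size=10, pin_memory=use_cuda)
device = torch.device('cuda' if use_cuda else 'cpu')

for x, y in loader:
    # non_blocking only has an effect when the source batch is in pinned memory
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ... forward/backward would go here
```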
\n
\n

Update (Feb 8th, 2021)
\nThis post made me look at my "data-to-model" time spent during training.\nI compared three alternatives:

\n
    \n
  1. DataLoader works on CPU and only after the batch is retrieved data is moved to GPU.
  2. Same as (1) but with pin_memory=True in DataLoader.
  3. The proposed method of using collate_fn to move data to GPU.
\n

From my limited experimentation it seems like the second option performs best (but not by a big margin).
\nThe third option required fussing about the start_method of the data loader processes, and it seems to incur an overhead at the beginning of each epoch.

\n", "system": "" }, { "instruction": "Pytorch CUDA error: no kernel image is available for execution on the device on RTX 3090 with cuda 11.1", "input": "", "output": "

Found a fix for my problem here: https://github.com/pytorch/pytorch/issues/31285#issuecomment-739139454

\n
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\n
\n

Then my code snippet gives:

\n
A 3.7.5 (default, Nov  7 2019, 10:50:52) \n[GCC 8.3.0]\nB 1.8.0.dev20210115+cu110\nC True\nD True\nE _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24267MB, multi_processor_count=82)\nF tensor([1., 2.], device='cuda:0')\n
\n", "system": "" }, { "instruction": "PyTorch : How to apply the same random transformation to multiple image?", "input": "", "output": "

Usually a workaround is to apply the transform on the first image, retrieve the parameters of that transform, then apply a deterministic transform with those parameters to the remaining images. However, here RandomChoice does not provide an API to get the parameters of the applied transform, since it involves a variable number of transforms.\nIn those cases, I usually implement an override of the original function.

\n

Looking at the torchvision implementation, it's as simple as:

\n
class RandomChoice(RandomTransforms):\n    def __call__(self, img):\n        t = random.choice(self.transforms)\n        return t(img)\n
\n

Here are two possible solutions.

\n
    \n
  1. You can either sample from the transform list on __init__ instead of on __call__:

    \n
    import random\nimport torch\nimport torchvision.transforms as T\n\nclass RandomChoice(torch.nn.Module):\n    def __init__(self, transforms):\n        super().__init__()\n        self.transforms = transforms\n        self.t = random.choice(self.transforms)\n\n    def __call__(self, img):\n        return self.t(img)\n
    \n

    So you can do:

    \n
    transform = RandomChoice([\n     T.RandomHorizontalFlip(), \n     T.RandomVerticalFlip()\n])\ndisplay(transform(img_a)) # both img_a and img_b will\ndisplay(transform(img_b)) # have the same transform\n\ntransform = RandomChoice([\n    T.RandomHorizontalFlip(), \n    T.RandomVerticalFlip()\n])\ndisplay(transform(img_c)) # both img_c and img_d will\ndisplay(transform(img_d)) # have the same transform\n
    \n
\n
\n
    \n
  2. Or better yet, transform the images in batch:

    \n
    import random\nimport torch\nimport torchvision.transforms as T\n\nclass RandomChoice(torch.nn.Module):\n    def __init__(self, transforms):\n        super().__init__()\n        self.transforms = transforms\n\n    def __call__(self, imgs):\n        t = random.choice(self.transforms)\n        return [t(img) for img in imgs]\n
    \n

    Which allows to do:

    \n
    transform = RandomChoice([\n     T.RandomHorizontalFlip(), \n     T.RandomVerticalFlip()\n])\n\nimg_at, img_bt = transform([img_a, img_b])\ndisplay(img_at) # both img_a and img_b will\ndisplay(img_bt) # have the same transform\n\nimg_ct, img_dt = transform([img_c, img_d])\ndisplay(img_ct) # both img_c and img_d will\ndisplay(img_dt) # have the same transform\n
    \n
\n", "system": "" }, { "instruction": "How to Use Class Weights with Focal Loss in PyTorch for Imbalanced dataset for MultiClass Classification", "input": "", "output": "

I think the OP would've gotten their answer by now. I am writing this for other people who might stumble upon this.

\n

There is one problem in OP's implementation of Focal Loss:

\n
    \n
  1. F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
\n

In this line, the same alpha value is multiplied with every class output probability, i.e. (pt). Additionally, the code doesn't show how we get pt. A very good implementation of Focal Loss can be found in What is Focal Loss and when should you use it. But that implementation is only for binary classification, as it has alpha and 1-alpha for the two classes in the self.alpha tensor.

\n

In the case of multi-class or multi-label classification, the self.alpha tensor should contain a number of elements equal to the total number of labels. The values could be the inverse label frequency or the inverse normalized label frequency (just be cautious with labels which have a frequency of 0).

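For illustration, here is one way such a multi-class focal loss can be written, with self.alpha holding one weight per class (a sketch; the gamma value and weights are assumptions, and pt is the predicted probability of the true class):

```python
import torch
import torch.nn.functional as F

class FocalLoss(torch.nn.Module):
    def __init__(self, alpha, gamma=2.0):
        super().__init__()
        self.alpha = alpha      # shape [num_classes], e.g. inverse label frequencies
        self.gamma = gamma

    def forward(self, logits, targets):
        log_pt = F.log_softmax(logits, dim=-1)
        log_pt = log_pt.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
        pt = log_pt.exp()
        at = self.alpha[targets]                # per-sample weight picked by true class
        return (-at * (1 - pt) ** self.gamma * log_pt).mean()

alpha = torch.tensor([0.2, 0.3, 0.5])           # illustrative per-class weights
loss_fn = FocalLoss(alpha)
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(loss_fn(logits, targets))
```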
\n", "system": "" }, { "instruction": "Error while installing PyTorch using pip - cannot build wheel", "input": "", "output": "

From your error:

\n
\n

Exception: You tried to install "pytorch". The package named for PyTorch is "torch"

\n
\n

which tells you what you need to know, instead of

\n
pip install pytorch\n
\n

it should be

\n
pip install torch\n
\n
\n

I downloaded the matching wheel from here, but am couldn't figure out what to do with it

\n
\n

Installing .whl files is as easy as

\n
pip install <path to .whl file>\n
\n
\n

My Python installation is using anaconda3

\n
\n

That is very relevant. You should generally avoid using pip in a conda environment as much as possible. Instead, you can find the correct conda install command for your setup (CUDA version etc.) on pytorch.org; e.g. for CUDA 11 it would be

\n
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch\n
\n", "system": "" }, { "instruction": "PyTorch torch.no_grad() versus requires_grad=False", "input": "", "output": "

This is an older discussion, which has changed slightly over the years (mainly due to the purpose of with torch.no_grad() as a pattern). An excellent answer that partly answers your question as well can already be found on Stack Overflow.
\nHowever, since the original question is vastly different, I'll refrain from marking as duplicate, especially due to the second part about the memory.

\n

An initial explanation of no_grad is given here:

\n
\n

with torch.no_grad() is a context manager and is used to prevent calculating gradients [...].

\n
\n

requires_grad on the other hand is used

\n
\n

to freeze part of your model and train the rest [...].

\n
\n

Source: again, the SO post.

\n

Essentially, with requires_grad you are just disabling parts of a network, whereas no_grad will not store any gradients at all, since you're likely using it for inference and not training.
\nTo analyze the behavior of your combinations of parameters, let us investigate what is happening:

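A small experiment makes the difference concrete (a sketch):

```python
import torch

w = torch.randn(3, requires_grad=True)         # trainable tensor
frozen = torch.randn(3, requires_grad=False)   # "frozen" part of the model

# requires_grad=False: a graph is still built (because of w), but no
# gradient is accumulated for `frozen`
out = (w * frozen).sum()
print(out.requires_grad)       # True
out.backward()
print(w.grad is not None)      # True
print(frozen.grad is None)     # True

# torch.no_grad(): no graph is recorded at all, so backward() is impossible
with torch.no_grad():
    out = (w * frozen).sum()
print(out.requires_grad)       # False
```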
\n\n", "system": "" }, { "instruction": "PyTorch Lightning move tensor to correct device in validation_epoch_end", "input": "", "output": "

Did you check part 3.4 (page 34) of the doc you linked?

\n
\n

LightningModules know what device they are on! construct tensors on the device directly to avoid CPU->Device transfer

\n
\n
t = torch.rand(2, 2).cuda()                # bad\nt = torch.rand(2, 2, device=self.device)   # good (self is a LightningModule)\n
\n

I had a similar issue when creating tensors, and this helped me. I hope it will help you too.

\n", "system": "" }, { "instruction": "RuntimeError: Given groups=1, weight of size [32, 3, 16, 16, 16], expected input[100, 16, 16, 16, 3] to have 3 channels, but got 16 channels instead", "input": "", "output": "

nn.Conv3d expects the input to have size [batch_size, channels, depth, height, width]. The first convolution expects 3 channels, but with your input having size [100, 16, 16, 16, 3], that would be 16 channels.

\n\n

Assuming that your data is given as [batch_size, depth, height, width, channels], you need to swap the dimensions around, which can be done with torch.Tensor.permute:

\n\n
# From: [batch_size, depth, height, width, channels]\n# To: [batch_size, channels, depth, height, width]\ninput = input.permute(0, 4, 1, 2, 3)\n
\n", "system": "" }, { "instruction": "Downloading transformers models to use offline", "input": "", "output": "

One relatively easy way to deal with this issue is to simply "rename" the pretrained models, as is detailed in this thread.

\n

Essentially, all you have to do is something like this for whatever model you're trying to work with:

\n
from transformers import BertModel\n\nmodel = BertModel.from_pretrained("bert-base-uncased")\nmodel.save_pretrained("./my_named_bert")\n
\n

The thread also details how the local model folders are named, see LysandreJik's post:

\n
\n

Hi, they are named as such because that's a clean way to make sure the model on the S3 is the same as the model in the cache. The name is created from the etag of the file hosted on the S3. [...]

\n
\n", "system": "" }, { "instruction": "How to fine tune BERT on unlabeled data?", "input": "", "output": "

The important distinction to make here is whether you want to fine-tune your model, or whether you want to expose it to additional pretraining.

\n

The former is simply a way to train BERT to adapt to a specific supervised task, for which you generally need on the order of 1000 or more labeled samples.

\n

Pretraining, on the other hand, is basically trying to help BERT better "understand" data from a certain domain, by basically continuing its unsupervised training objective ([MASK]ing specific words and trying to predict what word should be there), for which you do not need labeled data.

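Mechanically, that pretraining objective masks a fraction of the input ids and trains the model to recover them. A simplified sketch of the masking step in plain PyTorch (real BERT masking also replaces some chosen tokens with random ids and leaves some unchanged):

```python
import torch

def mask_tokens(tokens, mask_id, mlm_prob=0.15):
    """BERT-style MLM masking, simplified (no 80/10/10 split)."""
    labels = tokens.clone()
    mask = torch.rand(tokens.shape) < mlm_prob   # choose ~15% of positions
    inputs = tokens.clone()
    inputs[mask] = mask_id                       # hide the chosen tokens
    labels[~mask] = -100                         # CrossEntropyLoss ignores -100
    return inputs, labels

tokens = torch.tensor([12, 7, 431, 9, 88])          # illustrative token ids
inputs, labels = mask_tokens(tokens, mask_id=103)   # 103 is BERT's [MASK] id
# a masked-LM head is then trained to predict `labels` from `inputs`
```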
\n

If your ultimate objective is sentence embeddings, however, I would strongly suggest you have a look at Sentence Transformers, which is based on a slightly outdated version of Huggingface's transformers library but primarily tries to generate high-quality embeddings. Note that there are ways to train with surrogate losses, where you try to emulate some form of loss that is relevant for embeddings.

\n

Edit: The author of Sentence-Transformers recently joined Huggingface, so I expect support to greatly improve over the upcoming months!

\n", "system": "" }, { "instruction": "How does one reset the dataloader in pytorch?", "input": "", "output": "

To reset a DataLoader, just enumerate the loader again. Each call to enumerate(loader) starts from the beginning.

\n\n

To avoid breaking transforms that use random values, reset the random seed each time the DataLoader is initialized.

\n\n
def seed_init_fn(x):\n   seed = args.seed + x\n   np.random.seed(seed)\n   random.seed(seed)\n   torch.manual_seed(seed)\n   return\n\nloader = torch.utils.data.DataLoader(...., worker_init_fn = seed_init_fn)\n\nwhile True:\n   for i,data in enumerate(loader):\n      # will always yield same data\n
\n\n

See worker_init_fn in the documents:

\n\n

https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader

\n\n

Here is a better example:

\n\n

https://github.com/pytorch/pytorch/issues/5059#issuecomment-404232359

\n", "system": "" }, { "instruction": "What does the copy_initial_weights documentation mean in the higher library for Pytorch?", "input": "", "output": "

Short version

\n\n

A call to higher.innerloop_ctx with model as argument creates a temporary patched model and an unrolled optimizer for that model: (fmodel, diffopt). It is expected that in the inner loop fmodel will iteratively receive some input, compute the output and loss, and then diffopt.step(loss) will be called. Each time diffopt.step is called, fmodel will create the next version of its parameters, fmodel.parameters(time=T), which is a new tensor computed using the previous ones (with the full graph allowing gradients to be computed through the process). If at any point the user calls backward on any tensor, regular pytorch gradient computation/accumulation will start in a way that allows gradients to propagate to e.g. the optimizer's parameters (such as lr, momentum - if they were passed as tensors requiring gradients to higher.innerloop_ctx using override).

\n\n

The creation-time version of fmodel's parameters, fmodel.parameters(time=0), is a copy of the original model's parameters. If copy_initial_weights=True is provided (the default), then fmodel.parameters(time=0) will be a clone+detach'ed version of the model's parameters (i.e. it will preserve the values, but will sever all connections to the original model). If copy_initial_weights=False is provided, then fmodel.parameters(time=0) will be a clone'd version of the model's parameters, thus allowing gradients to propagate to the original model's parameters (see the pytorch doc on clone).

\n\n

Terminology clarifications

\n\n\n\n

Long version

\n\n

Let's start from the beginning. Main functionality (only functionality, really) of higher library is unrolling of a model's parameter optimization in a differentiable manner. It can come either in the form of directly using differentiable optimizer through e.g. higher.get_diff_optim as in this example or in the form of higher.innerloop_ctx as in this example.

\n\n

The higher.innerloop_ctx option wraps the creation of a \"stateless\" model fmodel from your existing model and gives you an \"optimizer\" diffopt for this fmodel. So, as summarized in the README.md of higher, it allows you to switch from:

\n\n
model = MyModel()\nopt = torch.optim.Adam(model.parameters())\n\nfor xs, ys in data:\n    opt.zero_grad()\n    logits = model(xs)\n    loss = loss_function(logits, ys)\n    loss.backward()\n    opt.step()\n
\n\n

to

\n\n
model = MyModel()\nopt = torch.optim.Adam(model.parameters())\n\nwith higher.innerloop_ctx(model, opt) as (fmodel, diffopt):\n    for xs, ys in data:\n        logits = fmodel(xs)  # modified `params` can also be passed as a kwarg\n        loss = loss_function(logits, ys)  # no need to call loss.backwards()\n        diffopt.step(loss)  # note that `step` must take `loss` as an argument!\n\n    # At the end of your inner loop you can obtain these e.g. ...\n    grad_of_grads = torch.autograd.grad(\n        meta_loss_fn(fmodel.parameters()), fmodel.parameters(time=0))\n
\n\n

The difference between training the model and calling diffopt.step to update fmodel is that fmodel does not update the parameters in-place, as opt.step() in the original code would do. Instead, each time diffopt.step is called, new versions of the parameters are created in such a way that fmodel uses the new ones for the next step, while all previous ones are still preserved.

\n\n

I.e. fmodel starts with only fmodel.parameters(time=0) available, but after you called diffopt.step N times you can ask fmodel to give you fmodel.parameters(time=i) for any i up to N inclusive. Notice that fmodel.parameters(time=0) doesn't change in this process at all, just every time fmodel is applied to some input it will use the latest version of parameters it currently has.

\n\n

Now, what exactly is fmodel.parameters(time=0)? It is created here and depends on copy_initial_weights. If copy_initial_weights==True then fmodel.parameters(time=0) are clone'd and detach'ed parameters of model. Otherwise they are only clone'd, but not detach'ed!

\n\n

That means that when we do meta-optimization step, the original model's parameters will actually accumulate gradients if and only if copy_initial_weights==False. And in MAML we want to optimize model's starting weights so we actually do need to get gradients from meta-optimization step.

\n\n

 I think one of the issues here is that higher lacks simpler toy examples to demonstrate what is going on, and instead rushes to more serious things in its examples. So let me try to fill that gap here and demonstrate what is going on using the simplest toy example I could come up with (a model with 1 weight which multiplies the input by that weight): 

\n\n
import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport higher\nimport numpy as np\n\nnp.random.seed(1)\ntorch.manual_seed(3)\nN = 100\nactual_multiplier = 3.5\nmeta_lr = 0.00001\nloops = 5 # how many iterations in the inner loop we want to do\n\nx = torch.tensor(np.random.random((N,1)), dtype=torch.float64) # features for inner training loop\ny = x * actual_multiplier # target for inner training loop\nmodel = nn.Linear(1, 1, bias=False).double() # simplest possible model - multiple input x by weight w without bias\nmeta_opt = optim.SGD(model.parameters(), lr=meta_lr, momentum=0.)\n\n\ndef run_inner_loop_once(model, verbose, copy_initial_weights):\n    lr_tensor = torch.tensor([0.3], requires_grad=True)\n    momentum_tensor = torch.tensor([0.5], requires_grad=True)\n    opt = optim.SGD(model.parameters(), lr=0.3, momentum=0.5)\n    with higher.innerloop_ctx(model, opt, copy_initial_weights=copy_initial_weights, override={'lr': lr_tensor, 'momentum': momentum_tensor}) as (fmodel, diffopt):\n        for j in range(loops):\n            if verbose:\n                print('Starting inner loop step j=={0}'.format(j))\n                print('    Representation of fmodel.parameters(time={0}): {1}'.format(j, str(list(fmodel.parameters(time=j)))))\n                print('    Notice that fmodel.parameters() is same as fmodel.parameters(time={0}): {1}'.format(j, (list(fmodel.parameters())[0] is list(fmodel.parameters(time=j))[0])))\n            out = fmodel(x)\n            if verbose:\n                print('    Notice how `out` is `x` multiplied by the latest version of weight: {0:.4} * {1:.4} == {2:.4}'.format(x[0,0].item(), list(fmodel.parameters())[0].item(), out[0].item()))\n            loss = ((out - y)**2).mean()\n            diffopt.step(loss)\n\n        if verbose:\n            # after all inner training let's see all steps' parameter tensors\n            print()\n            print(\"Let's print all intermediate parameters versions after inner loop is 
done:\")\n            for j in range(loops+1):\n                print('    For j=={0} parameter is: {1}'.format(j, str(list(fmodel.parameters(time=j)))))\n            print()\n\n        # let's imagine now that our meta-learning optimization is trying to check how far we got in the end from the actual_multiplier\n        weight_learned_after_full_inner_loop = list(fmodel.parameters())[0]\n        meta_loss = (weight_learned_after_full_inner_loop - actual_multiplier)**2\n        print('  Final meta-loss: {0}'.format(meta_loss.item()))\n        meta_loss.backward() # will only propagate gradient to original model parameter's `grad` if copy_initial_weight=False\n        if verbose:\n            print('  Gradient of final loss we got for lr and momentum: {0} and {1}'.format(lr_tensor.grad, momentum_tensor.grad))\n            print('  If you change number of iterations \"loops\" to much larger number final loss will be stable and the values above will be smaller')\n        return meta_loss.item()\n\nprint('=================== Run Inner Loop First Time (copy_initial_weights=True) =================\\n')\nmeta_loss_val1 = run_inner_loop_once(model, verbose=True, copy_initial_weights=True)\nprint(\"\\nLet's see if we got any gradient for initial model parameters: {0}\\n\".format(list(model.parameters())[0].grad))\n\nprint('=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\\n')\nmeta_loss_val2 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)\nprint(\"\\nLet's see if we got any gradient for initial model parameters: {0}\\n\".format(list(model.parameters())[0].grad))\n\nprint('=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\\n')\nfinal_meta_gradient = list(model.parameters())[0].grad.item()\n# Now let's double-check `higher` library is actually doing what it promised to do, not just giving us\n# a bunch of hand-wavy statements and difficult to read code.\n# We will 
do a simple SGD step using meta_opt changing initial weight for the training and see how meta loss changed\nmeta_opt.step()\nmeta_opt.zero_grad()\nmeta_step = - meta_lr * final_meta_gradient # how much meta_opt actually shifted inital weight value\nmeta_loss_val3 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)\n\nmeta_loss_gradient_approximation = (meta_loss_val3 - meta_loss_val2) / meta_step\n\nprint()\nprint('Side-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: {0:.4} VS {1:.4}'.format(meta_loss_gradient_approximation, final_meta_gradient))\n
\n\n

Which produces this output:

\n\n
=================== Run Inner Loop First Time (copy_initial_weights=True) =================\n\nStarting inner loop step j==0\n    Representation of fmodel.parameters(time=0): [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=0): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.9915 == -0.4135\nStarting inner loop step j==1\n    Representation of fmodel.parameters(time=1): [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=1): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * -0.1217 == -0.05075\nStarting inner loop step j==2\n    Representation of fmodel.parameters(time=2): [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=2): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 1.015 == 0.4231\nStarting inner loop step j==3\n    Representation of fmodel.parameters(time=3): [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=3): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.064 == 0.8607\nStarting inner loop step j==4\n    Representation of fmodel.parameters(time=4): [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    Notice that fmodel.parameters() is same as fmodel.parameters(time=4): True\n    Notice how `out` is `x` multiplied by the latest version of weight: 0.417 * 2.867 == 1.196\n\nLet's print all intermediate parameters versions after inner loop is done:\n    For j==0 parameter is: [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]\n    For j==1 parameter is: [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==2 
parameter is: [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==3 parameter is: [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==4 parameter is: [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n    For j==5 parameter is: [tensor([[3.3908]], dtype=torch.float64, grad_fn=<AddBackward0>)]\n\n  Final meta-loss: 0.011927987982895929\n  Gradient of final loss we got for lr and momentum: tensor([-1.6295]) and tensor([-0.9496])\n  If you change number of iterations \"loops\" to much larger number final loss will be stable and the values above will be smaller\n\nLet's see if we got any gradient for initial model parameters: None\n\n=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\n\n  Final meta-loss: 0.011927987982895929\n\nLet's see if we got any gradient for initial model parameters: tensor([[-0.0053]], dtype=torch.float64)\n\n=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\n\n  Final meta-loss: 0.01192798770078706\n\nSide-by-side meta_loss_gradient_approximation and gradient computed by `higher` lib: -0.005311 VS -0.005311\n
\n", "system": "" }, { "instruction": "Difference between Keras' BatchNormalization and PyTorch's BatchNorm2d?", "input": "", "output": "

Keras treats as parameters (weights) many things that will be \"saved/loaded\" in the layer.

\n\n

While both implementations naturally have the accumulated \"mean\" and \"variance\" of the batches, these values are not trainable with backpropagation.

\n\n

Nevertheless, these values are updated every batch, and Keras treats them as non-trainable weights, while PyTorch simply hides them. The term \"non-trainable\" here means \"not trainable by backpropagation\", but doesn't mean the values are frozen.

\n\n
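 This split is directly visible in PyTorch, where the running statistics live in buffers rather than parameters. A minimal sketch (the layer size of 32 is chosen to match the example above): ```python import torch.nn as nn bn = nn.BatchNorm2d(32) # trainable by backpropagation: gamma ('weight') and beta ('bias') print([name for name, _ in bn.named_parameters()]) # ['weight', 'bias'] # updated every batch, but hidden from parameters(): the running statistics print([name for name, _ in bn.named_buffers()]) # ['running_mean', 'running_var', 'num_batches_tracked'] ``` Both groups are included in state_dict(), so they are saved and loaded together, mirroring what Keras does with its non-trainable weights.   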

 In total there are 4 groups of \"weights\" for a BatchNormalization layer, considering the selected axis (default = -1, size=32 for your layer): 

\n\n\n\n

The advantage of having it like this in Keras is that when you save the layer, you also save the mean and variance values the same way you save all other weights in the layer automatically. And when you load the layer, these weights are loaded together.

\n", "system": "" }, { "instruction": "How can I load a model in PyTorch without redefining the model?", "input": "", "output": "

 You can attempt to export your model to TorchScript using tracing. This has limitations: because PyTorch constructs the model's computation graph on the fly, if you have any control-flow in your model then the exported model may not completely represent your Python module. TorchScript is only supported in PyTorch >= 1.0.0, though I would recommend using the latest version possible. 

\n\n

For example, a model without any conditional behavior is fine

\n\n
 import torch.nn.functional as F\nfrom torch import nn\n\nclass Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 10, 3, padding=1)\n self.bn1 = nn.BatchNorm2d(10)\n self.conv2 = nn.Conv2d(10, 20, 3, padding=1)\n self.bn2 = nn.BatchNorm2d(20)\n self.fc = nn.Linear(20 * 4 * 4, 2)\n\n def forward(self, x):\n x = self.conv1(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2, 2)\n x = self.bn1(x)\n x = self.conv2(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2, 2)\n x = self.bn2(x)\n x = self.fc(x.flatten(1))\n return x\n 
\n\n

We can export this as follows

\n\n
 import torch\nfrom torch import jit\n\nnet = Model()\n# ... train your model\n\n# put model in the mode you want to export (see bolded comment below)\nnet.eval()\n\n# print example output\nx = torch.ones(1, 3, 16, 16)\nprint(net(x))\n\n# create TorchScript by tracing the computation graph with an example input\nx = torch.ones(1, 3, 16, 16)\nnet_trace = jit.trace(net, x)\njit.save(net_trace, 'model.zip')\n 
\n\n

If successful then we can load our model into a new python script without using Model.

\n\n
 import torch\nfrom torch import jit\n\nnet = jit.load('model.zip')\n\n# print example output (should be same as during save)\nx = torch.ones(1, 3, 16, 16)\nprint(net(x))\n 
\n\n

 The loaded model is also trainable; however, it will only behave in the mode it was exported in. For example, in this case we exported our model in eval() mode, so using net.train() on the loaded module will have no effect. 

\n\n
\n\n

Control-flow

\n\n

 A model like this, which has behavior that changes between passes, won't be properly exported. Only the code path evaluated during jit.trace will be exported. 

\n\n
 import torch.nn.functional as F\nfrom torch import nn\n\nclass Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 10, 3, padding=1)\n self.bn1 = nn.BatchNorm2d(10)\n self.conv2 = nn.Conv2d(10, 20, 3, padding=1)\n self.bn2 = nn.BatchNorm2d(20)\n self.fca = nn.Linear(20 * 4 * 4, 2)\n self.fcb = nn.Linear(20 * 4 * 4, 2)\n\n self.use_a = True\n\n def forward(self, x):\n x = self.conv1(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2, 2)\n x = self.bn1(x)\n x = self.conv2(x)\n x = F.relu(x)\n x = F.max_pool2d(x, 2, 2)\n x = self.bn2(x)\n if self.use_a:\n x = self.fca(x.flatten(1))\n else:\n x = self.fcb(x.flatten(1))\n return x\n 
\n\n

We can still export the model as follows

\n\n
import torch\nfrom torch import jit\n\nnet = Model()\n# ... train your model\n\nnet.eval()\n\n# print example input\nx = torch.ones(1, 3, 16, 16)\nnet.use_a = True\nprint('a:', net(x))\nnet.use_a = False\nprint('b:', net(x))\n\n# save model\nx = torch.ones(1, 3, 16, 16)\nnet_trace = jit.trace(net, x)\njit.save(net_trace, \"model.ts\")\n
\n\n

In this case the example outputs are

\n\n
a: tensor([[-0.0959,  0.0657]], grad_fn=<AddmmBackward>)\nb: tensor([[ 0.1437, -0.0033]], grad_fn=<AddmmBackward>)\n
\n\n

However, loading

\n\n
import torch\nfrom torch import jit\n\nnet = jit.load(\"model.ts\")\n\n# will not match the output from before\nx = torch.ones(1, 3, 16, 16)\nnet.use_a = True\nprint('a:', net(x))\nnet.use_a = False\nprint('b:', net(x))\n
\n\n

results in

\n\n
a: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>)\nb: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>)\n
\n\n

Notice that the logic of the branch \"a\" is not present since net.use_a was False when jit.trace was called.

\n\n
\n\n

Scripting

\n\n

These limitations can be overcome but require some effort on your end. You can use the scripting functionality to ensure that all the logic is exported.

\n", "system": "" }, { "instruction": "Expected object of device type cuda but got device type cpu in Pytorch", "input": "", "output": "

 In the forward of your MSE_loss, you define a linear layer that is probably still on the CPU (you didn't provide an MCVE, so I can only assume): 

\n\n
self.linear = nn.Linear(output1.size()[0], 1)\n
\n\n

If you want to try and see if this is the problem, you can:

\n\n
self.linear = nn.Linear(output1.size()[0], 1).cuda()\n
\n\n

 However, if self.d is on the CPU, then it would fail again. To solve this, you could move the linear layer to the same device as the self.d tensor, like this: 

\n\n
def forward(self, output1, output2, labels):\n    self.labels = labels         \n    self.linear = nn.Linear(output1.size()[0], 1)\n\n    if self.metric == 'cos':\n        self.d = F.cosine_similarity(output1, output2)\n    elif self.metric == 'l1':\n        self.d = torch.abs(output1-output2)\n    elif self.metric == 'l2':\n        self.d = torch.sqrt((output1-output2)**2)\n\n    # move self.linear to the correct device\n    self.linear = self.linear.to(self.d.device)\n
\n", "system": "" }, { "instruction": "ModuleNotFoundError: No module named 'past' when installing tensorboard with pytorch 1.2", "input": "", "output": "

Following this issue: https://github.com/pytorch/pytorch/issues/22389,

\n\n

 Adding future to the list of requirements solved the problem: 

\n\n
# requirements.txt:\ntb-nightly\nfuture\n
\n\n

pip install -r requirements.txt

\n", "system": "" }, { "instruction": "How to save and load random number generator state in Pytorch?", "input": "", "output": "

You can use torch.get_rng_state and torch.set_rng_state

\n\n

When calling torch.get_rng_state you will get your random number generator state as a torch.ByteTensor.

\n\n

 You can then save this tensor somewhere in a file, and later load it and use torch.set_rng_state to restore the random number generator state. 

\n\n
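 A minimal sketch of the round trip (the file name is just an example): ```python import torch # save the generator state (a torch.ByteTensor) to a file torch.save(torch.get_rng_state(), 'rng_state.pt') a = torch.rand(3) # draw some numbers # restore the saved state and draw again torch.set_rng_state(torch.load('rng_state.pt')) b = torch.rand(3) print(torch.equal(a, b)) # True -- identical draws after restoring the state ```   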
\n\n

When using numpy you can of course do the same there using:
\nnumpy.random.get_state and numpy.random.set_state

\n", "system": "" }, { "instruction": "Pytorch inputs for nn.CrossEntropyLoss()", "input": "", "output": "

 For the most part, the PyTorch documentation does an amazing job of explaining the different functions; it usually includes expected input dimensions, as well as some simple examples. 
\nYou can find the description for nn.CrossEntropyLoss() here.

\n\n

To walk through your specific example, let us start by looking at the expected input dimension:

\n\n
\n

Input: (N,C) where C = number of classes. [...]

\n
\n\n

To add to this, N generally refers to the batch size (number of samples). To compare this to what you currently have:

\n\n
outputs.shape\n>>> torch.Size([2])\n
\n\n

I.e. currently we only have an input dimension of (2,), and not (1,2), as is expected by PyTorch. We can alleviate this by adding a \"fake\" dimension to our current tensor, by simply using .unsqueeze() like so:

\n\n
outputs = binary_model(tensor_input).unsqueeze(dim=0)\noutputs.shape\n>>> torch.Size([1,2])\n
\n\n

Now that we got that, let us look at the expected input for the targets:

\n\n
\n

Target: (N) [...]

\n
\n\n

 So we already got the right shape for this. If we try this, though, we will still encounter an error: 

\n\n
RuntimeError: Expected object of scalar type Long but got scalar type Float \n              for argument #2 'target'.\n
\n\n

 Again, the error message is rather expressive. The problem here is that PyTorch tensors (by default) are interpreted as torch.FloatTensors, but the input should be integers (or Long) instead. We can simply do this by specifying the exact type during tensor creation: 

\n\n
tensor_label = torch.LongTensor([dat[1]])\n
\n\n
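 Putting both fixes together, here is a self-contained sketch (the logits and the label are stand-ins for your model's output and dat[1]): ```python import torch import torch.nn as nn criterion = nn.CrossEntropyLoss() outputs = torch.randn(1, 2) # logits, shape (N, C) with N=1, C=2 tensor_label = torch.LongTensor([1]) # class index, shape (N,), dtype long loss = criterion(outputs, tensor_label) # no shape or dtype errors print(loss.item()) # a single non-negative scalar ```   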

I'm using PyTorch 1.0 under Linux fyi.

\n", "system": "" }, { "instruction": "How do I install PyTorch v1.0.0+ on Google Colab?", "input": "", "output": "

try the following code snippet (it works equally for the runtime with or without gpu)

\n\n
!pip install -q torch==1.0.0 torchvision\n
\n\n

 To check the version: 

\n\n
import torch\nprint(torch.__version__)\n
\n\n

 This should print version 1.0.0. 

\n\n

UPDATE

\n\n
!pip install torch\n
\n\n

Works fine now, as the most stable version is 1.0.0

\n", "system": "" }, { "instruction": "Confusion matrix and test accuracy for PyTorch Transfer Learning tutorial", "input": "", "output": "

Answer given by ptrblck of PyTorch community. Thanks a lot!

\n\n
nb_classes = 9\n\nconfusion_matrix = torch.zeros(nb_classes, nb_classes)\nwith torch.no_grad():\n    for i, (inputs, classes) in enumerate(dataloaders['val']):\n        inputs = inputs.to(device)\n        classes = classes.to(device)\n        outputs = model_ft(inputs)\n        _, preds = torch.max(outputs, 1)\n        for t, p in zip(classes.view(-1), preds.view(-1)):\n                confusion_matrix[t.long(), p.long()] += 1\n\nprint(confusion_matrix)\n
\n\n

To get the per-class accuracy:

\n\n
print(confusion_matrix.diag()/confusion_matrix.sum(1))\n
\n", "system": "" }, { "instruction": "AttributeError: 'Tensor' has no attribute: 'backwards'", "input": "", "output": "

It's loss.backward(), not loss.backwards().

\n", "system": "" }, { "instruction": "Using TPUs with PyTorch", "input": "", "output": "

Currently, it's not possible to use Cloud TPU with PyTorch since it's designed specifically for Tensorflow.

\n\n

But, according to this product news posted three days ago in the Google Cloud blog, \"engineers on Google\u2019s TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs\".

\n", "system": "" }, { "instruction": "Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'", "input": "", "output": "

I know where the problem is.

\n\n

 y should be of dtype torch.int64 (class indices), without one-hot encoding.\nCrossEntropyLoss() handles the one-hot encoding internally (while out is the predicted probability distribution, in a one-hot-like format). 

\n\n
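 A minimal sketch of the fix (shapes and values here are illustrative): ```python import torch import torch.nn as nn criterion = nn.CrossEntropyLoss() out = torch.randn(4, 3) # prediction scores, shape (N, C) y_wrong = torch.tensor([0., 2., 1., 2.]) # float targets caused the error in the question y = y_wrong.long() # fix: convert to torch.int64 class indices loss = criterion(out, y) print(loss.item()) ```   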

It can run now!

\n", "system": "" }, { "instruction": "tensorflow stop_gradient equivalent in pytorch", "input": "", "output": "

 You can use x.detach(), which is the PyTorch equivalent of tf.stop_gradient. 

\n", "system": "" }, { "instruction": "PyTorch : predict single example", "input": "", "output": "

 The code you posted is a simple demo that tries to reveal the inner mechanism of such deep learning frameworks. These frameworks, including PyTorch, Keras, Tensorflow and many more, automatically handle the forward calculation, and the tracking and applying of gradients for you, as long as you have defined the network structure. However, the code you showed still tries to do all of this manually. That's why predicting a single example feels cumbersome: you are still doing everything from scratch. 

\n

 In practice, we define a model class inherited from torch.nn.Module, initialize all the network components (like neural layers, GRU or LSTM layers, etc.) in the __init__ function, and define how these components interact with the network input in the forward function. 

\n

 Taking the example from the page you've provided: 

\n
 # Code in file nn/two_layer_net_module.py\nimport torch\n\nclass TwoLayerNet(torch.nn.Module):\n def __init__(self, D_in, H, D_out):\n """\n In the constructor we instantiate two nn.Linear modules and\n assign them as member variables.\n """\n super(TwoLayerNet, self).__init__()\n self.linear1 = torch.nn.Linear(D_in, H)\n self.linear2 = torch.nn.Linear(H, D_out)\n\n def forward(self, x):\n """\n In the forward function we accept a Tensor of input data and we must return\n a Tensor of output data. We can use Modules defined in the constructor as\n well as arbitrary (differentiable) operations on Tensors.\n """\n h_relu = self.linear1(x).clamp(min=0)\n y_pred = self.linear2(h_relu)\n return y_pred\n\n# N is batch size; D_in is input dimension;\n# H is hidden dimension; D_out is output dimension.\nN, D_in, H, D_out = 64, 1000, 100, 10\n\n# Create random Tensors to hold inputs and outputs\nx = torch.randn(N, D_in)\ny = torch.randn(N, D_out)\n\n# Construct our model by instantiating the class defined above.\nmodel = TwoLayerNet(D_in, H, D_out)\n\n# Construct our loss function and an Optimizer. The call to model.parameters()\n# in the SGD constructor will contain the learnable parameters of the two\n# nn.Linear modules which are members of the model.\nloss_fn = torch.nn.MSELoss(size_average=False)\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-4)\nfor t in range(500):\n # Forward pass: Compute predicted y by passing x to the model\n y_pred = model(x)\n\n # Compute and print loss\n loss = loss_fn(y_pred, y)\n print(t, loss.item())\n\n # Zero gradients, perform a backward pass, and update the weights.\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n 
\n

 The code defined a model named TwoLayerNet; it initializes two linear layers in the __init__ function and further defines how these two linear layers interact with the input x in the forward function. 

\n

Having the model defined, we can perform a single feed-forward operation as follows. Say xu contains a single unseen example:

\n
xu = torch.randn(D_in)\n
\n

Then this performs the prediction:

\n
y_pred = model(torch.atleast_2d(xu))\n
\n", "system": "" }, { "instruction": "Resize PyTorch Tensor", "input": "", "output": "

You can instead choose to go with tensor.reshape(new_shape) or torch.reshape(tensor, new_shape) as in:

\n\n
# a `Variable` tensor\nIn [15]: ten = torch.randn(6, requires_grad=True)\n\n# this would throw RuntimeError error\nIn [16]: ten.resize_(2, 3)\n---------------------------------------------------------------------------\nRuntimeError                              Traceback (most recent call last)\n<ipython-input-16-094491c46baa> in <module>()\n----> 1 ten.resize_(2, 3)\n\nRuntimeError: cannot resize variables that require grad\n
\n\n
\n\n

The above RuntimeError can be resolved or avoided by using tensor.reshape(new_shape)

\n\n
In [17]: ten.reshape(2, 3)\nOut[17]: \ntensor([[-0.2185, -0.6335, -0.0041],\n        [-1.0147, -1.6359,  0.6965]])\n\n# yet another way of changing tensor shape\nIn [18]: torch.reshape(ten, (2, 3))\nOut[18]: \ntensor([[-0.2185, -0.6335, -0.0041],\n        [-1.0147, -1.6359,  0.6965]])\n
\n", "system": "" }, { "instruction": "UserWarning: Implicit dimension choice for log_softmax has been deprecated", "input": "", "output": "

 You will almost always need the last dimension when you compute the cross-entropy, so your line may look like: 

\n\n
torch.nn.functional.log_softmax(x, -1)\n
\n", "system": "" }, { "instruction": "How to use groups parameter in PyTorch conv2d function", "input": "", "output": "

If you want to apply a per-channel convolution then your out-channel should be the same as your in-channel. This is expected, considering each of your input channels creates a separate output channel that it corresponds to.

\n\n

In short, this will work

\n\n
import torch\nimport torch.nn.functional as F\n\nfilters = torch.autograd.Variable(torch.randn(3,1,3,3))\ninputs = torch.autograd.Variable(torch.randn(1,3,10,10))\nout = F.conv2d(inputs, filters, padding=1, groups=3)\n
\n\n

whereas, filters of size (2, 1, 3, 3) or (1, 1, 3, 3) will not work.

\n\n

Additionally, you can also make your out-channel a multiple of in-channel. This works for instances where you want to have multiple convolution filters for each input channel.

\n\n

 However, this only makes sense if it is a multiple. If not, then pytorch falls back to its closest multiple, a number less than what you specified. This is once again expected behavior. For example, a filter of size (4, 1, 3, 3) or (5, 1, 3, 3) will result in an out-channel of size 3. 

\n", "system": "" }, { "instruction": "Any pytorch tools to monitor neural network's training?", "input": "", "output": "

PyTorch 1.1.0 supports TensorBoard natively with torch.utils.tensorboard. The API is very similar to tensorboardX. See the documentation for more details.

\n", "system": "" }, { "instruction": "How padded sequences given as packed sequences are dealt by RNN in pytorch?", "input": "", "output": "

For the second question: hidden states at padded sequences will not be computed.

\n\n

To answer how is that happening, let's first see what pack_padded_sequence does for us:

\n\n
from torch.nn.utils.rnn import pad_sequence, pad_packed_sequence, pack_padded_sequence\n\nraw = [ torch.ones(25, 300) / 2, \n        torch.ones(22, 300) / 2.3, \n        torch.ones(15, 300) / 3.2 ]\npadded = pad_sequence(raw)  # size: [25, 3, 300]\n\nlengths = torch.as_tensor([25, 22, 15], dtype=torch.int64)\npacked = pack_padded_sequence(padded, lengths)\n
\n\n

 So far we have randomly created three tensors with different lengths (timesteps in the context of RNN), padded them to the same length, and then packed them. Now if we run 

\n\n
print(padded.size())\nprint(packed.data.size()) # packed.data refers to the \"packed\" tensor\n
\n\n

we will see:

\n\n
torch.Size([25, 3, 300])\ntorch.Size([62, 300])\n
\n\n

 Obviously 62 does not come from 25 * 3. What pack_padded_sequence does is keep only the meaningful timesteps of each batch entry, according to the lengths tensor we passed to it (i.e. if we passed [25, 25, 25], the size of packed.data would be [75, 300] even though the raw tensors do not change). In short, the RNN does not even see the pad timesteps when given a pack_padded_sequence. 

\n\n

And now let's see what's the difference after we pass padded and packed to rnn

\n\n
rnn = torch.nn.RNN(input_size=300, hidden_size=2)\npadded_outp, padded_hn = rnn(padded) # size: [25, 3, 2] / [1, 3, 2]\npacked_outp, packed_hn = rnn(packed) # 'PackedSequence' Obj / [1, 3, 2]\nundo_packed_outp, _ = pad_packed_sequence(packed_outp)\n\n# return \"h_n\"\nprint(padded_hn) # tensor([[[-0.2329, -0.6179], [-0.1158, -0.5430],[ 0.0998, -0.3768]]]) \nprint(packed_hn) # tensor([[[-0.2329, -0.6179], [ 0.5622,  0.1288], [ 0.5683,  0.1327]]]\n\n# the output of last timestep (the 25-th timestep)\nprint(padded_outp[-1]) # tensor([[[-0.2329, -0.6179], [-0.1158, -0.5430],[ 0.0998, -0.3768]]]) \nprint(undo_packed_outp.data[-1]) # tensor([[-0.2329, -0.6179], [ 0.0000,  0.0000], [ 0.0000,  0.0000]]\n
\n\n

 The values of padded_hn and packed_hn are different, since rnn DOES compute the pads for padded but not for packed (the PackedSequence object). This can also be observed from the last hidden state: all three batch entries in padded got a non-zero last hidden state even though some lengths are less than 25, whereas for packed the last hidden state of the shorter entries is not computed (i.e. it stays 0). 

\n\n

p.s. another observation:

\n\n
print([(undo_packed_outp[:, i, :].sum(-1) != 0).sum() for i in range(3)])\n
\n\n

 would give us [tensor(25), tensor(22), tensor(15)], which aligns with the actual lengths of our inputs. 

\n", "system": "" }, { "instruction": "when is a pytorch custom function needed (rather than only a module)?", "input": "", "output": "

 This information is gathered and summarised from the official PyTorch Documentation. 

\n\n

 torch.autograd.Function really lies at the heart of the autograd package in PyTorch. Any graph you build in PyTorch and any operation you conduct on Variables in PyTorch is based on a Function. Any Function requires an __init__(), forward() and backward() method (see more here: http://pytorch.org/docs/notes/extending.html). This enables PyTorch to compute results and compute gradients for Variables. 

\n\n

 nn.Module() in contrast is really just a convenience for organising your model, your different layers, etc. For example, it organises all the trainable parameters in your model in .parameters() and allows you to add another layer to a model easily, etc. It is not the place where you define a backward method, because in the forward() method you're supposed to use subclasses of Function(), for which you have already defined backward(). Hence, if you have specified the order of operations in forward(), PyTorch already knows how to back-propagate gradients. 

\n\n

Now, when should you use what?

\n\n

 If you have an operation that is just a composition of existing implemented functions in PyTorch (like your thing above), there's really no point adding any subclass to Function() yourself, because you can just stack operations up and build a dynamic graph. It is, however, a sensible idea to bunch these operations together. If any operation involves trainable parameters (for example a linear layer of a neural network), you should subclass nn.Module() and bunch your operations together in the forward method. This allows you to easily access the parameters (as outlined above) for use with torch.optim, etc. If you don't have any trainable parameters, I would probably still bunch the operations together, but a standard Python function, where you take care of the instantiation of each operation you use, would be sufficient. 

\n\n

 If you have a new custom operation (e.g. a new stochastic layer with some complicated sampling procedure), you should subclass Function() and define __init__(), forward() and backward() to tell PyTorch how to compute results and how to compute gradients when you use this operation. Afterwards, you should either create a functional version that takes care of instantiating the function so you can use your operation, or create a module if your operation has trainable parameters. Again, you can read more about this in the link above. 

\n", "system": "" }, { "instruction": "UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR", "input": "", "output": "

In version 2.3.0 of pytorch, it prints this unwanted warning even if no exception is thrown: see https://github.com/pytorch/pytorch/pull/125790

\n

As you mentioned, though, the training is processing correctly. If you want to get rid of this warning, you should revert to torch 2.2.2 (you then also have to revert torchvision to 0.17.2):

\n
pip3 install torchvision==0.17.2\npip3 install torch==2.2.2\n
\n", "system": "" }, { "instruction": "How retain_grad() in pytorch works? I found its position changes the grad result", "input": "", "output": "

Okay so what's going on is really weird.

\n

What .retain_grad() essentially does is convert any non-leaf tensor into a leaf tensor, such that it contains a .grad attribute (since by default, pytorch computes gradients to leaf tensors only).

\n

Hence, in your first example, after calling y.retain_grad(), it basically converted y into a leaf tensor with an accessible .grad attribute.
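A minimal illustration of that first case (toy numbers of my own):

```python
import torch

a = torch.tensor(2.0, requires_grad=True)  # leaf tensor
y = a * 3                                  # non-leaf: .grad not kept by default
y.retain_grad()                            # ask autograd to populate y.grad anyway
y.backward()

print(y.grad)  # tensor(1.)
print(a.grad)  # tensor(3.)
```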

\n

However, in your second example, you initially converted the entire y tensor into a leaf tensor; then, you created a non-leaf tensor (y[1]) within your leaf tensor (y), which is what caused the confusion.

\n
y = torch.zeros((10))  # y is a non-leaf tensor\n\ny[0] = a  # y[0] is a non-leaf tensor\ny.retain_grad()  # y is a leaf tensor (including y[1])\ny[1] = y[0] * 2  # y[1] is a non-leaf tensor, BUT y[0], y[2], y[3], ..., y[9] are all leaf tensors!\n
\n

The confusing part is:

\n

y[1] after calling y.retain_grad() is now a leaf tensor with a .grad attribute. However, y[1] after the computation (y[1] = y[0] * 2) is now not a leaf tensor with a .grad attribute; it is now treated as a new non-leaf variable/tensor.

\n

Therefore, when calling loss.backward(), the chain rule of the loss w.r.t. y, and in particular the chain rule of the loss w.r.t. the leaf y[1], now looks something like this:

\n
\n

\"Chain

\n", "system": "" }, { "instruction": "Is GradScaler necessary with Mixed precision training with pytorch?", "input": "", "output": "

Short answer: yes, your model may fail to converge without GradScaler().

\n
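For reference, the standard mixed-precision loop with GradScaler looks like this (a sketch based on the PyTorch AMP recipe; the model, data and optimizer are toy stand-ins of my own, and the scaler is disabled on CPU-only machines):

```python
import torch

# Toy stand-ins so the loop is runnable; only the scaler pattern matters.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.MSELoss()
data, target = torch.randn(8, 4), torch.randn(8, 1)

use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op when disabled

for _ in range(3):
    optimizer.zero_grad()
    with torch.autocast("cuda" if use_cuda else "cpu", enabled=use_cuda):
        loss = criterion(model(data), target)
    scaler.scale(loss).backward()  # scale up so small FP16 grads don't flush to zero
    scaler.step(optimizer)         # unscales first; skips the step on inf/NaN grads
    scaler.update()                # adapts the scale factor for the next iteration
```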

There are three basic problems with using FP16:

\n\n", "system": "" }, { "instruction": "HuggingFace: ValueError: expected sequence of length 165 at dim 1 (got 128)", "input": "", "output": "

I fixed this by changing the tokenize function to:

\n
def tokenize_function(examples):\n    return tokenizer(examples['text'], padding='max_length', truncation=True, max_length=max_length)\n
\n

(note the padding argument). Also, I used a data collator like so:

\n
data_collator = DataCollatorForLanguageModeling(\n    tokenizer=tokenizer, mlm=True, mlm_probability=0.15\n)\ntrainer = Trainer(\n        model=model,\n        args=training_args,\n        data_collator=data_collator,\n        train_dataset=train_dataset,\n        eval_dataset=eval_dataset\n)\n
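For background on the error itself: it is raised when trying to build a single tensor out of rows of different lengths, which is exactly what fixed-length padding prevents. A minimal reproduction without transformers:

```python
import torch

rows = [[1, 2, 3], [4, 5]]  # ragged: lengths 3 and 2
try:
    torch.tensor(rows)      # raises: expected sequence of length 3 at dim 1 (got 2)
except ValueError as e:
    print(e)

# Pad every row to a common length and the conversion succeeds:
padded = [r + [0] * (3 - len(r)) for r in rows]
print(torch.tensor(padded).shape)  # torch.Size([2, 3])
```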
\n", "system": "" }, { "instruction": "Understanding of Pytorch NLLLOSS", "input": "", "output": "

Indeed, no log is applied when computing the result of nn.NLLLoss, so the name can be a little confusing. However, I believe it is named this way because it expects to receive log-probabilities:

\n
\n

The input given through a forward call is expected to contain log-probabilities of each class. - docs

\n
\n

In the end, it does not make much sense to have it in the name, since you might as well apply this function to non-log-probabilities...
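A small demonstration of both points: nll_loss applies no log itself, it just picks out (and negates) the value at the target index, so you normally feed it the output of log_softmax:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 3.0]])
target = torch.tensor([2])

log_probs = F.log_softmax(logits, dim=1)
loss = F.nll_loss(log_probs, target)

# nll_loss is just -log_probs[i, target[i]], averaged over the batch:
manual = -log_probs[0, target[0]]
print(loss.item(), manual.item())  # identical
```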

\n", "system": "" }, { "instruction": "What does the difference between 'torch.backends.cudnn.deterministic=True' and 'torch.set_deterministic(True)'?", "input": "", "output": "

torch.backends.cudnn.deterministic=True only applies to CUDA convolution operations, and nothing else. Therefore, no, it will not guarantee that your training process is deterministic, since you're also using torch.nn.MaxPool3d, whose backward function is nondeterministic for CUDA.

\n

torch.set_deterministic(), on the other hand, affects all the normally-nondeterministic operations listed here (note that set_deterministic has been renamed to use_deterministic_algorithms in 1.8): https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms

\n

As the documentation states, some of the listed operations don't have a deterministic implementation. So if torch.use_deterministic_algorithms(True) is set, they will throw an error.
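A quick sketch of the global switch (note the error is only triggered when a listed op actually runs without a deterministic implementation, which typically requires CUDA):

```python
import torch

torch.use_deterministic_algorithms(True)   # renamed from torch.set_deterministic in 1.8
assert torch.are_deterministic_algorithms_enabled()

# From here on, listed ops without a deterministic implementation
# (e.g. the CUDA backward of MaxPool3d) raise RuntimeError when called.

torch.use_deterministic_algorithms(False)  # restore the default
```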

\n

If you need to use nondeterministic operations like torch.nn.MaxPool3d then, at the moment, there is no way for your training process to be deterministic, unless you write a custom deterministic implementation yourself. Or you could open a GitHub issue requesting a deterministic implementation: https://github.com/pytorch/pytorch/issues

\n

In addition, you might want to check out this page: https://pytorch.org/docs/stable/notes/randomness.html

\n", "system": "" }, { "instruction": "How to handle odd resolutions in Unet architecture PyTorch", "input": "", "output": "

This is a very common problem in segmentation networks, where skip-connections are often involved in the decoding process. Networks usually (depending on the actual architecture) require an input size whose side lengths are integer multiples of the largest stride (8, 16, 32, etc.).

\n

There are two main ways:

\n
    \n
1. Resize input to the nearest feasible size.\n
2. Pad the input to the next larger feasible size.\n
\n

I prefer (2) because (1) can cause small changes in the pixel level for all the pixels, leading to unnecessary blurriness. Note that we usually need to recover the original shape afterward in both methods.

\n

My favorite code snippet for this task (symmetric padding for height/width):

\n
import torch\nimport torch.nn.functional as F\n\ndef pad_to(x, stride):\n    h, w = x.shape[-2:]\n\n    if h % stride > 0:\n        new_h = h + stride - h % stride\n    else:\n        new_h = h\n    if w % stride > 0:\n        new_w = w + stride - w % stride\n    else:\n        new_w = w\n    lh, uh = int((new_h-h) / 2), int(new_h-h) - int((new_h-h) / 2)\n    lw, uw = int((new_w-w) / 2), int(new_w-w) - int((new_w-w) / 2)\n    pads = (lw, uw, lh, uh)\n\n    # zero-padding by default.\n    # See others at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad\n    out = F.pad(x, pads, "constant", 0)\n\n    return out, pads\n\ndef unpad(x, pad):\n    if pad[2]+pad[3] > 0:\n        x = x[:,:,pad[2]:-pad[3],:]\n    if pad[0]+pad[1] > 0:\n        x = x[:,:,:,pad[0]:-pad[1]]\n    return x\n
\n

A test snippet:

\n
x = torch.zeros(4, 3, 1080, 1920) # Raw data\nx_pad, pads = pad_to(x, 16) # Padded data, feed this to your network \nx_unpad = unpad(x_pad, pads) # Un-pad the network output to recover the original shape\n\nprint('Original: ', x.shape)\nprint('Padded: ', x_pad.shape)\nprint('Recovered: ', x_unpad.shape)\n
\n

Output:

\n
Original:  torch.Size([4, 3, 1080, 1920])\nPadded:  torch.Size([4, 3, 1088, 1920])\nRecovered:  torch.Size([4, 3, 1080, 1920])\n
\n

Reference: https://github.com/seoungwugoh/STM/blob/905f11492a6692dd0d0fa395881a8ec09b211a36/helpers.py#L33

\n", "system": "" }, { "instruction": "Difference between versions 9.2,10.1,10.2,11.0 of cuda for PyTorch 1.7", "input": "", "output": "

PyTorch doesn't use the system's CUDA installation when installed from a package manager (either conda or pip). Instead, it comes with a copy of the CUDA runtime and will work as long as your system is compatible with that version of PyTorch. By compatible I mean that the GPU supports the particular version of CUDA and the GPU's compute capability is one that the PyTorch binaries (for the selected version) are compiled with support for.

\n

Therefore the version reported by nvcc (the version installed on the system) is basically irrelevant. The version you should be looking at is

\n
import torch\n# print the version of CUDA being used by pytorch\nprint(torch.version.cuda)\n
\n

The only time the system's version of CUDA should matter is if you compiled PyTorch from source.

\n

As for which version of CUDA to select: you will probably want the newest version that your system is compatible with. This is because newer versions generally include performance improvements compared to older versions.

\n", "system": "" }, { "instruction": "LSTM Autoencoder problems", "input": "", "output": "

Okay, after some debugging I think I know the reasons.

\n

TLDR

\n\n

Analysis

\n

Code used

\n

Let's start with the code (model is the same):

\n
import seaborn as sns\nimport matplotlib.pyplot as plt\n\ndef get_data(subtract: bool = False):\n    # (1, 14, 5)\n    input_tensor = torch.tensor(\n        [\n            [0.5122, 0.0360, 0.7027, 0.0721, 0.1892],\n            [0.5177, 0.0833, 0.6574, 0.1204, 0.1389],\n            [0.4643, 0.0364, 0.6242, 0.1576, 0.1818],\n            [0.4375, 0.0133, 0.5733, 0.1867, 0.2267],\n            [0.4838, 0.0625, 0.6042, 0.1771, 0.1562],\n            [0.4804, 0.0175, 0.6798, 0.1053, 0.1974],\n            [0.5030, 0.0445, 0.6712, 0.1438, 0.1404],\n            [0.4987, 0.0490, 0.6699, 0.1536, 0.1275],\n            [0.4898, 0.0388, 0.6704, 0.1330, 0.1579],\n            [0.4711, 0.0390, 0.5877, 0.1532, 0.2201],\n            [0.4627, 0.0484, 0.5269, 0.1882, 0.2366],\n            [0.5043, 0.0807, 0.6646, 0.1429, 0.1118],\n            [0.4852, 0.0606, 0.6364, 0.1515, 0.1515],\n            [0.5279, 0.0629, 0.6886, 0.1514, 0.0971],\n        ]\n    ).unsqueeze(0)\n\n    if subtract:\n        initial_values = input_tensor[:, 0, :]\n        input_tensor -= torch.roll(input_tensor, 1, 1)\n        input_tensor[:, 0, :] = initial_values\n    return input_tensor\n\n\nif __name__ == "__main__":\n    torch.manual_seed(0)\n\n    HIDDEN_SIZE = 10\n    SUBTRACT = False\n\n    input_tensor = get_data(SUBTRACT)\n    model = LSTMEncoderDecoder(input_tensor.shape[-1], HIDDEN_SIZE)\n    optimizer = torch.optim.Adam(model.parameters())\n    criterion = torch.nn.MSELoss()\n    for i in range(1000):\n        outputs = model(input_tensor)\n        loss = criterion(outputs, input_tensor)\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()\n        print(f"{i}: {loss}")\n        if loss < 1e-4:\n            break\n\n    # Plotting\n    sns.lineplot(data=outputs.detach().numpy().squeeze())\n    sns.lineplot(data=input_tensor.detach().numpy().squeeze())\n    plt.show()\n
\n

What it does:

\n\n

We will only vary HIDDEN_SIZE and SUBTRACT parameters!

\n

NO SUBTRACT, SMALL MODEL

\n\n

In this case we get a straight line. The model is unable to fit and grasp the phenomena presented in the data (hence the flat lines you mentioned).

\n

\"enter

\n

1000 iterations limit reached

\n

SUBTRACT, SMALL MODEL

\n\n

Targets are now far from flat lines, but the model is unable to fit due to too little capacity.

\n

\"enter

\n

1000 iterations limit reached

\n

NO SUBTRACT, LARGER MODEL

\n\n

It got a lot better and our target was hit after 942 steps. No more flat lines, model capacity seems quite fine (for this single example!)

\n

\"enter

\n

SUBTRACT, LARGER MODEL

\n\n

Although the graph does not look that pretty, we got to the desired loss after only 215 iterations.

\n

\"enter

\n

Finally

\n\n

Questions

\n
\n

Okay, question 1: You are saying that for variable x in the time\nseries, I should train the model to learn x[i] - x[i-1] rather than\nthe value of x[i]? Am I correctly interpreting?

\n
\n

Yes, exactly. Differencing removes the urge of the neural network to base its predictions too much on the past timestep (by simply taking the last value and maybe changing it a little).

\n
\n

Question 2: You said my calculations for zero bottleneck were\nincorrect. But, for example, let's say I'm using a simple dense\nnetwork as an auto encoder. Getting the right bottleneck indeed\ndepends on the data. But if you make the bottleneck the same size as\nthe input, you get the identity function.

\n
\n

Yes, assuming there is no non-linearity involved, which makes the thing harder (see here for a similar case). In the case of LSTMs there are non-linearities; that's one point.

\n

Another one is that we are accumulating timesteps into a single encoder state. So essentially we would have to accumulate the identities of all timesteps into a single hidden and cell state, which is highly unlikely.

\n

One last point, depending on the length of sequence, LSTMs are prone to forgetting some of the least relevant information (that's what they were designed to do, not only to remember everything), hence even more unlikely.

\n
\n

Is num_features * num_timesteps not a bottle neck of the same size as\nthe input, and therefore shouldn't it facilitate the model learning\nthe identity?

\n
\n

It is, but it assumes you have num_timesteps for each data point, which is rarely the case (though it might be here). Why the identity is hard to learn with non-linearities in the network was answered above.

\n

One last point about identity functions: if they were actually easy to learn, ResNet architectures would be unlikely to succeed, because without residual connections the network could simply converge to the identity and make "small fixes" to the output, which is not the case.

\n
\n

I'm curious about the statement : "always use difference of timesteps\ninstead of timesteps" It seem to have some normalizing effect by\nbringing all the features closer together but I don't understand why\nthis is key ? Having a larger model seemed to be the solution and the\nsubstract is just helping.

\n
\n

The key here was, indeed, increasing model capacity. The subtraction trick really depends on the data. Let's imagine an extreme situation:

\n\n

What would the neural network do (what is easiest here)? It would probably discard this change of 1 or smaller as noise and just predict 1000 for all of them (especially if some regularization is in place), as being off by 1/1000 is not much.

\n

What if we subtract? The whole neural network loss is then in the [0, 1] range for each timestep instead of [0, 1001], hence it is more costly to be wrong.

\n

And yes, come to think of it, it is connected to normalization in some sense.

\n", "system": "" }, { "instruction": "How to compute Hessian of the loss w.r.t. the parameters in PyTorch using autograd.grad", "input": "", "output": "

PyTorch recently-ish added a functional higher level API to torch.autograd which provides torch.autograd.functional.hessian(func, inputs,...) to directly evaluate the hessian of the scalar function func with respect to its arguments at a location specified by inputs, a tuple of tensors corresponding to the arguments of func. hessian itself does not support automatic differentiation, I believe.

\n

Note, however, that as of March 2021 it is still in beta.

\n
\n

Full example using torch.autograd.functional.hessian to create a score-test for non-zero mean (As a (bad) alternative to the one sample t-test):

\n
import numpy as np\nimport torch, torchvision\nfrom torch.autograd import Variable, grad\nimport torch.distributions as td\nimport math\nfrom torch.optim import Adam\nimport scipy.stats\n\n\nx_data = torch.randn(100)+0.0 # observed data (here sampled under H0)\n\nN = x_data.shape[0] # number of observations\n\nmu_null = torch.zeros(1)\nsigma_null_hat = Variable(torch.ones(1), requires_grad=True)\n\ndef log_lik(mu, sigma):\n  return td.Normal(loc=mu, scale=sigma).log_prob(x_data).sum()\n\n# Find theta_null_hat by some gradient descent algorithm (in this case a closed-form expression would be trivial to obtain (see below)):\nopt = Adam([sigma_null_hat], lr=0.01)\nfor epoch in range(2000):\n    opt.zero_grad() # reset gradient accumulator or optimizer\n    loss = - log_lik(mu_null, sigma_null_hat) # compute log likelihood with current value of sigma_null_hat  (= Forward pass)\n    loss.backward() # compute gradients (= Backward pass)\n    opt.step()      # update sigma_null_hat\n    \nprint(f'parameter fitted under null: sigma: {sigma_null_hat}, expected: {torch.sqrt((x_data**2).mean())}')\n#> parameter fitted under null: sigma: tensor([0.9260], requires_grad=True), expected: 0.9259940385818481\n\ntheta_null_hat = (mu_null, sigma_null_hat)\n\nU = torch.tensor(torch.autograd.functional.jacobian(log_lik, theta_null_hat)) # Jacobian (= vector of partial derivatives of log likelihood w.r.t. the parameters (of the full/alternative model)) = score\nI = -torch.tensor(torch.autograd.functional.hessian(log_lik, theta_null_hat)) / N # estimate of the Fisher information matrix\nS = torch.t(U) @ torch.inverse(I) @ U / N # test statistic, often named "LM" (as in Lagrange multiplier), would be zero at the maximum likelihood estimate\n\npval_score_test = 1 - scipy.stats.chi2(df = 1).cdf(S) # S asymptotically follows a chi^2 distribution with degrees of freedom equal to the number of parameters fixed under H0\nprint(f'p-value Chi^2-based score test: {pval_score_test}')\n#> p-value Chi^2-based score test: 0.9203232752568568\n\n# comparison with Student's t-test:\npval_t_test = scipy.stats.ttest_1samp(x_data, popmean = 0).pvalue\nprint(f'p-value Student\'s t-test: {pval_t_test}')\n#> p-value Student's t-test: 0.9209265268946605\n
\n", "system": "" }, { "instruction": "Windows keeps crashing when trying to install PyTorch via pip", "input": "", "output": "

After troubleshooting and a lot of restarts, it seems the issue came from pip trying to load a pre-downloaded file. Essentially, the first time I ran the installation command, pip downloaded files for pytorch but did not install pytorch due to some user privilege issue. The fix is to add --no-cache-dir in the pip install command. This will override the cache (pre-downloaded files) and download the files all over again.

\n

For me specifically, I also needed to add --user.

\n

In other words, the command went from

\n
pip install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html\n
\n

to

\n
pip --no-cache-dir install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html --user\n
\n", "system": "" }, { "instruction": "How to get the total number of batch iteration from pytorch dataloader?", "input": "", "output": "

len(dataloader) returns the total number of batches. It depends on the __len__ function of your dataset, so make sure it is set correctly.
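For example (a toy dataset of my own, with drop_last left at its default of False):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float())
loader = DataLoader(dataset, batch_size=3)

print(len(dataset))  # 10 samples
print(len(loader))   # 4 batches: ceil(10 / 3); the last batch has only 1 sample
```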

\n", "system": "" }, { "instruction": "What does TensorFlow's `conv2d_transpose()` operation do?", "input": "", "output": "

The best explanation I've seen online of how convolution transpose works is here.

\n\n

I'll give my own short description: it applies convolution with a fractional stride. In other words, it spaces out the input values (with zeroes) and then applies the filter over a region that's potentially smaller than the filter size.
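The shape behaviour is easiest to see with a stride-2 pair. This sketch uses PyTorch's equivalent layers, since the idea is framework-independent (the kernel/padding choices here are mine):

```python
import torch

# A stride-2 convolution halves the spatial size; its transpose maps it back up.
x = torch.randn(1, 8, 16, 16)
down = torch.nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1)
up = torch.nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2,
                              padding=1, output_padding=1)

y = down(x)
print(y.shape)      # torch.Size([1, 16, 8, 8])
print(up(y).shape)  # torch.Size([1, 8, 16, 16]) -- learned upsampling
```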

\n\n

As for why one would want to use it: it can be used as a sort of upsampling with learned weights, as opposed to bilinear interpolation or some other fixed form of upsampling.

\n", "system": "" }, { "instruction": "How can I visualize the weights(variables) in cnn in Tensorflow?", "input": "", "output": "

To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a tf.train.SummaryWriter, and visualize the log using TensorBoard.

\n\n

Let's say you have the following (simplified) program:

\n\n
filter = tf.Variable(tf.truncated_normal([8, 8, 3, 32]))  # [height, width, in_channels, out_channels]\nimages = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])\n\nconv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding=\"SAME\")\n\n# More ops...\nloss = ...\noptimizer = tf.GradientDescentOptimizer(0.01)\ntrain_op = optimizer.minimize(loss)\n\n# Transpose to [out_channels, height, width, in_channels] so that each\n# depth-3 filter slice can be rendered as one RGB image.\nfilter_summary = tf.image_summary('filter', tf.transpose(filter, [3, 0, 1, 2]))\n\nsess = tf.Session()\nsummary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)\nfor i in range(10000):\n  sess.run(train_op)\n  if i % 10 == 0:\n    # Log a summary every 10 steps.\n    summary_writer.add_summary(sess.run(filter_summary), i)\n
\n\n

After doing this, you can start TensorBoard to visualize the logs in /tmp/logs, and you will be able to see a visualization of the filter.

\n\n

Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the tf.split() op to split the filter on the depth dimension, and generate one image summary per depth.

\n", "system": "" }, { "instruction": "Error while importing Tensorflow in Python 2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found'", "input": "", "output": "

I've just managed to install tensorflow 0.12rc0 on CentOS 6.5 with glibc 2.12, without root privileges. Simply installing the tensorflow binary via pip was giving me an error related to the GLIBC version as well.

\n\n

Basically, you have 4 options for dealing with this (each with some advantages and disadvantages):

\n\n

Option 1 - Upgrade your system GLIBC globally.

\n\n

This is probably the best option, if your system supports it, you have root privileges, and you are confident that the upgrade won't break anything for some weird reason. Ultimately, this goes up to upgrading the whole Linux distribution. Here's a nice short list of default GLIBC versions on popular distributions.

\n\n

Option 2 - Add second GLIBC to your system

\n\n

Compile or download the binary. The simplest and most straightforward option, especially if you only need to run a few simple scripts.

\n\n\n\n

Option 3 - Patch tensorflow

\n\n

This may work for TF 0.6.0, but you would probably have to start again from scratch, when each new tensorflow version is released. E.g. here's a fix for 0.9.0.

\n\n

Option 4 - Compile tensorflow from source

\n\n

If you re-compile it from source and link against your existing GLIBC, the newer GLIBC is no longer needed. Somehow, this option was not mentioned in any answer here yet. Imho, this is the best option, both \"in general\" and \"specifically for tensorflow\".

\n\n\n\n

A quick summary of \"building tensorflow on an outdated system\":

\n\n

Although the official guide provides an \"installing from sources\" section, there are a few tricks you need to build it on an outdated system. Here I assume that you do not have root privileges (if you do, you would probably be able to install the same prerequisites with a package manager, rather than building them manually from source).

\n\n

I found two well-documented success stories: #1, #2 and a number of useful posts on the official github (mostly about a set of libraries to link inside the binary): #1, #2, #3, #4. I had to combine the tricks described there to successfully compile TF in my case.

\n\n
    \n
1. First of all, check your gcc --version, and verify that it supports c++11. Mine was 4.4.7, so it won't work. I've downloaded the gcc-4.9.4 source code and compiled it. This step is pretty straightforward, but the compilation itself may take a few hours. As a workaround for an issue in bazel, I've compiled gcc with hardcoded paths to as, ld and nm. However, you may try other workarounds: (1, 2).

    \n\n
    #!/bin/sh\n\nunset LIBRARY_PATH CPATH C_INCLUDE_PATH \nunset PKG_CONFIG_PATH CPLUS_INCLUDE_PATH INCLUDE LD_LIBRARY_PATH\n\ncd gcc-4.9.4\n./contrib/download_prerequisites\n\nmkdir objdir\ncd objdir\n\n\n# I've added --disable-multilib to fix the following error:\n# /usr/bin/ld: crt1.o: No such file: No such file or directory\n# collect2: ld returned 1 exit status\n# configure: error: I suspect your system does not have 32-bit \n# developement libraries (libc and headers). If you have them,\n# rerun configure with --enable-multilib. If you do not have them, \n# and want to build a 64-bit-only compiler, rerun configure \n# with --disable-multilib.           \n\n../configure --prefix=$HOME/opt/gcc-4.9.4 \\\n             --disable-multilib \\\n             --disable-nls \\\n             --enable-languages=c,c++ \\\n             --with-ld=/usr/bin/ld \\\n             --with-nm=/usr/bin/nm \\\n             --with-as=/usr/bin/as\n\nmake        \nmake install\n
  2. \n
3. Check your java --version. Bazel requires JDK 8; install it if necessary. (They still provide some jdk7-related downloads for bazel-0.4.1, but it looks like they consider it deprecated.)

  4. \n
5. I've created a separate use_gcc_4.9.4.sh file, with the necessary environment variables. I use source ./use_gcc_4.9.4.sh when I need to do something related to this newer compiler.

    \n\n
    #!/bin/sh\nthis=$HOME/opt/gcc-4.9.4\nexport PATH=$this/bin:$PATH\nexport CPATH=$this/include:$CPATH\nexport LIBRARY_PATH=$this/lib:$LIBRARY_PATH\nexport LIBRARY_PATH=$this/lib64:$LIBRARY_PATH\nexport LD_LIBRARY_PATH=$this/lib:$LD_LIBRARY_PATH\nexport LD_LIBRARY_PATH=$this/lib64:$LD_LIBRARY_PATH\n
  6. \n
  7. The current bazel binary (0.4.1) requires GLIBC 2.14, so we have to compile bazel from source as well (with our new gcc). Works OK, unless you are only allowed to run a very limited number of threads on the target machine. (This post describes some additional workarounds, but in my case they were not needed, maybe due to recent updates in bazel code.)

  8. \n
9. Obtain the tensorflow source code (git clone https://github.com/tensorflow/tensorflow), and install the prerequisites you need (CUDA, cuDNN, python, etc). See the official guide.

  10. \n
  11. If you're not using default system gcc (e.g. if you had to compile newer gcc, like discussed above), add the following linker flags to tensorflow/third_party/gpus/crosstool/CROSSTOOL.tpl, line 59:

    \n\n
    linker_flag: \"-L/home/username/localinst/opt/gcc-4.9.4/lib64\"\nlinker_flag: \"-Wl,-rpath,/home/username/localinst/opt/gcc-4.9.4/lib64\"\n
    \n\n

    Without this step, you would likely run into error messages like this:

    \n\n
    # ERROR: /home/username/localdistr/src/tensorflow/tensorflow/tensorflow/core/debug/BUILD:33:1: null failed: protoc failed: error executing command bazel-out/host/bin/external/protobuf/protoc '--cpp_out=bazel-out/local_linux-py3-opt/genfiles/' '--plugin=protoc-gen-grpc=bazel-out/host/bin/external/grpc/grpc_cpp_plugin' ... (remaining 8 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n
  12. \n
13. Finally, to avoid GLIBC dependencies, we have to statically link some libraries by adding the -lrt linker flag (maybe -lm as well). I found multiple posts suggesting different ways to add this:

    \n\n\n\n

Without -lrt I ran into a GLIBC-version-specific error again, trying to import tensorflow:

    \n\n
    # ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/username/anaconda3/envs/myenvname/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so)\n
    \n\n

Without -lm you may run into this (for me, it turned out not to be necessary).

  14. \n
  15. Run the build process.

  16. \n
\n\n
    source ./use_gcc_4.9.4.sh\n    ./configure\n    bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package\n    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n    pip install --upgrade /tmp/tensorflow_pkg/tensorflow-0.12.0rc0-cp35-cp35m-linux_x86_64.whl\n
\n\n
    \n
  1. Try to run the following simple python script to test if the most basic stuff is functioning:

    \n\n
    import tensorflow as tf\nhello = tf.constant('Hello, TensorFlow!')\nsess = tf.Session()\nprint(sess.run(hello))\n\na = tf.constant(10)\nb = tf.constant(32)\nprint(sess.run(a + b))\n
  2. \n
\n", "system": "" }, { "instruction": "Error importing tensorflow "AlreadyExistsError: Another metric with the same name already exists."", "input": "", "output": "

TL;DR: Ensure the Keras version matches the Tensorflow version

\n

I am experiencing the same thing with:

\n\n

The core issue appears to be that there are two Keras packages installed:

\n
<site-packages>/keras\n<site-packages>/tensorflow/python/keras\n
\n

If you look at the release notes for 2.6:\nhttps://github.com/tensorflow/tensorflow/releases/tag/v2.6.0

\n
\n

Keras been split into a separate PIP package (keras), and its code has\nbeen moved to the GitHub repository keras-team/keras. The API endpoints\nfor tf.keras stay unchanged, but are now backed by the keras PIP\npackage. The existing code in tensorflow/python/keras is a staled copy\nand will be removed in future release (2.7). Please remove any imports\nto tensorflow.python.keras and replace them with public tf.keras API\ninstead.

\n
\n

For some reason, it is still importing from both packages which is triggering the valid exception (only one Keras instance should be imported)

\n

Digging a bit further, it looks like Keras-2.7 was being installed, reverting to Keras-2.6 resolved the issue:

\n
pip install keras==2.6.*\n
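To spot the mismatch without digging through site-packages, a small helper of my own (versions_match and the example version strings are illustrative; the commented-out usage assumes both packages are installed):

```python
from importlib import metadata

def versions_match(a: str, b: str) -> bool:
    """True when two version strings agree on major.minor."""
    return a.split(".")[:2] == b.split(".")[:2]

# Usage sketch, on a machine with both packages installed:
#   tf_ver = metadata.version("tensorflow")
#   keras_ver = metadata.version("keras")
#   if not versions_match(tf_ver, keras_ver):
#       print("pin keras to the same minor version as tensorflow")

print(versions_match("2.6.1", "2.6.0"))  # True
print(versions_match("2.7.0", "2.6.0"))  # False
```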
\n

For some reason:\nhttps://github.com/tensorflow/tensorflow/blob/v2.6.1/tensorflow/tools/pip_package/setup.py#L106

\n

Is not working, maybe a bug in PIP?

\n", "system": "" }, { "instruction": "Difference between installation libraries of Tensorflow GPU vs CPU", "input": "", "output": "

Updated answer 2023 (Tensorflow 2.x and above):

\n

Verify the CPU setup:

\n
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([100, 100])))"\n
\n

If a tensor is returned, you've installed TensorFlow successfully.

\n

Verify the GPU setup:

\n
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"\n
\n

If a list of GPU devices is returned, you've installed TensorFlow successfully.

\n

Source: Tensorflow installation guide

\n

Old answer (Tensorflow 1.x):

\n

One thing to note: CUDA can be installed even if you don't have a GPU in your system.

\n

For the packages tensorflow and tensorflow-gpu, I hope this clears up the confusion. Yes/no means "Will the package work out of the box when executing import tensorflow as tf?" Here are the differences:

\n
| Support for TensorFlow libraries | tensorflow | tensorflow-gpu  |\n| for hardware type:               |    tf      |     tf-gpu      |\n|----------------------------------|------------|-----------------|\n| cpu-only                         |    yes     |   no (~tf-like) |\n| gpu with cuda+cudnn installed    |    yes     |   yes           |\n| gpu without cuda+cudnn installed |    yes     |   no (~tf-like) |\n
\n

Edit: Confirmed the no answers on a cpu-only system and the gpu without cuda+cudnn installed (by removing CUDA+CuDNN env variables).

\n

~tf-like means even though the library is tensorflow-gpu, it would behave like tensorflow library.

\n", "system": "" }, { "instruction": "Tensorflow : logits and labels must have the same first dimension", "input": "", "output": "

The problem is in your target shape and is related to the choice of an appropriate loss function. You have 2 possibilities:

\n

1. possibility: if you have a 1D integer-encoded target, you can use sparse_categorical_crossentropy as the loss function

\n
import numpy as np\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\n\nn_class = 3\nn_features = 100\nn_sample = 1000\n\nX = np.random.randint(0,10, (n_sample,n_features))\ny = np.random.randint(0,n_class, n_sample)\n\ninp = Input((n_features,))\nx = Dense(128, activation='relu')(inp)\nout = Dense(n_class, activation='softmax')(x)\n\nmodel = Model(inp, out)\nmodel.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nhistory = model.fit(X, y, epochs=3)\n
\n

2. possibility: if you have one-hot encoded your target to obtain the 2D shape (n_samples, n_class), you can use categorical_crossentropy

\n
import numpy as np\nimport pandas as pd\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\n\nn_class = 3\nn_features = 100\nn_sample = 1000\n\nX = np.random.randint(0,10, (n_sample,n_features))\ny = pd.get_dummies(np.random.randint(0,n_class, n_sample)).values\n\ninp = Input((n_features,))\nx = Dense(128, activation='relu')(inp)\nout = Dense(n_class, activation='softmax')(x)\n\nmodel = Model(inp, out)\nmodel.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nhistory = model.fit(X, y, epochs=3)\n
\n", "system": "" }, { "instruction": "Use shared GPU memory with TensorFlow?", "input": "", "output": "

Shared memory is an area of the main system RAM reserved for graphics. References:

\n\n

https://en.wikipedia.org/wiki/Shared_graphics_memory

\n\n

https://www.makeuseof.com/tag/can-shared-graphics-finally-compete-with-a-dedicated-graphics-card/

\n\n

https://youtube.com/watch?v=E5WyJY1zwcQ

\n\n

This type of memory is what integrated graphics, e.g. the Intel HD series, typically use.

\n\n

This is not on your NVIDIA GPU, and CUDA can't use it. Tensorflow can't use it when running on GPU because CUDA can't use it, and also when running on CPU because it's reserved for graphics.

\n\n

Even if CUDA could use it somehow, it wouldn't be useful, because system RAM bandwidth is around 10x less than GPU memory bandwidth, and you have to somehow get the data to and from the GPU over the slow (and high-latency) PCIe bus.

\n\n

Bandwidth numbers for reference:\nGeForce GTX 980: 224 GB/s\nDDR4 on a desktop motherboard: approx. 25 GB/s\nPCIe x16: 16 GB/s

\n\n
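A rough back-of-the-envelope calculation with the figures above (ignoring latency entirely) shows why shuttling data over PCIe dominates:

```python
# Approximate bandwidths from the numbers above, in GB/s.
GPU_MEM_BW = 224.0  # GeForce GTX 980 on-board memory
PCIE_BW = 16.0      # PCIe x16

def transfer_ms(size_gb, bandwidth_gbps):
    """Milliseconds needed to move size_gb gigabytes at the given bandwidth."""
    return size_gb / bandwidth_gbps * 1000.0

print(transfer_ms(1.0, GPU_MEM_BW))  # ~4.46 ms to read 1 GB from GPU memory
print(transfer_ms(1.0, PCIE_BW))     # 62.5 ms to pull the same 1 GB over PCIe
```

That is a factor of about 14 before latency even enters the picture.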

This doesn't take into account latency. In practice, running a GPU compute task on data which is too big to fit in GPU memory and has to be transferred over PCIe every time it is accessed is so slow for most types of compute that doing the same calculation on CPU would be much faster.

\n\n

Why do you see that kind of memory being allocated when you have an NVIDIA card in your machine? Good question. I can think of a couple of possibilities:

\n\n

(a) You have both NVIDIA and Intel graphics drivers active (e.g. as happens when running different displays on both). Uninstall the Intel drivers and/or disable Intel HD graphics in the BIOS and the shared memory will disappear.

\n\n

(b) NVIDIA is using it. This may be, e.g., extra texture memory. It could also not be real memory, but just a memory-mapped area that corresponds to GPU memory. Look in the advanced settings of the NVIDIA driver for a setting that controls this.

\n\n

In any case, no, there isn't anything that Tensorflow can use.

\n", "system": "" }, { "instruction": "Where should pre-processing and post-processing steps be executed when a TF model is served using TensorFlow serving?", "input": "", "output": "

I'm running into the same issue here. Even if I'm not 100% sure yet how to use the wordDict variable (I guess you use one too, to map the words to their ids), the main pre-process and post-process functions are defined here:

\n\n

https://www.tensorflow.org/programmers_guide/saved_model

\n\n

as export_outputs and serving_input_receiver_fn.

\n\n\n\n

export_outputs needs to be defined in the EstimatorSpec if you are using estimators. Here is an example for a classification algorithm:

\n\n
  predicted_classes = tf.argmax(logits, 1)\n  categories_tensor = tf.convert_to_tensor(CATEGORIES, tf.string)\n  export_outputs = { \"categories\": export_output.ClassificationOutput(classes=categories_tensor) }\n  if mode == tf.estimator.ModeKeys.PREDICT:\n    return tf.estimator.EstimatorSpec(\n        mode=mode,\n        predictions={\n            'class': predicted_classes,\n            'prob': tf.nn.softmax(logits)\n        },\n        export_outputs=export_outputs)\n
\n\n\n\n

serving_input_receiver_fn needs to be defined before exporting the trained estimator model. It assumes the input is a raw string and parses your input from there. You can write your own function, but I'm unsure whether you can use external variables. Here is a simple example for a classification algorithm:

\n\n
def serving_input_receiver_fn():\n    feature_spec = { \"words\": tf.FixedLenFeature(dtype=tf.int64, shape=[4]) }\n    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()\n\n  export_dir = classifier.export_savedmodel(export_dir_base=args.job_dir,\n                                            serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

Hope it helps.

\n", "system": "" }, { "instruction": "How to disable printing reports after each epoch in Keras?", "input": "", "output": "

Pass verbose=0 to the fit method of your model.

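For example (a minimal sketch; the tiny model and random arrays are only stand-ins for your own):

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(loss='mse', optimizer='adam')

x, y = np.random.rand(32, 4), np.random.rand(32, 1)

# verbose=0 silences the per-epoch reports entirely;
# verbose=2 prints one line per epoch, verbose=1 the full progress bar.
history = model.fit(x, y, epochs=3, verbose=0)
print(len(history.history['loss']))  # 3 (training still ran, just quietly)
```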
\n", "system": "" }, { "instruction": "What is the relationship between steps and epochs in TensorFlow?", "input": "", "output": "

TL;DR: An epoch is when your model goes through your whole training data once. A step is when your model trains on a single batch (or a single sample, if you send samples one by one). Training for 5 epochs on 1000 samples with 10 samples per batch will take 500 steps.

\n\n
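The arithmetic behind the TL;DR, spelled out:

```python
n_samples = 1000
batch_size = 10
n_epochs = 5

steps_per_epoch = n_samples // batch_size  # batches needed to see the data once
total_steps = steps_per_epoch * n_epochs   # steps for the whole training run

print(steps_per_epoch)  # 100
print(total_steps)      # 500
```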

The contrib.learn.io module is not documented very well, but it seems that the numpy_input_fn() function takes some numpy arrays and batches them together as input for a classifier. So, the number of epochs probably means "how many times to go through the input data I have before stopping". In this case, they feed two arrays of length 4 in 4-element batches, so it will just mean that the input function will do this at most 1000 times before raising an "out of data" exception. The steps argument in the estimator fit() function is how many times the estimator should do the training loop. This particular example is somewhat contrived, so let me make up another one to make things a bit clearer (hopefully).

\n\n

Let's say you have two numpy arrays (samples and labels) that you want to train on. They are 100 elements each. You want your training to take batches of 10 samples each. So after 10 batches you will go through all of your training data. That is one epoch. If you set your input generator to 10 epochs, it will go through your training set 10 times before stopping, that is, it will generate at most 100 batches.

\n\n

Again, the io module is not documented, but considering how other input-related APIs in tensorflow work, it should be possible to make it generate data for an unlimited number of epochs, so the only thing controlling the length of training is going to be the steps. This gives you some extra flexibility on how you want your training to progress. You can go a number of epochs at a time or a number of steps at a time or both or whatever.

\n", "system": "" }, { "instruction": "Faster RCNN for TensorFlow", "input": "", "output": "

Tensorflow has just released an official Object Detection API here, that can be used for instance with their various slim models.

\n\n

This API contains implementations of various pipelines for object detection, including the popular Faster RCNN, with their pre-trained models as well.

\n", "system": "" }, { "instruction": "Tensor flow toggle between CPU/GPU", "input": "", "output": "

To make GPU invisible

\n\n
export CUDA_VISIBLE_DEVICES=\"\"\n
\n\n

To return to normal

\n\n
unset CUDA_VISIBLE_DEVICES\n
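The same toggle works from inside a Python program, as long as the variable is set before tensorflow is imported (TensorFlow reads it once at initialization):

```python
import os

# Set this BEFORE `import tensorflow`; TensorFlow reads it once at startup.
os.environ['CUDA_VISIBLE_DEVICES'] = ''    # hide all GPUs: CPU only
# os.environ['CUDA_VISIBLE_DEVICES'] = '0' # or expose only the first GPU

print(os.environ['CUDA_VISIBLE_DEVICES'])
```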
\n", "system": "" }, { "instruction": "Choosing between GeForce or Quadro GPUs to do machine learning via TensorFlow", "input": "", "output": "

I think GeForce TITAN is great and is widely used in Machine Learning (ML). In ML, single precision is enough in most cases.

\n\n

More detail on the performance of the GTX line (currently GeForce 10) can be found in Wikipedia, here.

\n\n

Other sources around the web support this claim. Here is a quote from doc-ok in 2013 (permalink).

\n\n
\n

For comparison, an \u201centry-level\u201d $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance-equivalent to a GeForce GTX 680 I could find was a Quadro 6000 for a whopping $3660.

\n
\n\n

Specific to ML, including deep learning, there is a Kaggle forum discussion dedicated to this subject (Dec 2014, permalink), which goes over comparisons between the Quadro, GeForce, and Tesla series:

\n\n
\n

Quadro GPUs aren't for scientific computation, Tesla GPUs are. Quadro\n cards are designed for accelerating CAD, so they won't help you to\n train neural nets. They can probably be used for that purpose just\n fine, but it's a waste of money.

\n \n

Tesla cards are for scientific computation, but they tend to be pretty\n expensive. The good news is that many of the features offered by Tesla\n cards over GeForce cards are not necessary to train neural networks.

\n \n

For example, Tesla cards usually have ECC memory, which is nice to\n have but not a requirement. They also have much better support for\n double precision computations, but single precision is plenty for\n neural network training, and they perform about the same as GeForce\n cards for that.

\n \n

One useful feature of Tesla cards is that they tend to have a lot more RAM than comparable GeForce cards. More RAM is always welcome if you're planning to train bigger models (or use RAM-intensive computations like FFT-based convolutions).

\n \n

If you're choosing between Quadro and GeForce, definitely pick\n GeForce. If you're choosing between Tesla and GeForce, pick GeForce,\n unless you have a lot of money and could really use the extra RAM.

\n
\n\n

NOTE: Be careful what platform you are working on and what its default precision is. For example, here in the CUDA forums (August 2016), one developer owns two Titan X's (GeForce series) and doesn't see a performance gain in any of their R or Python scripts. This is diagnosed as a result of R defaulting to double precision, which performs worse on the new GPU than on their CPU (a Xeon processor). Tesla GPUs are cited as the best performers for double precision. In this case, converting all numbers to float32 increases performance from 12.437s with nvBLAS to 0.324s with gmatrix+float32s on one TITAN X (see first benchmark). Quoting from this forum discussion:

\n\n
\n

Double precision performance of Titan X is pretty low.

\n
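The practical takeaway from the note above: make sure your data is single precision before it reaches a GeForce card. In numpy the default is double precision, so this is an explicit cast:

```python
import numpy as np

a = np.random.rand(4, 4)      # numpy defaults to float64 (double precision)
print(a.dtype)                # float64

a32 = a.astype(np.float32)    # single precision: what GeForce cards are fast at
print(a32.dtype)              # float32
```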
\n", "system": "" }, { "instruction": "Where is the folder for Installing tensorflow with pip, Mac OSX?", "input": "", "output": "

Installing with pip installs the packages to the directory site-packages.

\n\n

The following code shows the location of tensorflow as well as where pip installs the packages:

\n\n
$ pip show tensorflow\n
\n\n

Which returns:

\n\n
Metadata-Version: 2.0\nName: tensorflow\nVersion: 0.5.0\nSummary: TensorFlow helps the tensors flow\nHome-page: http://tensorflow.com/\nAuthor: Google Inc.\nAuthor-email: opensource@google.com\nLicense: Apache 2.0\nLocation: /usr/local/lib/python2.7/site-packages\nRequires: six, numpy\n
\n\n

Here, Location: shows where the package is installed, and you can go there with

\n\n
$ cd /usr/local/lib/python2.7/site-packages/tensorflow\n
\n\n
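You can also ask Python itself where a package lives, which avoids guessing the directory (shown here with the standard library's os module as a stand-in; the same attribute works for tensorflow once it is installed):

```python
import os

# Every imported module records where it was loaded from:
print(os.__file__)

# For tensorflow it would be:
# import tensorflow
# print(tensorflow.__file__)  # ends in .../site-packages/tensorflow/__init__.py
```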
\n\n

EDIT:

\n\n

As some people pointed out, in newer versions of tensorflow, and depending on your Python version and distribution, you need to look in either

\n\n
$ cd /usr/local/lib/python{2,3}.X/{site,dist}-packages/tensorflow\n
\n\n

Or

\n\n
$ cd /usr/local/lib/python2.7/dist-packages/tensorflow/include/tensorflow/core/framework\n
\n", "system": "" }, { "instruction": "AttributeError: module 'setuptools._distutils' has no attribute 'version'", "input": "", "output": "

This is a known bug which has been patched: https://github.com/pytorch/pytorch/pull/69904

\n

You can either use the nightly release of PyTorch, or otherwise downgrade setuptools to version 59.5.0:

\n

pip install setuptools==59.5.0

\n", "system": "" }, { "instruction": "Tensorflow tf.data AUTOTUNE", "input": "", "output": "

tf.data builds a performance model of the input pipeline and runs an optimization algorithm to find a good allocation of its CPU budget across all parameters specified as AUTOTUNE. While the input pipeline is running, tf.data tracks the time spent in each operation, so that these times can be fed into the optimization algorithm.

\n
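A sketch of a pipeline that hands both the map parallelism and the prefetch buffer size to the autotuner (in TF < 2.4 the constant lives at tf.data.experimental.AUTOTUNE instead):

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

ds = (tf.data.Dataset.range(100)
      .map(lambda x: x * 2, num_parallel_calls=AUTOTUNE)  # tuned parallelism
      .batch(10)
      .prefetch(AUTOTUNE))                                # tuned buffer size

first_batch = next(iter(ds))
print(first_batch.numpy()[:3])  # [0 2 4]
```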

The OptimizationOptions object gives some control over how autotune will behave.

\n", "system": "" }, { "instruction": "Machine Learning : Tensorflow v/s Tensorflow.js v/s Brain.js", "input": "", "output": "

The speeds differ: Tensorflow > tfjs > brain.js. The Python version of Tensorflow runs natively compiled kernels and can use the CPU and GPU directly, whereas tfjs is a scripting solution that is compiled on the client and has to go through the browser's <canvas> (WebGL) to access the GPU, the same as brain.js (I am not sure whether brain.js is GPU-accelerated)

\n\n

Another thing is that tensorflow is a whole ecosystem, which is kept in sync across its versions for the different platforms, so it is really easy to port your python (keras) model to tfjs, and if you know how to code a tensorflow model you can do it in any language.

\n\n

And if you're using nodejs, the only reason to stay with tfjs and not switch to python is that you like the JavaScript language better or you are forced to use it because you are working in a JS backend.

\n\n

PS:\nA new library was just released (ML5), which is a wrapper for tfjs and adds a lot of stuff, which helps you to build and use models without having a deep machine learning background.

\n", "system": "" }, { "instruction": "What is the definition of a non-trainable parameter?", "input": "", "output": "

In keras, non-trainable parameters (as shown in model.summary()) means the number of weights that are not updated during training with backpropagation.

\n\n

There are mainly two types of non-trainable weights:

\n\n\n\n

Weights are the values inside the network that perform the operations and can be adjusted to result in what we want. The backpropagation algorithm changes the weights towards a lower error at the end.

\n\n

By default, all weights in a keras model are trainable.

\n\n

When you create layers, internally it creates its own weights and they're trainable. (The backpropagation algorithm will update these weights)

\n\n

When you make them untrainable, the algorithm will not update these weights anymore. This is useful, for instance, when you want a convolutional layer with a specific filter, like a Sobel filter, for instance. You don't want the training to change this operation, so these weights/filters should be kept constant.

\n\n

There are a lot of other reasons why you might want to make weights untrainable.

\n\n
\n\n

Changing parameters:

\n\n

For deciding whether weights are trainable or not, you take layers from the model and set trainable:

\n\n
model.get_layer(layerName).trainable = False #or True\n
\n\n
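A minimal sketch (using tf.keras; the layer names are made up) showing how freezing a layer moves its weights into the non-trainable count reported by model.summary():

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([
    Dense(4, input_shape=(3,), name='frozen'),  # 3*4 + 4 = 16 weights
    Dense(2, name='head'),                      # 4*2 + 2 = 10 weights
])

model.get_layer('frozen').trainable = False  # freeze before compiling
model.compile(loss='mse', optimizer='adam')

model.summary()  # reports Trainable params: 10, Non-trainable params: 16
```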

This must be done before compilation.

\n", "system": "" }, { "instruction": "What is the difference between model.fit() an model.evaluate() in Keras?", "input": "", "output": "

fit() is for training the model with the given inputs (and corresponding training labels).

\n\n

evaluate() is for evaluating the already trained model using the validation (or test) data and the corresponding labels. Returns the loss value and metrics values for the model.

\n\n

predict() is for the actual prediction. It generates output predictions for the input samples.

\n\n

Let us consider a simple regression example:

\n\n
import numpy as np\n\n# input and output\nx = np.random.uniform(0.0, 1.0, (200))\ny = 0.3 + 0.6*x + np.random.normal(0.0, 0.05, len(x))\n
\n\n

\"enter

\n\n

Now let's apply a regression model in keras:

\n\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\n# A simple regression model\nmodel = Sequential()\nmodel.add(Dense(1, input_shape=(1,)))\nmodel.compile(loss='mse', optimizer='rmsprop')\n\n# The fit() method - trains the model\nmodel.fit(x, y, epochs=1000, batch_size=100)\n\nEpoch 1000/1000\n200/200 [==============================] - 0s - loss: 0.0023\n\n# The evaluate() method - gets the loss statistics\nmodel.evaluate(x, y, batch_size=200)\n# returns: loss: 0.0022612824104726315\n\n# The predict() method - predict the outputs for the given inputs\nmodel.predict(np.expand_dims(x[:3],1))\n# returns: [ 0.65680361],[ 0.70067143],[ 0.70482892]\n
\n", "system": "" }, { "instruction": "Keras: How to get layer shapes in a Sequential model", "input": "", "output": "

If you want the output printed in a fancy way:

\n\n
model.summary()\n
\n\n

If you want the sizes in an accessible form

\n\n
for layer in model.layers:\n    print(layer.get_output_at(0).get_shape().as_list())\n
\n\n

There are probably better ways to access the shapes than this. Thanks to Daniel for the inspiration.

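An alternative sketch using the output_shape property each built layer exposes (the toy model here is made up):

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([
    Dense(8, input_shape=(4,)),
    Dense(2),
])

# Each built layer knows its own output shape:
for layer in model.layers:
    print(layer.name, layer.output_shape)
```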
\n", "system": "" }, { "instruction": "After building TensorFlow from source, seeing libcudart.so and libcudnn errors", "input": "", "output": "

First, for the following error:

\n\n
\n

ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

\n
\n\n

make sure your LD_LIBRARY_PATH includes the lib64 directory of whichever path you installed your cuda package in. You can do this by adding an export line to your .bashrc. For Omar, it looked like the following:

\n\n
\n

I fixed this just adding the cuda path to my .bashrc

\n \n
\n

export LD_LIBRARY_PATH=/usr/local/cuda/lib64/

\n
\n
\n\n
\n\n

For me, I had to do Omar's line and also:\n export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/\nbecause I have two directories involving cuda (probably not the best).

\n\n
\n\n

Second, are you sure you installed cuDNN? Note that this is different from the regular cuda package. You will need to register, then download and install the package from the following page:\nhttps://developer.nvidia.com/cudnn

\n\n
\n\n

Third, I had this same problem:

\n\n
\n

ImportError: libcudnn.5: cannot open shared object file: No such file or directory

\n
\n\n

It turns out there is no libcudnn.5 in my /usr/local/cuda/lib64 or /usr/local/cuda-8.0/lib64 directories. However, I do have a libcudnn.so.6.* file. To solve the problem, I created a soft link:

\n\n
ln -s libcudnn.so.6.* libcudnn.so.5\n
\n\n

in my /usr/local/cuda/lib64 directory. Now everything works for me. Your directory might be different if you already had cuDNN, and your libcudnn.so.6.* might be a different version, so check that.

\n", "system": "" }, { "instruction": "How does Keras define "accuracy" and "loss"?", "input": "", "output": "

Have a look at metrics.py, there you can find definition of all available metrics including different types of accuracy. Accuracy is not printed unless you add it to the list of desired metrics when you compile your model.

\n\n

Regularizers are by definition added to the loss. For example, see the add_loss method of the Layer class.

\n\n

Update

\n\n

The type of accuracy is determined based on the objective function, see training.py. The default choice is categorical_accuracy. Other types like binary_accuracy and sparse_categorical_accuracy are selected when the objective function is either binary or sparse.

\n", "system": "" }, { "instruction": "TensorFlow: numpy.repeat() alternative", "input": "", "output": "

You can achieve the effect of np.repeat() using a combination of tf.tile() and tf.reshape():

\n\n
idx = tf.range(len(yp))\nidx = tf.reshape(idx, [-1, 1])    # Convert to a len(yp) x 1 matrix.\nidx = tf.tile(idx, [1, len(yp)])  # Create multiple columns.\nidx = tf.reshape(idx, [-1])       # Convert back to a vector.\n
\n\n

You can simply compute jdx using tf.tile():

\n\n
jdx = tf.range(len(yp))\njdx = tf.tile(jdx, [len(yp)])\n
\n\n
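You can sanity-check the index pattern against numpy itself (a sketch, with n = 3 standing in for len(yp)):

```python
import numpy as np

n = 3  # stands in for len(yp)

# np.repeat-style indices, built with tile + reshape exactly as above:
idx = np.reshape(np.tile(np.reshape(np.arange(n), [-1, 1]), [1, n]), [-1])
print(idx)                          # [0 0 0 1 1 1 2 2 2]
print(np.repeat(np.arange(n), n))   # identical

# np.tile-style indices:
jdx = np.tile(np.arange(n), n)
print(jdx)                          # [0 1 2 0 1 2 0 1 2]
```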

For the indexing, you could try using tf.gather() to extract non-contiguous slices from the yp tensor:

\n\n
s = tf.gather(yp, idx) - tf.gather(yp, jdx)\n
\n", "system": "" }, { "instruction": "How to convert keras(h5) file to a tflite file?", "input": "", "output": "
from tensorflow.contrib import lite\nconverter = lite.TFLiteConverter.from_keras_model_file( 'model.h5')\ntfmodel = converter.convert()\nopen (\"model.tflite\" , \"wb\") .write(tfmodel)\n
\n\n

You can use the TFLiteConverter to directly convert .h5 files to .tflite files.\nThis does not work on Windows.

\n\n
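In TF 2.x the converter moved out of contrib; a sketch of the equivalent (a throwaway model stands in here for your saved model.h5):

```python
import tensorflow as tf

# Stand-in model; in practice you would just load_model('model.h5') on yours.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.save('model.h5')

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```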

For Windows, use this Google Colab notebook to convert. Upload the .h5 file and it will convert it .tflite file.

\n\n

Follow these steps if you want to try it yourself:

\n\n
    \n
  1. Create a Google Colab Notebook. In the left top corner, click the \"UPLOAD\" button and upload your .h5 file.
  2. \n
  3. Create a code cell and insert this code.

    \n\n
    from tensorflow.contrib import lite\nconverter = lite.TFLiteConverter.from_keras_model_file( 'model.h5' ) # Your model's name\nmodel = converter.convert()\nfile = open( 'model.tflite' , 'wb' ) \nfile.write( model )\n
  4. \n
  5. Run the cell. You will get a model.tflite file. Right click on the file and select \"DOWNLOAD\" option.

  6. \n
\n", "system": "" }, { "instruction": "Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory", "input": "", "output": "

Found the solution:

\n\n

I reinstalled nvidia-381, CUDA-8.0 (using the runfile) and cuDNN 6.0. Then I added the following in my .bashrc:

\n\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/\n
\n", "system": "" }, { "instruction": "ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context'", "input": "", "output": "

Instead of:

\n
import keras\n
\n

Try:

\n
from tensorflow import keras \n
\n", "system": "" }, { "instruction": "Why Bert transformer uses [CLS] token for classification instead of average over all tokens?", "input": "", "output": "

BERT is designed primarily for transfer learning, i.e., fine-tuning on task-specific datasets. If you average the states, every state is averaged with the same weight, including stop words and other tokens that are not relevant for the task. The [CLS] vector gets computed using self-attention (like everything in BERT), so it can only collect the relevant information from the rest of the hidden states. So, in some sense, the [CLS] vector is also an average over token vectors, only more cleverly computed, specifically for the tasks that you fine-tune on.

\n

Also, my experience is that when I keep the weights fixed and do not fine-tune BERT, using the token average yields better results.

\n", "system": "" }, { "instruction": "parallelising tf.data.Dataset.from_generator", "input": "", "output": "

\nTurns out I can use Dataset.map if I make the generator super lightweight (only generating metadata) and then move the actual heavy lifting into a stateless function. This way I can parallelise just the heavy-lifting part with .map using a py_func.

\n\n

Works; but feels a tad clumsy... Would be great to be able to just add num_parallel_calls to from_generator :)

\n\n
def pure_numpy_and_pil_complex_calculation(metadata, label):\n  # some complex pil and numpy work, nothing to do with tf\n  ...\n\ndataset = tf.data.Dataset.from_generator(lightweight_generator,\n                                         output_types=(tf.string,   # metadata\n                                                       tf.string))  # label\n\ndef wrapped_complex_calculation(metadata, label):\n  return tf.py_func(func = pure_numpy_and_pil_complex_calculation,\n                    inp = (metadata, label),\n                    Tout = (tf.uint8,    # (H,W,3) img\n                            tf.string))  # label\ndataset = dataset.map(wrapped_complex_calculation,\n                      num_parallel_calls=8)\n\ndataset = dataset.batch(64)\niter = dataset.make_one_shot_iterator()\nimgs, labels = iter.get_next()\n
\n", "system": "" }, { "instruction": "CUDA_HOME path for Tensorflow", "input": "", "output": "

Run the following command in the terminal:

\n\n
export CUDA_HOME=/usr/local/cuda-X.X\n
\n\n

where you replace X.X with the major.minor version of your CUDA install (you can find it out e.g. via nvcc --version).

\n", "system": "" }, { "instruction": "LSTM Autoencoder", "input": "", "output": "

Models can be any way you want. If I understood it right, you just want to know how to create models with LSTM?

\n\n

Using LSTMs

\n\n

Well, first, you have to define what your encoded vector looks like. Suppose you want it to be an array of 20 elements, a 1-dimensional vector. So, shape (None,20). Its size is up to you, and there is no clear rule to know the ideal one.

\n\n

And your input must be three-dimensional, such as your (1200,10,5). In keras summaries and error messages, it will be shown as (None,10,5), as \"None\" represents the batch size, which can vary each time you train/predict.

\n\n

There are many ways to do this, but, suppose you want only one LSTM layer:

\n\n
from keras.layers import *\nfrom keras.models import Model\n\ninpE = Input((10,5)) #here, you don't define the batch size   \noutE = LSTM(units = 20, return_sequences=False, ...optional parameters...)(inpE)\n
\n\n

This is enough for a very very simple encoder resulting in an array with 20 elements (but you can stack more layers if you want). Let's create the model:

\n\n
encoder = Model(inpE,outE)   \n
\n\n

Now, for the decoder, it gets obscure. You don't have an actual sequence anymore, but a static meaningful vector. You may still want to use LSTMs; they will suppose the vector is a sequence.

\n\n

But here, since the input has shape (None,20), you must first reshape it to some 3-dimensional array in order to attach an LSTM layer next.

\n\n

The way you will reshape it is entirely up to you. 20 steps of 1 element? 1 step of 20 elements? 10 steps of 2 elements? Who knows?

\n\n
inpD = Input((20,))   \noutD = Reshape((10,2))(inpD) #supposing 10 steps of 2 elements    \n
\n\n

It's important to notice that if you don't have 10 steps anymore, you won't be able to just enable "return_sequences" and have the output you want. You'll have to work a little. Actually, it's not necessary to use "return_sequences" or even to use LSTMs, but you may do that.

\n\n

Since in my reshape I have 10 timesteps (intentionally), it will be ok to use \"return_sequences\", because the result will have 10 timesteps (as the initial input)

\n\n
outD1 = LSTM(5,return_sequences=True,...optional parameters...)(outD)    \n#5 cells because we want a (None,10,5) vector.   \n
\n\n

You could work in many other ways, such as simply creating a 50 cell LSTM without returning sequences and then reshaping the result:

\n\n
alternativeOut = LSTM(50,return_sequences=False,...)(outD)    \nalternativeOut = Reshape((10,5))(alternativeOut)\n
\n\n

And our model goes:

\n\n
decoder = Model(inpD,outD1)  \nalternativeDecoder = Model(inpD,alternativeOut)   \n
\n\n

After that, you unite the models with your code and train the autoencoder. \nAll three models will have the same weights, so you can make the encoder bring results just by using its predict method.

\n\n
encoderPredictions = encoder.predict(data)\n
\n\n
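Putting the pieces together, a complete runnable sketch (random data stands in for your (1200,10,5) array, and the shapes follow the example above):

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Reshape
from tensorflow.keras.models import Model

inpE = Input((10, 5))
outE = LSTM(20, return_sequences=False)(inpE)
encoder = Model(inpE, outE)

inpD = Input((20,))
outD = Reshape((10, 2))(inpD)               # 10 steps of 2 elements
outD = LSTM(5, return_sequences=True)(outD)  # back to (None, 10, 5)
decoder = Model(inpD, outD)

inp = Input((10, 5))
autoencoder = Model(inp, decoder(encoder(inp)))  # unite the two models
autoencoder.compile(loss='mse', optimizer='adam')

data = np.random.rand(32, 10, 5).astype('float32')
autoencoder.fit(data, data, epochs=1, verbose=0)  # target = input

print(encoder.predict(data, verbose=0).shape)      # (32, 20)
print(autoencoder.predict(data, verbose=0).shape)  # (32, 10, 5)
```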
\n\n

What I often see about LSTMs for generating sequences is something like predicting the next element.

\n\n

You take just a few elements of the sequence and try to find the next element. And you take another segment one step forward and so on. This may be helpful in generating sequences.

\n", "system": "" }, { "instruction": "Running Tensorflow in Jupyter Notebook", "input": "", "output": "

I ran into the same case. This is how I sorted it out:

\n
    \n
  1. Install Anaconda
  2. \n
  3. Create a virtual environment - conda create -n tensorflow
  4. \n
  5. Go inside your virtual environment - (on macOS/Linux:) source activate tensorflow (on Windows: activate tensorflow)
  6. \n
  7. Inside that install tensorflow. You can install it using pip
  8. \n
  9. Finish install
  10. \n
\n

So then the next thing, when you launch it:

\n
    \n
1. If you are not inside the virtual environment, type source activate tensorflow
  2. \n
3. Then, inside this, again install your Jupyter notebook and pandas libraries, because they can be missing in this virtual environment
  4. \n
\n

Inside the virtual environment just type:

\n
    \n
  1. pip install jupyter notebook
  2. \n
  3. pip install pandas
  4. \n
\n

Then you can launch jupyter notebook saying:

\n
    \n
  1. jupyter notebook
  2. \n
  3. Select the correct terminal python 3 or 2
  4. \n
  5. Then import those modules
  6. \n
\n", "system": "" }, { "instruction": "Update TensorFlow", "input": "", "output": "
(tensorflow)$ pip install --upgrade pip  # for Python 2.7\n(tensorflow)$ pip3 install --upgrade pip # for Python 3.n\n\n(tensorflow)$ pip install --upgrade tensorflow      # for Python 2.7\n(tensorflow)$ pip3 install --upgrade tensorflow     # for Python 3.n\n(tensorflow)$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU\n(tensorflow)$ pip3 install --upgrade tensorflow-gpu # for Python 3.n and GPU\n\n(tensorflow)$ pip install --upgrade tensorflow-gpu==1.4.1 # for a specific version\n
\n\n

Details on install tensorflow.

\n", "system": "" }, { "instruction": "How to convert one-hot encodings into integers?", "input": "", "output": "

You can use numpy.argmax or tf.argmax. Example:

\n\n
import numpy as np  \na  = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1]])\nprint('np.argmax(a, axis=1): {0}'.format(np.argmax(a, axis=1)))\n
\n\n

output:

\n\n
np.argmax(a, axis=1): [1 0 3]\n
\n\n
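The tf.argmax variant looks the same (a sketch, assuming TF 2.x eager mode):

```python
import numpy as np
import tensorflow as tf

a = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]])
decoded = tf.argmax(a, axis=1)
print(decoded.numpy())  # [1 0 3]
```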

You may also want to look at sklearn.preprocessing.LabelBinarizer.inverse_transform.

\n", "system": "" }, { "instruction": "Linear vs nonlinear neural network?", "input": "", "output": "

For starters, a neural network can model any function (not just linear functions). Have a look at this: http://neuralnetworksanddeeplearning.com/chap4.html.

\n\n

A neural network has non-linear activation layers, which is what gives the neural network its non-linear element.

\n\n
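You can see why the non-linear activation matters with plain numpy: two stacked linear (activation-free) layers collapse into a single linear map, while a relu in between breaks that equivalence (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=(5, 3))

# Two linear layers are equivalent to ONE linear layer with weights W1 @ W2:
linear_stack = (x @ W1) @ W2
collapsed = x @ (W1 @ W2)
print(np.allclose(linear_stack, collapsed))  # True

# With a relu in between, no single matrix reproduces the mapping:
nonlinear = np.maximum(x @ W1, 0) @ W2
print(np.allclose(nonlinear, collapsed))  # False
```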

The function relating the input and the output is decided by the neural network and the amount of training it gets. If you supply two variables having a linear relationship, then your network will learn this, as long as you don't overfit. Similarly, a complex enough neural network can learn any function.

\n", "system": "" }, { "instruction": "How do you read Tensorboard files programmatically?", "input": "", "output": "

You can use TensorBoard's Python classes or script to extract the data:

\n

How can I export data from TensorBoard?

\n
\n

If you'd like to export data to visualize elsewhere (e.g. iPython Notebook), that's possible too. You can directly depend on the underlying classes that TensorBoard uses for loading data: python/summary/event_accumulator.py (for loading data from a single run) or python/summary/event_multiplexer.py (for loading data from multiple runs, and keeping it organized). These classes load groups of event files, discard data that was "orphaned" by TensorFlow crashes, and organize the data by tag.

\n

As another option, there is a script (tensorboard/scripts/serialize_tensorboard.py) which will load a logdir just like TensorBoard does, but write all of the data out to disk as json instead of starting a server. This script is setup to make "fake TensorBoard backends" for testing, so it is a bit rough around the edges.

\n
\n

Using EventAccumulator:

\n
# In [1]: from tensorflow.python.summary import event_accumulator  # deprecated\nIn [1]: from tensorboard.backend.event_processing import event_accumulator\n\nIn [2]: ea = event_accumulator.EventAccumulator('events.out.tfevents.x.ip-x-x-x-x',\n   ...:  size_guidance={ # see below regarding this argument\n   ...:      event_accumulator.COMPRESSED_HISTOGRAMS: 500,\n   ...:      event_accumulator.IMAGES: 4,\n   ...:      event_accumulator.AUDIO: 4,\n   ...:      event_accumulator.SCALARS: 0,\n   ...:      event_accumulator.HISTOGRAMS: 1,\n   ...:  })\n\nIn [3]: ea.Reload() # loads events from file\nOut[3]: <tensorflow.python.summary.event_accumulator.EventAccumulator at 0x7fdbe5ff59e8>\n\nIn [4]: ea.Tags()\nOut[4]: \n{'audio': [],\n 'compressedHistograms': [],\n 'graph': True,\n 'histograms': [],\n 'images': [],\n 'run_metadata': [],\n 'scalars': ['Loss', 'Epsilon', 'Learning_rate']}\n\nIn [5]: ea.Scalars('Loss')\nOut[5]: \n[ScalarEvent(wall_time=1481232633.080754, step=1, value=1.6365480422973633),\n ScalarEvent(wall_time=1481232633.2001867, step=2, value=1.2162202596664429),\n ScalarEvent(wall_time=1481232633.3877788, step=3, value=1.4660096168518066),\n ScalarEvent(wall_time=1481232633.5749283, step=4, value=1.2405034303665161),\n ScalarEvent(wall_time=1481232633.7419815, step=5, value=0.897326648235321),\n ...]\n
\n

size_guidance:

\n
size_guidance: Information on how much data the EventAccumulator should\n  store in memory. The DEFAULT_SIZE_GUIDANCE tries not to store too much\n  so as to avoid OOMing the client. The size_guidance should be a map\n  from a `tagType` string to an integer representing the number of\n  items to keep per tag for items of that `tagType`. If the size is 0,\n  all events are stored.\n
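Each ScalarEvent returned by ea.Scalars(tag) is a (wall_time, step, value) named tuple; a sketch of unpacking them into plottable lists (using a stand-in namedtuple with made-up values here, since loading a real event file requires the TensorBoard packages):

```python
from collections import namedtuple

# stand-in for the ScalarEvent tuples returned by ea.Scalars(tag)
ScalarEvent = namedtuple('ScalarEvent', ['wall_time', 'step', 'value'])
events = [ScalarEvent(0.0, 1, 1.63),
          ScalarEvent(0.1, 2, 1.21),
          ScalarEvent(0.2, 3, 1.46)]

steps = [e.step for e in events]
values = [e.value for e in events]
assert steps == [1, 2, 3]
```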
\n", "system": "" }, { "instruction": "How to manually create a tf.Summary()", "input": "", "output": "

You can create a tf.Summary object in your Python program and write it to the same tf.summary.FileWriter object that takes your TensorFlow-produced summaries using the SummaryWriter.add_summary() method.

\n\n

The tf.Summary class is a Python protocol buffer wrapper for the Summary protocol buffer. Each Summary contains a list of tf.Summary.Value protocol buffers, which each have a tag and either a \"simple\" (floating-point scalar) value, an image, a histogram, or an audio snippet. For example, you can generate a scalar summary from a Python object as follows:

\n\n
writer = tf.summary.FileWriter(...)  # tf.train.SummaryWriter in older TF versions\nvalue = 37.0\nsummary = tf.Summary(value=[\n    tf.Summary.Value(tag=\"summary_tag\", simple_value=value), \n])\nwriter.add_summary(summary)\n
\n", "system": "" }, { "instruction": "Compute pairwise distance in a batch without replicating tensor in Tensorflow?", "input": "", "output": "

You can use some linear algebra to turn it into matrix ops. Note that what you need is the matrix D where a[i] is the ith row of your original matrix and

\n\n
D[i,j] = (a[i]-a[j])(a[i]-a[j])'\n
\n\n

You can rewrite that into

\n\n
D[i,j] = r[i] - 2 a[i]a[j]' + r[j]\n
\n\n

Where r[i] is squared norm of ith row of the original matrix.

\n\n

In a system that supports standard broadcasting rules you can treat r as a column vector and write D as

\n\n
D = r - 2 A A' + r'\n
\n\n

In TensorFlow you could write this as

\n\n
A = tf.constant([[1, 1], [2, 2], [3, 3]])\nr = tf.reduce_sum(A*A, 1)\n\n# turn r into column vector\nr = tf.reshape(r, [-1, 1])\nD = r - 2*tf.matmul(A, tf.transpose(A)) + tf.transpose(r)\nsess = tf.Session()\nsess.run(D)\n
\n\n

result

\n\n
array([[0, 2, 8],\n       [2, 0, 2],\n       [8, 2, 0]], dtype=int32)\n
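As a sanity check, the same identity can be verified in plain NumPy against a brute-force pairwise computation:

```python
import numpy as np

A = np.array([[1, 1], [2, 2], [3, 3]])

# vectorized form: D = r - 2*A*A' + r'
r = np.sum(A * A, axis=1).reshape(-1, 1)   # squared row norms as a column vector
D = r - 2 * A @ A.T + r.T

# brute force: D[i, j] = ||a[i] - a[j]||^2
D_brute = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)

assert np.array_equal(D, D_brute)
print(D)   # [[0 2 8] [2 0 2] [8 2 0]]
```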
\n", "system": "" }, { "instruction": "od_graph_def = tf.GraphDef() AttributeError: module 'tensorflow' has no attribute 'GraphDef'", "input": "", "output": "

Yeah, the syntax has changed in TF 2.0. Here's the correct piece:

\n\n
tf.compat.v1.GraphDef()   # -> instead of tf.GraphDef()\ntf.compat.v2.io.gfile.GFile()   # -> instead of tf.gfile.GFile()\n
\n", "system": "" }, { "instruction": "How do I split Tensorflow datasets?", "input": "", "output": "

You may use Dataset.take() and Dataset.skip():

\n\n
train_size = int(0.7 * DATASET_SIZE)\nval_size = int(0.15 * DATASET_SIZE)\ntest_size = int(0.15 * DATASET_SIZE)\n\nfull_dataset = tf.data.TFRecordDataset(FLAGS.input_file)\n# shuffle() requires a buffer size; reshuffle_each_iteration=False keeps\n# the three splits disjoint if the dataset is iterated more than once\nfull_dataset = full_dataset.shuffle(DATASET_SIZE, reshuffle_each_iteration=False)\ntrain_dataset = full_dataset.take(train_size)\ntest_dataset = full_dataset.skip(train_size)\nval_dataset = test_dataset.skip(test_size)\ntest_dataset = test_dataset.take(test_size)\n
\n\n

For more generality, I gave an example using a 70/15/15 train/val/test split but if you don't need a test or a val set, just ignore the last 2 lines.

\n\n
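The take/skip bookkeeping is easy to get wrong, so here is the same 70/15/15 split sketched with a plain Python list standing in for the (already shuffled) dataset, with hypothetical sizes:

```python
data = list(range(100))          # stand-in for a shuffled dataset of 100 records
train_size, val_size, test_size = 70, 15, 15

train = data[:train_size]        # full_dataset.take(train_size)
rest = data[train_size:]         # full_dataset.skip(train_size)
val = rest[test_size:]           # test_dataset.skip(test_size)
test = rest[:test_size]          # test_dataset.take(test_size)

assert (len(train), len(val), len(test)) == (70, 15, 15)
assert sorted(train + val + test) == data   # no overlap, nothing lost
```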

Take:

\n\n
\n

Creates a Dataset with at most count elements from this dataset.

\n
\n\n

Skip:

\n\n
\n

Creates a Dataset that skips count elements from this dataset.

\n
\n\n

You may also want to look into Dataset.shard():

\n\n
\n

Creates a Dataset that includes only 1/num_shards of this dataset.

\n
\n", "system": "" }, { "instruction": "Choosing number of Steps per Epoch", "input": "", "output": "

Based on what you said it sounds like you need a larger batch_size, and of course there are implications with that which could impact the steps_per_epoch and number of epochs.

\n\n

To solve for jumping-around

\n\n\n\n

Implications of a larger batch-size

\n\n\n\n

When to reduce epochs

\n\n\n\n

When to adjust steps-per-epoch

\n\n\n", "system": "" }, { "instruction": "No module named 'absl' error when I import tensorflow", "input": "", "output": "

This was caused by a Python version issue for me. I had the absl package installed on my Python 2.x, but my Python 3.x didn't have it. So I just made sure that both Pythons on my machine had the package installed:

\n\n

pip install absl-py
\npip3 install absl-py

\n", "system": "" }, { "instruction": "How do I specify nvidia runtime from docker-compose.yml?", "input": "", "output": "

Currently (Aug 2018), NVIDIA container runtime for Docker (nvidia-docker2) supports Docker Compose.

\n\n
\n

Yes, use Compose format 2.3 and add runtime: nvidia to your GPU service. Docker Compose must be version 1.19.0 or higher.

\n
\n\n

Example docker-compose.yml:

\n\n
version: '2.3'\n\nservices:\n  nvsmi:\n    image: ubuntu:16.04\n    runtime: nvidia\n    environment:\n      - NVIDIA_VISIBLE_DEVICES=all\n    command: nvidia-smi\n
\n\n

More examples from the NVIDIA blog use Docker Compose to show how to launch multiple GPU containers with the NVIDIA Container Runtime.

\n", "system": "" }, { "instruction": "Tensorflow import error: No module named 'tensorflow'", "input": "", "output": "

The reason the Python 3.5 environment is unable to import TensorFlow is that Anaconda does not store the tensorflow package in the same environment.

\n\n

One solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder

\n\n
conda create -n newenvt anaconda python=3.5\nactivate newenvt\n
\n\n

and then install tensorflow into newenvt

\n\n

I found this primer helpful

\n", "system": "" }, { "instruction": "Tensorflow: loss decreasing, but accuracy stable", "input": "", "output": "

A decrease in binary cross-entropy loss does not imply an increase in accuracy. Consider label 1, predictions 0.2, 0.4 and 0.6 at timesteps 1, 2, 3 and a classification threshold of 0.5. Timesteps 1 and 2 will produce a decrease in loss but no increase in accuracy, because the prediction only crosses the threshold at timestep 3.

\n\n
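That example is easy to verify numerically, using the binary cross-entropy formula -log(q) for a positive label:

```python
import numpy as np

label = 1
preds = np.array([0.2, 0.4, 0.6])   # predictions at timesteps 1, 2, 3
losses = -np.log(preds)             # binary cross-entropy when the label is 1
correct = preds > 0.5               # accuracy at threshold 0.5

# loss decreases at every timestep...
assert losses[0] > losses[1] > losses[2]
# ...but the prediction is only counted as correct at timestep 3
assert list(correct) == [False, False, True]
```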

Ensure that your model has enough capacity by overfitting the training data. If the model is overfitting the training data, avoid overfitting by using regularization techniques such as dropout, L1 and L2 regularization and data augmentation.

\n\n

Last, confirm your validation data and training data come from the same distribution.

\n", "system": "" }, { "instruction": "looking for source code of from gen_nn_ops in tensorflow", "input": "", "output": "

You can't find this source because the source is automatically generated by bazel. If you build from source, you'll see this file inside bazel-genfiles. It's also present in your local distribution, which you can find using the inspect module. The file contains automatically generated Python wrappers to underlying C++ implementations, so it basically consists of a bunch of 1-line functions. A shortcut to find the underlying C++ implementation of such a generated Python op is to convert snake case to camel case, i.e. conv2d_backprop_input -> Conv2dBackpropInput

\n\n
# figure out where gen_nn_ops is\nprint(tf.nn.conv2d_transpose.__globals__['gen_nn_ops'])\n\nfrom tensorflow.python.ops import gen_nn_ops\nimport inspect\ninspect.getsourcefile(gen_nn_ops)  # takes a module/function object, not a string\n'/Users/yaroslav/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py'\n
\n\n
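The snake-case to camel-case shortcut mentioned above is mechanical enough to script (a rough sketch; it covers names like this one, but not every op-name corner case):

```python
def snake_to_camel(name):
    # conv2d_backprop_input -> Conv2dBackpropInput
    return ''.join(part.capitalize() for part in name.split('_'))

assert snake_to_camel('conv2d_backprop_input') == 'Conv2dBackpropInput'
```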

If you cared to find out how this file really came about, you could follow the trail of Bazel dependencies in the BUILD files to find the Bazel target that generated it from the TensorFlow source tree:

\n\n
fullname=$(bazel query tensorflow/python/ops/gen_nn_ops.py)\nbazel query \"attr('srcs', $fullname, ${fullname//:*/}:*)\"\n\n//tensorflow/python:nn_ops_gen\n
\n\n

So now, going to the BUILD file inside tensorflow/python, you see that this is a target of type tf_gen_op_wrapper_private_py, which is defined here and calls tf_gen_op_wrapper_py from tensorflow/tensorflow.bzl, which looks like this

\n\n
def tf_gen_op_wrapper_py(name, out=None, hidden=None, visibility=None, deps=[],\n....\n      native.cc_binary(\n      name = tool_name,\n
\n\n

This native.cc_binary construct is a way to have a Bazel target that represents execution of an arbitrary command. In this case it calls tool_name with some arguments. With a couple more steps you can find that the \"tool\" here is compiled from framework/python_op_gen_main.cc

\n\n

The reason for this complication is that TensorFlow was designed to be language agnostic. So in an ideal world you would have each op described in ops.pbtxt, and then each op would have one implementation per hardware type using REGISTER_KERNEL_BUILDER, so all implementations would be done in C++/CUDA/Assembly and become automatically available to all language front-ends. There would be an equivalent translator op like \"python_op_gen_main\" for every language and all client library code would be automatically generated. However, because Python is so dominant, there was pressure to add features on the Python side. So now there are two kinds of ops -- pure TensorFlow ops seen in files like gen_nn_ops.py, and Python-only ops in files like nn_ops.py which typically wrap ops from the automatically generated files like gen_nn_ops.py but add extra features/syntax sugar. Also, originally all names were camel-case, but it was decided that the public facing release should be PEP compliant with more common Python syntax, so this is the reason for the camel-case/snake-case mismatch between the C++ and Python interfaces of the same op

\n", "system": "" }, { "instruction": "Tensorflow variable scope: reuse if variable exists", "input": "", "output": "

A ValueError is raised in get_variable() when creating a new variable and shape is not declared, or when violating reuse during variable creation. Therefore, you can try this:

\n\n
def get_scope_variable(scope_name, var, shape=None):\n    with tf.variable_scope(scope_name) as scope:\n        try:\n            v = tf.get_variable(var, shape)\n        except ValueError:\n            scope.reuse_variables()\n            v = tf.get_variable(var)\n    return v\n\nv1 = get_scope_variable('foo', 'v', [1])\nv2 = get_scope_variable('foo', 'v')\nassert v1 == v2\n
\n\n

Note that the following also works:

\n\n
v1 = get_scope_variable('foo', 'v', [1])\nv2 = get_scope_variable('foo', 'v', [1])\nassert v1 == v2\n
\n\n
\n\n

UPDATE. The new API supports auto-reusing now:

\n\n
def get_scope_variable(scope, var, shape=None):\n    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):\n        v = tf.get_variable(var, shape)\n    return v\n
\n", "system": "" }, { "instruction": "How does TensorFlow name tensors?", "input": "", "output": "

Your observations on Tensor naming are absolutely correct: the name of a Tensor is the concatenation of

\n\n
    \n
  1. the name of the operation that produced it,
  2. \n
  3. a colon (:), and
  4. \n
  5. the index of that tensor in the outputs of the operation that produced it.
  6. \n
\n\n

Therefore the tensor named \"foo:2\" is the output of the op named \"foo\" at position 2 (with indices starting from zero).

\n\n
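Going the other way, a tensor name can be split back into its producing op's name and output index (a trivial sketch; op names themselves cannot contain a colon):

```python
def parse_tensor_name(name):
    # "foo:2" -> op "foo", output index 2
    op_name, _, index = name.rpartition(':')
    return op_name, int(index)

assert parse_tensor_name('foo:2') == ('foo', 2)
assert parse_tensor_name('variable_name:0') == ('variable_name', 0)
```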

The naming of tf.Variable objects is slightly strange. Every tf.Variable contains a mutable tensor object that holds the state of the variable (and a few other tensors). A \"Variable\" op (which has the name \"variable_name\" in your example) \"produces\" this mutable tensor each time it is run as its 0th output, so the name of the mutable tensor is \"variable_name:0\".

\n\n

Since a tf.Variable is mostly indistinguishable from a tf.Tensor—in that it can be used in the same places—we took the decision to make variable names resemble tensor names, so the Variable.name property returns the name of the mutable tensor. (This contrasts with tf.QueueBase and tf.ReaderBase objects, which are not usable directly as tensors (instead you have to call methods on them to create ops that operate on their state), so these do not have a tensor-like name.)

\n", "system": "" }, { "instruction": "Tensorflow Precision / Recall / F1 score and Confusion matrix", "input": "", "output": "

You do not really need sklearn to calculate precision/recall/f1 score. You can easily express them in TF-ish way by looking at the formulas:

\n\n

\"enter

\n\n

Now if you have your actual and predicted values as vectors of 0/1, you can calculate TP, TN, FP, FN using tf.count_nonzero:

\n\n
TP = tf.count_nonzero(predicted * actual)\nTN = tf.count_nonzero((predicted - 1) * (actual - 1))\nFP = tf.count_nonzero(predicted * (actual - 1))\nFN = tf.count_nonzero((predicted - 1) * actual)\n
\n\n

Now your metrics are easy to calculate:

\n\n
precision = TP / (TP + FP)\nrecall = TP / (TP + FN)\nf1 = 2 * precision * recall / (precision + recall)\n
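The same arithmetic can be checked in plain NumPy (np.count_nonzero behaves like tf.count_nonzero here), with a small made-up batch:

```python
import numpy as np

predicted = np.array([1, 0, 1, 1, 0])
actual    = np.array([1, 0, 0, 1, 1])

TP = np.count_nonzero(predicted * actual)              # both 1               -> 2
TN = np.count_nonzero((predicted - 1) * (actual - 1))  # both 0               -> 1
FP = np.count_nonzero(predicted * (actual - 1))        # predicted 1, actual 0 -> 1
FN = np.count_nonzero((predicted - 1) * actual)        # predicted 0, actual 1 -> 1

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

assert (TP, TN, FP, FN) == (2, 1, 1, 1)
assert np.isclose(f1, 2 / 3)
```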
\n", "system": "" }, { "instruction": "Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found", "input": "", "output": "

I think I can help you by providing a cudnn64_8.dll file (this is the download link: https://www.dll-files.com/cudnn64_8.dll.html). When you get the file, you can just put it in your bin directory. For example, on the Windows platform you would usually put it into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin.

\n", "system": "" }, { "instruction": "TensorFlow: How and why to use SavedModel", "input": "", "output": "

EDIT: I wrote this back at TensorFlow 1.4. As of today (TensorFlow 1.12 is stable, there's a 1.13rc and 2.0 is around the corner) the docs linked in the question are much improved.

\n\n
\n\n

I'm trying to use tf.saved_model and also found the Docs quite (too) abstract. Here's my stab at a full answer to your questions:

\n\n

1. signature_def_map:

\n\n

a. Format See Tom's answer to Tensorflow: how to save/restore a model. (Ctrl-F for \"tf.saved_model\" - currently, the only uses of the phrase on that question are in his answer).

\n\n

b. need It's my understanding that you do normally need it. If you intend to use the model, you need to know the inputs and outputs of the graph. I think it is akin to a C++ function signature: If you intend to define a function after it's called or in another C++ file, you need the signature in your main file (i.e. prototyped or in a header file).

\n\n

2. assets_collection:

\n\n

format: Couldn't find clear documentation, so I went to the builder source code. It appears that the argument is an iterable of Tensors of dtype=tf.string, where each Tensor is a path for the asset directory. So, a TensorFlow Graph collection should work. I guess that is the parameter's namesake, but from the source code I would expect a Python list to work too.

\n\n

(You didn't ask if you need to set it, but judging from Zoe's answer to What are assets in tensorflow? and iga's answer to the tangentially related Tensorflow serving: \u201cNo assets to save/writes\u201d when exporting models, it doesn't usually need set.)

\n\n

3. Tags:

\n\n

a. Why list I don't know why you must pass a list, but you may pass a list with one element. For instance, in my current project I only use the [tf...tag_constants.SERVING] tag.

\n\n

b. When to use multiple Say you're using explicit device placement for operations. Maybe you want to save a CPU version and a GPU version of your graph. Obviously you want to save a serving version of each, and say you want to save training checkpoints. You could use a CPU/GPU tag and a training/serving tag to manage all cases. The docs hint at it:

\n\n
\n

Each MetaGraphDef added to the SavedModel must be annotated with user-specified tags. The tags provide a means to identify the specific MetaGraphDef to load and restore, along with the shared set of variables and assets. These tags typically annotate a MetaGraphDef with its functionality (for example, serving or training), and optionally with hardware-specific aspects (for example, GPU).

\n
\n\n

c. Collision\nToo lazy to force a collision myself - I see two cases that would need to be addressed - so I went to the loader source code. Inside def load, you'll see:

\n\n
saved_model = _parse_saved_model(export_dir)\nfound_match = False\nfor meta_graph_def in saved_model.meta_graphs:\n  if set(meta_graph_def.meta_info_def.tags) == set(tags):\n    meta_graph_def_to_load = meta_graph_def\n    found_match = True\n    break\n\nif not found_match:\n  raise RuntimeError(\n      \"MetaGraphDef associated with tags \" + str(tags).strip(\"[]\") +\n      \" could not be found in SavedModel. To inspect available tag-sets in\"\n      \" the SavedModel, please use the SavedModel CLI: `saved_model_cli`\"\n  )\n
\n\n

It appears to me that it's looking for an exact match. E.g. say you have a metagraph with tags \"GPU\" and \"Serving\" and a metagraph with tag \"Serving\". If you load \"Serving\", you'll get the latter metagraph. On the other hand, say you have a metagraph \"GPU\" and \"Serving\" and a metagraph \"CPU\" and \"Serving\". If you try to load \"Serving\", you'll get the error. If you try to save two metagraphs with the exact same tags in the same folder, I expect you'll overwrite the first one. It doesn't look like the build code handles such a collision in any special way.

\n\n

4. SavedModel or tf.train.Saver:

\n\n

This confused me too. wicke's answer to Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef? cleared it up for me. I'll throw in my two cents:

\n\n

In the scope of local Python+TensorFlow, you can make tf.train.Saver do everything. But, it will cost you. Let me outline the save-a-trained-model-and-deploy use case. You'll need your saver object. It's easiest to set it up to save the complete graph (every variable). You probably don't want to save the .meta all the time since you're working with a static graph. You'll need to specify that in your training hook. You can read about that on cv-tricks. When your training finishes, you'll need to convert your checkpoint file to a pb file. That usually means clearing the current graph, restoring the checkpoint, freezing your variables to constants with tf.python.framework.graph_util, and writing it with tf.gfile.GFile. You can read about that on medium. After that, you want to deploy it in Python. You'll need the input and output Tensor names - the string names in the graph def. You can read about that on metaflow (actually a very good blog post for the tf.train.Saver method). Some op nodes will let you feed data into them easily. Some not so much. I usually gave up on finding an appropriate node and added a tf.reshape that didn't actually reshape anything to the graph def. That was my ad-hoc input node. Same for the output. And then finally, you can deploy your model, at least locally in Python.

\n\n

Or, you could use the answer I linked in point 1 to accomplish all this with the SavedModel API. Less headaches thanks to Tom's answer. You'll get more support and features in the future if it ever gets documented appropriately. Looks like it's easier to use command line serving (the medium link covers doing that with Saver - looks tough, good luck!). It's practically baked into the new Estimators. And according to the Docs,

\n\n
\n

SavedModel is a language-neutral, recoverable, hermetic serialization format.

\n
\n\n

Emphasis mine: Looks like you can get your trained models into the growing C++ API much easier.

\n\n

The way I see it, it's like the Datasets API. It's just easier than the old way!

\n\n

As for concrete examples of SavedModel or tf.train.Saver: if \"basically, when you want to save or restore your model\" isn't clear enough for you: the correct time to use it is any time it makes your life easier. To me, that looks like always. Especially if you're using Estimators, deploying in C++, or using command line serving.

\n\n

So that's my research on your question. Or four enumerated questions. Err, eight question marks. Hope this helps.

\n", "system": "" }, { "instruction": "What are the differences between all these cross-entropy losses in Keras and TensorFlow?", "input": "", "output": "

There is just one cross (Shannon) entropy defined as:

\n\n
H(P||Q) = - SUM_i P(X=i) log Q(X=i)\n
\n\n

In machine learning usage, P is the actual (ground truth) distribution, and Q is the predicted distribution. All the functions you listed are just helper functions which accepts different ways to represent P and Q.

\n\n
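The definition is short enough to write out directly; with a hard (one-hot) target P, it reduces to -log of the probability the model assigns to the true class:

```python
import numpy as np

def cross_entropy(p, q):
    # H(P||Q) = -SUM_i P(X=i) log Q(X=i)
    return -np.sum(p * np.log(q))

p = np.array([0.0, 1.0, 0.0])      # hard target: true class is index 1
q = np.array([0.2, 0.7, 0.1])      # predicted distribution

assert np.isclose(cross_entropy(p, q), -np.log(0.7))
```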

There are basically 3 main things to consider:

\n\n\n\n

Depending on these three aspects, different helper function should be used:

\n\n
                                  outcomes     what is in Q    targets in P   \n-------------------------------------------------------------------------------\nbinary CE                                2      probability         any\ncategorical CE                          >2      probability         soft\nsparse categorical CE                   >2      probability         hard\nsigmoid CE with logits                   2      score               any\nsoftmax CE with logits                  >2      score               soft\nsparse softmax CE with logits           >2      score               hard\n
\n\n

In the end one could just use \"categorical cross entropy\", as this is how it is mathematically defined, however since things like hard targets or binary classification are very popular - modern ML libraries do provide these additional helper functions to make things simpler. In particular \"stacking\" sigmoid and cross entropy might be numerically unstable, but if one knows these two operations are applied together - there is a numerically stable version of them combined (which is implemented in TF).

\n\n

It is important to notice that if you apply the wrong helper function the code will usually still execute, but the results will be wrong. For example, if you apply a softmax_* helper to binary classification with one output, your network will be considered to always produce \"True\" at the output.

\n\n

As a final note - this answer considers classification, it is slightly different when you consider multi label case (when a single point can have multiple labels), as then Ps do not sum to 1, and one should use sigmoid_cross_entropy_with_logits despite having multiple output units.

\n", "system": "" }, { "instruction": "How to apply Drop Out in Tensorflow to improve the accuracy of neural network?", "input": "", "output": "

In the graph, I'd suggest moving keep_prob = tf.placeholder(tf.float32) outside of the model function to make it global.

\n\n
with graph.as_default():\n    ...\n    x = tf.placeholder(\"float\", [None, n_input])\n    y = tf.placeholder(\"float\", [None, n_classes])\n    keep_prob = tf.placeholder(tf.float32)\n\n    def model(x, weights_hiden, weights_out, biases_hidden, biases_out, keep_prob):\n        # hidden layer with RELU activation\n        layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights_hiden), biases_hidden))\n        # apply DropOut to hidden layer\n        drop_out = tf.nn.dropout(layer_1, keep_prob)  # DROP-OUT here\n        # output layer with linear activation\n        out_layer = tf.matmul(drop_out, weights_out) + biases_out\n        return out_layer\n    ...\n
\n\n

When running the session, feed a desired keep_prob value during training time, and feed 1.0 to keep_prob during inference (validation and/or testing) time.

\n\n
# run the graph\nwith tf.Session(graph=graph) as sess:\n    tf.initialize_all_variables().run()\n    ...\n    for epoch in range(training_epochs):\n        ...\n        for i in range(total_batch):\n            batch_x = ...\n            batch_y = ...\n            # Run optimization op (backprop) and cost op (to get loss value)\n            # Feed a value < 1.0 for keep prob during training\n            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, keep_prob : 0.5})\n    ...\n    # Feed 1.0 for keep prob during testing\n    print(\"Test data accuracy:\", accuracy.eval({x: test_dataset, y: test_labels, keep_prob : 1.0}))\n    print(\"Valid data accuracy:\", accuracy.eval({x: valid_dataset, y: valid_labels, keep_prob : 1.0}))\n
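Why feeding keep_prob = 1.0 at test time works: tf.nn.dropout rescales the kept units by 1/keep_prob during training, so the expected activation is unchanged and no extra correction is needed at inference. A NumPy sketch of that inverted-dropout behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(100_000)
keep_prob = 0.5

mask = rng.random(x.shape) < keep_prob
dropped = np.where(mask, x / keep_prob, 0.0)   # keep and rescale, or zero out

# expected value is preserved, so keep_prob=1.0 (identity) is correct at test time
assert abs(dropped.mean() - x.mean()) < 0.02
```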
\n", "system": "" }, { "instruction": "Tensorflow get all variables in scope", "input": "", "output": "

I think you want tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope'). This will get all variables in a scope.

\n\n

To pass to an optimizer you do not want all variables you would just want the trainable variables. Those are also kept in a default collection, which is tf.GraphKeys.TRAINABLE_VARIABLES.

\n", "system": "" }, { "instruction": "How to prefetch data using a custom python function in tensorflow", "input": "", "output": "

This is a common use case, and most implementations use TensorFlow's queues to decouple the preprocessing code from the training code. There is a tutorial on how to use queues, but the main steps are as follows:

\n\n
    \n
  1. Define a queue, q, that will buffer the preprocessed data. TensorFlow supports the simple tf.FIFOQueue that produces elements in the order they were enqueued, and the more advanced tf.RandomShuffleQueue that produces elements in a random order. A queue element is a tuple of one or more tensors (which can have different types and shapes). All queues support single-element (enqueue, dequeue) and batch (enqueue_many, dequeue_many) operations, but to use the batch operations you must specify the shapes of each tensor in a queue element when constructing the queue.

  2. \n
  3. Build a subgraph that enqueues preprocessed elements into the queue. One way to do this would be to define some tf.placeholder() ops for tensors corresponding to a single input example, then pass them to q.enqueue(). (If your preprocessing produces a batch at once, you should use q.enqueue_many() instead.) You might also include TensorFlow ops in this subgraph.

  4. \n
  5. Build a subgraph that performs training. This will look like a regular TensorFlow graph, but will get its input by calling q.dequeue_many(BATCH_SIZE).

  6. \n
  7. Start your session.

  8. \n
  9. Create one or more threads that execute your preprocessing logic, then execute the enqueue op, feeding in the preprocessed data. You may find the tf.train.Coordinator and tf.train.QueueRunner utility classes useful for this.

  10. \n
  11. Run your training graph (optimizer, etc.) as normal.

  12. \n
\n\n

EDIT: Here's a simple load_and_enqueue() function and code fragment to get you started:

\n\n
import threading\nimport numpy\nimport tensorflow as tf\n\n# Features are length-100 vectors of floats\nfeature_input = tf.placeholder(tf.float32, shape=[100])\n# Labels are scalar integers.\nlabel_input = tf.placeholder(tf.int32, shape=[])\n\n# Alternatively, could do:\n# feature_batch_input = tf.placeholder(tf.float32, shape=[None, 100])\n# label_batch_input = tf.placeholder(tf.int32, shape=[None])\n\nq = tf.FIFOQueue(100, [tf.float32, tf.int32], shapes=[[100], []])\nenqueue_op = q.enqueue([feature_input, label_input])\n\n# For batch input, do:\n# enqueue_op = q.enqueue_many([feature_batch_input, label_batch_input])\n\nfeature_batch, label_batch = q.dequeue_many(BATCH_SIZE)\n# Build rest of model taking label_batch, feature_batch as input.\n# [...]\ntrain_op = ...\n\nsess = tf.Session()\n\ndef load_and_enqueue():\n  with open(...) as feature_file, open(...) as label_file:\n    while True:\n      feature_array = numpy.fromfile(feature_file, numpy.float32, 100)\n      if feature_array.size < 100:  # EOF; `not feature_array` would raise for non-scalar arrays\n        return\n      label_value = numpy.fromfile(label_file, numpy.int32, 1)[0]\n\n      sess.run(enqueue_op, feed_dict={feature_input: feature_array,\n                                      label_input: label_value})\n\n# Start a thread to enqueue data asynchronously, and hide I/O latency.\nt = threading.Thread(target=load_and_enqueue)\nt.start()\n\nfor _ in range(TRAINING_EPOCHS):\n  sess.run(train_op)\n
\n", "system": "" }, { "instruction": "Logging training and validation loss in tensorboard", "input": "", "output": "

There are several different ways you could achieve this, but you're on the right track with creating different tf.summary.scalar() nodes. Since you must explicitly call SummaryWriter.add_summary() each time you want to log a quantity to the event file, the simplest approach is probably to fetch the appropriate summary node each time you want to get the training or validation accuracy:

\n\n
accuracy = tf.reduce_mean(correct)\n\ntraining_summary = tf.summary.scalar(\"training_accuracy\", accuracy)\nvalidation_summary = tf.summary.scalar(\"validation_accuracy\", accuracy)\n\n\nwriter = tf.summary.FileWriter(...)\n\nfor step in xrange(NUM_STEPS):\n\n  # Perform a training step....\n\n  if step % LOG_PERIOD == 0:\n\n    # To log training accuracy.\n    train_acc, train_summ = sess.run(\n        [accuracy, training_summary], \n        feed_dict={images : training_set.images, labels : training_set.labels})\n    writer.add_summary(train_summ, step) \n\n    # To log validation accuracy.\n    valid_acc, valid_summ = sess.run(\n        [accuracy, validation_summary],\n        feed_dict={images : validation_set.images, labels : validation_set.labels})\n    writer.add_summary(valid_summ, step)\n
\n\n

Alternatively, you could create a single summary op whose tag is a tf.placeholder(tf.string, []) and feed the string \"training_accuracy\" or \"validation_accuracy\" as appropriate.

\n", "system": "" }, { "instruction": "How do I get the current value of a Variable?", "input": "", "output": "

The only way to get the value of the variable is by running it in a session. In the FAQ it is written that:

\n\n
\n

A Tensor object is a symbolic handle to the result of an operation,\n but does not actually hold the values of the operation's output.

\n
\n\n

So TF equivalent would be:

\n\n
import tensorflow as tf\n\nx = tf.Variable([1.0, 2.0])\n\ninit = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n    sess.run(init)\n    v = sess.run(x)\n    print(v)  # will show you your variable.\n
\n\n

The part with init = tf.global_variables_initializer() is important and should be done in order to initialize variables.

\n\n

Also, take a look at InteractiveSession if you work in IPython.

\n", "system": "" }, { "instruction": "Why do we name variables in Tensorflow?", "input": "", "output": "

The name parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places:

\n\n

When you want to save or restore your variables (you can save them to a binary file after the computation). From docs:

\n\n
\n

By default, it uses the value of the Variable.name property for each\n variable

\n
\n\n
matrix_1 = tf.Variable([[1, 2], [2, 3]], name=\"v1\")\nmatrix_2 = tf.Variable([[3, 4], [5, 6]], name=\"v2\")\ninit = tf.global_variables_initializer()\n\nsaver = tf.train.Saver()\n\nsess = tf.Session()\nsess.run(init)\nsave_path = saver.save(sess, \"/model.ckpt\")\nsess.close()\n
\n\n

Although your Python variables are named matrix_1 and matrix_2, they are saved as v1 and v2 in the file.

\n\n

Names are also used in TensorBoard to label the edges of the graph nicely. You can even group nodes by using the same scope:

\n\n
import tensorflow as tf\n\nwith tf.name_scope('hidden') as scope:\n  a = tf.constant(5, name='alpha')\n  W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')\n  b = tf.Variable(tf.zeros([1]), name='biases')\n
\n", "system": "" }, { "instruction": "tensorflow:Your input ran out of data", "input": "", "output": "

To make sure that you have "at least steps_per_epoch * epochs batches", set the steps_per_epoch to

\n
steps_per_epoch = len(X_train)//batch_size\n\nvalidation_steps = len(X_test)//batch_size # if you have validation data \n
\n

You can see the maximum number of batches that model.fit() can take by the progress bar when the training interrupts:

\n
5230/10000 [==============>...............] - ETA: 2:05:22 - loss: 0.0570\n
\n

Here, the maximum would be 5230 - 1

\n
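The arithmetic above can be sketched in plain Python (the dataset sizes here are made up for illustration):

```python
# Hypothetical dataset sizes, just to illustrate the arithmetic.
num_train = 50000
num_test = 10000
batch_size = 32  # the model.fit() default

steps_per_epoch = num_train // batch_size   # 1562 full batches
validation_steps = num_test // batch_size   # 312 full batches

# Integer division drops the last partial batch, so the generator
# never runs out of data within an epoch.
assert steps_per_epoch * batch_size <= num_train
assert validation_steps * batch_size <= num_test
print(steps_per_epoch, validation_steps)  # 1562 312
```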

Importantly, keep in mind that by default, batch_size is 32 in model.fit().

\n

If you're using a tf.data.Dataset, you can also add the repeat() method, but be careful: it will loop indefinitely (unless you specify a number).

\n", "system": "" }, { "instruction": "Install Tensorflow 2.0 in conda enviroment", "input": "", "output": "

TENSORFLOW 2.0 release version is out!

\n

As of 01/10/2019, this refers to the release version, not the beta.

\n

Using Anaconda

\n

Since 01/11/2019, Anaconda supports TensorFlow 2.0.0.

\n

Option 1: The easiest way is simply:

\n

conda install tensorflow or conda install tensorflow-gpu

\n

For GPU mode, Anaconda will take care of everything CUDA-related that you need for TensorFlow's GPU mode to work, so I strongly recommend this method.

\n

The only issue with this method is that Anaconda might not have the latest version of TensorFlow. For example, as of Feb 21, 2021, conda had version 2.3 whereas the pip version was 2.4. You can check the current version for GPU or CPU.

\n

Option 2 (virtual env): It is strongly recommended to use a dedicated environment in which to install tensorflow, using a command that first creates the environment and then installs tensorflow within it:

\n\n
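The command itself is missing above; assuming standard conda syntax, a minimal version would be:

```shell
# Create an environment and install tensorflow into it in one step
conda create -n <your_env_name> tensorflow
# or, for the GPU build:
conda create -n <your_env_name> tensorflow-gpu
```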

Replace <your_env_name> with a meaningful name like tf-2

\n

To use tensorflow run first conda activate <your_env_name>

\n

Using pip

\n

For pip, the official tensorflow instructions are quite complete.

\n

Just install tensorflow using pip like:

\n
# Current stable release for CPU-only\npip install tensorflow\n
\n

I still recommend installing tensorflow in a new environment first, so the 3 steps would be (with anaconda):

\n
conda create -n <your_env_name> pip\nconda activate <your_env_name>\npip install tensorflow\n
\n

For the GPU version it's harder with pip; I recommend this link, which explains the extra things you need to install (CUDA and others).

\n", "system": "" }, { "instruction": "What's the purpose of keras.backend.function()", "input": "", "output": "

I have the following understanding of this function keras.backend.function. I will explain it with the help of a code snippet from this.

\n

The part of code snippet is as follows

\n
final_conv_layer = get_output_layer(model, "conv5_3")\nget_output = K.function([model.layers[0].input], \n                        [final_conv_layer.output, model.layers[-1].output])\n[conv_outputs, predictions] = get_output([img])\n    \n
\n

In this code, the conv5_3 layer is extracted from the model (line 1). In K.function(), the first argument is the input to the model and the second is a set of two outputs - one for the convolution output and one for the softmax output at the last layer.

\n

As per the Keras/Tensorflow manual, this function runs the computation graph that we have created in the code, taking input from the first parameter and extracting the number of outputs as per the layers mentioned in the second parameter. Thus, conv_outputs are output of final_conv_layer and predictions are output of model.layers[-1], i.e. the last layer of the model.

\n", "system": "" }, { "instruction": "What is y_true and y_pred when creating a custom metric in Keras?", "input": "", "output": "

y_true and y_pred

\n

The tensor y_true is the true data (or target, ground truth) you pass to the fit method.
\nIt's a conversion of the numpy array y_train into a tensor.

\n

The tensor y_pred is the data predicted (calculated, output) by your model.

\n

Usually, both y_true and y_pred have exactly the same shape. A few of the losses, such as the sparse ones, may accept them with different shapes.

\n
\n

The shape of y_true

\n

It contains an entire batch. Its first dimension is always the batch size, and it must exist, even if the batch has only one element.

\n

Two very easy ways to find the shape of y_true are to check the shape of your ground-truth data (y_train) and to check your model's output shape (model.summary() shows it), since y_pred has the same shape as the last layer's output.

\n\n

But its first dimension will be the batch size.

\n

So, if your last layer outputs (None, 1), the shape of y_true is (batch, 1). If the last layer outputs (None, 200,200, 3), then y_true will be (batch, 200,200,3).

\n
\n

Custom metrics and loss functions

\n

Unfortunately, printing custom metrics will not reveal their content (unless you are using eager mode and have calculated every step of the model with data).
\nYou can see their shapes with print(K.int_shape(y_pred)), for instance.

\n

Remember that these libraries first "compile a graph" and only later "run it with data". When you define your loss, you're in the compile phase, and asking for data requires the model to run.

\n

But even if the result of your metric is multidimensional, Keras will automatically reduce it to a single scalar for that metric. (I'm not sure exactly which operation it uses, but very probably a K.mean() hidden under the table - it's useful to return the entire batch, so that Keras can apply other operations such as sample weights, for instance).

\n
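A NumPy sketch of that reduction (the mean step is my assumption about what Keras does, per the paragraph above; the batch values are made up):

```python
import numpy as np

# Pretend these came from one batch: shape (batch, 1), like a last layer
# that outputs (None, 1).
y_true = np.array([[1.0], [0.0], [1.0], [1.0]])
y_pred = np.array([[0.9], [0.2], [0.8], [0.4]])

# A custom metric usually returns one value per sample...
per_sample = np.abs(y_true - y_pred).reshape(-1)  # shape (4,)

# ...and Keras reduces it to a single scalar, very probably with a mean.
scalar_metric = per_sample.mean()
print(float(scalar_metric))  # 0.275
```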

Sources. After you get used to keras, this understanding gets natural from simply reading this part:

\n
\n

y_true: True labels. Theano/TensorFlow tensor.
\ny_pred: Predictions. Theano/TensorFlow tensor of the same shape as y_true.

\n
\n

True labels mean the true/target data. "Labels" is a badly chosen word here; it is only really "labels" in classification models.
\nPredictions mean the results of your model.

\n", "system": "" }, { "instruction": "How to average summaries over multiple batches?", "input": "", "output": "

Do the averaging of your measure in Python and create a new Summary object for each mean. Here is what I do:

\n\n
accuracies = []\n\n# Calculate your measure over as many batches as you need\nfor batch in validation_set:\n  accuracies.append(sess.run(accuracy_op))  # accuracy_op computes your metric for one batch\n\n# Take the mean of your measure\naccuracy = np.mean(accuracies)\n\n# Create a new Summary object with your measure\nsummary = tf.Summary()\nsummary.value.add(tag=\"%sAccuracy\" % prefix, simple_value=accuracy)\n\n# Add it to the Tensorboard summary writer\n# Make sure to specify a step parameter to get nice graphs over time\nsummary_writer.add_summary(summary, global_step)\n
\n", "system": "" }, { "instruction": "IOError: [Errno 28] No space left on device while installing TensorFlow", "input": "", "output": "

Usually, you can set the environment variable TMPDIR to use a directory other than /tmp or /var/tmp, and most programs will honour that.

\n\n

You can perhaps try,

\n\n
\n

$ export TMPDIR=$HOME/tmp

\n
\n\n

and then start your 'pip install'

\n", "system": "" }, { "instruction": "Keras uses way too much GPU memory when calling train_on_batch, fit, etc", "input": "", "output": "

It is a very common mistake to forget that the activations, gradients and optimizer moment-tracking variables also take VRAM, not just the parameters, increasing memory usage quite a bit. The backprop calculations themselves make the training phase take almost double the VRAM of forward/inference use of the neural net, and the Adam optimizer triples the space usage.

\n
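A back-of-the-envelope sketch of that accounting (ignoring activations, which depend on batch size and architecture; the parameter count is made up):

```python
# Rough VRAM accounting for a model with N float32 parameters.
n_params = 25_000_000          # e.g. a mid-sized CNN
bytes_per_float = 4

weights   = n_params * bytes_per_float  # needed for forward / inference
gradients = n_params * bytes_per_float  # backprop roughly doubles it
adam_m    = n_params * bytes_per_float  # Adam first-moment estimate
adam_v    = n_params * bytes_per_float  # Adam second-moment estimate

inference_mb = weights / 2**20
training_mb  = (weights + gradients + adam_m + adam_v) / 2**20
print(round(inference_mb), round(training_mb))  # 95 381
```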

So, in the beginning when the network is created, only the parameters are allocated. However, when the training starts, the model activations, backprop computations and the optimizer's tracking variables get allocated, increasing memory use by a large factor.

\n

To allow the training of larger models, people:

\n\n

Tools to train very large models:

\n\n", "system": "" }, { "instruction": "Implementing contrastive loss and triplet loss in Tensorflow", "input": "", "output": "

Update (2018/03/19): I wrote a blog post detailing how to implement triplet loss in TensorFlow.

\n\n
\n\n

You need to implement the contrastive loss or the triplet loss yourself, but once you know the pairs or triplets this is quite easy.

\n\n
\n\n

Contrastive Loss

\n\n

Suppose you have as input the pairs of data and their label (positive or negative, i.e. same class or different class). For instance you have images as input of size 28x28x1:

\n\n
left = tf.placeholder(tf.float32, [None, 28, 28, 1])\nright = tf.placeholder(tf.float32, [None, 28, 28, 1])\nlabel = tf.placeholder(tf.float32, [None])  # 0 if same, 1 if different\nmargin = 0.2\n\nleft_output = model(left)  # shape [None, 128]\nright_output = model(right)  # shape [None, 128]\n\nd = tf.reduce_sum(tf.square(left_output - right_output), 1)\nd_sqrt = tf.sqrt(d)\n\nloss = label * tf.square(tf.maximum(0., margin - d_sqrt)) + (1 - label) * d\n\nloss = 0.5 * tf.reduce_mean(loss)\n
\n\n
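The same formula can be checked with NumPy on tiny embeddings (the vectors below are made up for illustration):

```python
import numpy as np

margin = 0.2
left = np.array([[0.0, 1.0], [1.0, 0.0]])   # two 2-D embeddings
right = np.array([[0.0, 1.0], [0.0, 1.0]])
label = np.array([0.0, 1.0])                # 0 if same, 1 if different

d = np.sum((left - right) ** 2, axis=1)     # squared distances: [0., 2.]
d_sqrt = np.sqrt(d)

# Same pairs are pulled together (the d term); different pairs are pushed
# apart until they are past the margin (the hinge term).
loss = label * np.maximum(0.0, margin - d_sqrt) ** 2 + (1 - label) * d
loss = 0.5 * loss.mean()
print(loss)  # 0.0: the same pair coincides, the different pair is beyond the margin
```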
\n\n

Triplet Loss

\n\n

Same as with contrastive loss, but with triplets (anchor, positive, negative). You don't need labels here.

\n\n
anchor_output = ...  # shape [None, 128]\npositive_output = ...  # shape [None, 128]\nnegative_output = ...  # shape [None, 128]\n\nd_pos = tf.reduce_sum(tf.square(anchor_output - positive_output), 1)\nd_neg = tf.reduce_sum(tf.square(anchor_output - negative_output), 1)\n\nloss = tf.maximum(0., margin + d_pos - d_neg)\nloss = tf.reduce_mean(loss)\n
\n\n
\n\n

The real trouble when implementing triplet loss or contrastive loss in TensorFlow is how to sample the triplets or pairs. I will focus on generating triplets because it is harder than generating pairs.

\n\n

The easiest way is to generate them outside of the Tensorflow graph, i.e. in python and feed them to the network through the placeholders. Basically you select images 3 at a time, with the first two from the same class and the third from another class. We then perform a feedforward on these triplets, and compute the triplet loss.

\n\n
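A minimal sketch of that offline sampling in plain NumPy (the class labels and the random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 2, 2])  # one class label per image index

def sample_triplet(labels, rng):
    """Pick (anchor, positive, negative) indices: the first two share a
    class, the third comes from a different class."""
    anchor = rng.integers(len(labels))
    same = np.flatnonzero(labels == labels[anchor])
    positive = rng.choice(same[same != anchor])
    other = np.flatnonzero(labels != labels[anchor])
    negative = rng.choice(other)
    return anchor, positive, negative

a, p, n = sample_triplet(labels, rng)
assert labels[a] == labels[p] and labels[a] != labels[n]
```

The indices would then select the actual images fed through the three placeholders.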

The issue here is that generating triplets is complicated. We want them to be valid triplets, triplets with a positive loss (otherwise the loss is 0 and the network doesn't learn).
\nTo know whether a triplet is good or not you need to compute its loss, so you already make one feedforward through the network...

\n\n

Clearly, implementing triplet loss in Tensorflow is hard, and there are ways to make it more efficient than sampling in python, but explaining them would require a whole blog post!

\n", "system": "" }, { "instruction": "How to understand the term `tensor` in TensorFlow?", "input": "", "output": "

TensorFlow doesn't have first-class Tensor objects, meaning that there is no notion of a Tensor in the underlying graph that's executed by the runtime. Instead, the graph consists of op nodes connected to each other, representing operations. An operation allocates memory for its outputs, which are available on endpoints :0, :1, etc., and you can think of each of these endpoints as a Tensor. If you have a tensor corresponding to nodename:0 you can fetch its value as sess.run(tensor) or sess.run('nodename:0'). Execution granularity happens at the operation level, so the run method will execute the op, which will compute all of the endpoints, not just the :0 endpoint. It's possible to have an op node with no outputs (like tf.group), in which case there are no tensors associated with it. It is not possible to have tensors without an underlying op node.

\n\n

You can examine what happens in the underlying graph by doing something like this:

\n\n
tf.reset_default_graph()\nvalue = tf.constant(1)\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

So with tf.constant you get a single operation node, and you can fetch it using sess.run(\"Const:0\") or sess.run(value)

\n\n

Similarly, value=tf.placeholder(tf.int32) creates a regular node with name Placeholder, and you could feed it as feed_dict={\"Placeholder:0\":2} or feed_dict={value:2}. You can not feed and fetch a placeholder in the same session.run call, but you can see the result by attaching a tf.identity node on top and fetching that.

\n\n

For variable

\n\n
tf.reset_default_graph()\nvalue = tf.Variable(tf.ones_initializer()(()))\nvalue2 = value+3\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

You'll see that it creates two nodes Variable and Variable/read, the :0 endpoint is a valid value to fetch on both of these nodes. However Variable:0 has a special ref type meaning it can be used as an input to mutating operations. The result of Python call tf.Variable is a Python Variable object and there's some Python magic to substitute Variable/read:0 or Variable:0 depending on whether mutation is necessary. Since most ops have only 1 endpoint, :0 is dropped. Another example is Queue -- close() method will create a new Close op node which connects to Queue op. To summarize -- operations on python objects like Variable and Queue map to different underlying TensorFlow op nodes depending on usage.

\n\n

For ops like tf.split or tf.nn.top_k which create nodes with multiple endpoints, Python's session.run call automatically wraps output in tuple or collections.namedtuple of Tensor objects which can be fetched individually.

\n", "system": "" }, { "instruction": "Tensorflow: restoring a graph and model then running evaluation on a single image", "input": "", "output": "

There are two methods to feed a single new image to the cifar10 model. The first method is a cleaner approach but requires modification in the main file, hence will require retraining. The second method is applicable when a user does not want to modify the model files and instead wants to use the existing check-point/meta-graph files.

\n\n

The code for the first approach is as follows:

\n\n
import tensorflow as tf\nimport numpy as np\nimport cv2\n\nsess = tf.Session('', tf.Graph())\nwith sess.graph.as_default():\n    # Read meta graph and checkpoint to restore tf session\n    saver = tf.train.import_meta_graph(\"/tmp/cifar10_train/model.ckpt-200.meta\")\n    saver.restore(sess, \"/tmp/cifar10_train/model.ckpt-200\")\n\n    # Read a single image from a file.\n    img = cv2.imread('tmp.png')\n    img = np.expand_dims(img, axis=0)\n\n    # Start the queue runners. If they are not started the program will hang\n    # see e.g. https://www.tensorflow.org/programmers_guide/reading_data\n    coord = tf.train.Coordinator()\n    threads = []\n    for qr in sess.graph.get_collection(tf.GraphKeys.QUEUE_RUNNERS):\n        threads.extend(qr.create_threads(sess, coord=coord, daemon=True,\n                                         start=True))\n\n    # In the graph created above, feed \"is_training\" and \"imgs\" placeholders.\n    # Feeding them will disconnect the path from queue runners to the graph \n    # and enable a path from the placeholder instead. The \"img\" placeholder will be \n    # fed with the image that was read above.\n    logits = sess.run('softmax_linear/softmax_linear:0', \n                     feed_dict={'is_training:0': False, 'imgs:0': img})\n\n    #Print classifiction results.\n    print(logits) \n
\n\n

The script requires that a user creates two placeholders and a conditional execution statement for it to work.

\n\n

The placeholders and conditional execution statement are added in cifar10_train.py as shown below:

\n\n
def train():\n    \"\"\"Train CIFAR-10 for a number of steps.\"\"\"\n    with tf.Graph().as_default():\n        global_step = tf.contrib.framework.get_or_create_global_step()\n\n        with tf.device('/cpu:0'):\n            images, labels = cifar10.distorted_inputs()\n\n        is_training = tf.placeholder(dtype=bool, shape=(), name='is_training')\n        imgs = tf.placeholder(tf.float32, (1, 32, 32, 3), name='imgs')\n        images = tf.cond(is_training, lambda: images, lambda: imgs)\n        logits = cifar10.inference(images)\n
\n\n

The inputs in the cifar10 model are connected to a queue runner object, which is a multistage queue that can prefetch data from files in parallel. See a nice animation of a queue runner here

\n\n

While queue runners are efficient in prefetching a large dataset for training, they are an overkill for inference/testing where only a single file needs to be classified, and they are also a bit more involved to modify/maintain.\nFor that reason, I have added a placeholder \"is_training\", which is set to True during training and False during inference, as shown below:

\n\n
 import numpy as np\n tmp_img = np.ndarray(shape=(1,32,32,3), dtype=float)\n with tf.train.MonitoredTrainingSession(\n     checkpoint_dir=FLAGS.train_dir,\n     hooks=[tf.train.StopAtStepHook(last_step=FLAGS.max_steps),\n            tf.train.NanTensorHook(loss),\n            _LoggerHook()],\n     config=tf.ConfigProto(\n         log_device_placement=FLAGS.log_device_placement)) as mon_sess:\n   while not mon_sess.should_stop():\n     mon_sess.run(train_op, feed_dict={is_training: True, imgs: tmp_img})\n
\n\n

Another placeholder \"imgs\" holds a tensor of shape (1,32,32,3) for the image that will be fed during inference -- the first dimension is the batch size which is one in this case. I have modified cifar model to accept 32x32 images instead of 24x24 as the original cifar10 images are 32x32.

\n\n

Finally, the conditional statement feeds either the placeholder or the queue runner output to the graph. The \"is_training\" placeholder is set to False during inference, and the \"imgs\" placeholder is fed a numpy array -- the numpy array is reshaped from a 3- to a 4-dimensional vector to conform to the input tensor of the inference function in the model.

\n\n

That is all there is to it. Any model can be inferred with single/user-defined test data as shown in the script above. Essentially: read the graph, feed data to the graph nodes and run the graph to get the final output.

\n\n

Now the second method. The other approach is to hack cifar10.py and cifar10_eval.py to change batch size to one and replace the data coming from the queue runner with the one read from a file.

\n\n

Set batch size to 1:

\n\n
tf.app.flags.DEFINE_integer('batch_size', 1,\n                             \"\"\"Number of images to process in a batch.\"\"\")\n
\n\n

Call inference with an image file read.

\n\n
def evaluate():\n    with tf.Graph().as_default() as g:\n        # Get images and labels for CIFAR-10.\n        eval_data = FLAGS.eval_data == 'test'\n        images, labels = cifar10.inputs(eval_data=eval_data)\n\n        import cv2\n        img = cv2.imread('tmp.png')\n        img = np.expand_dims(img, axis=0)\n        img = tf.cast(img, tf.float32)\n\n        logits = cifar10.inference(img)\n
\n\n

Then pass logits to eval_once and modify eval_once to evaluate logits:

\n\n
def eval_once(saver, summary_writer, top_k_op, logits, summary_op): \n    ...\n    while step < num_iter and not coord.should_stop():\n        predictions = sess.run([top_k_op])\n        print(sess.run(logits))\n
\n\n

There is no separate script to run this method of inference, just run cifar10_eval.py which will now read a file from the user defined location with a batch size of one.

\n", "system": "" }, { "instruction": "How to create only one copy of graph in tensorboard events file with custom tf.Estimator?", "input": "", "output": "

You need to use the TensorBoard tool for visualizing the contents of your summary logs.

\n

The events file can be read programmatically.\nThe example from this link shows how to read events written to an events file.

\n
# This example supposes that the events file contains summaries with a\n# summary value tag 'loss'.  These could have been added by calling\n# `add_summary()`, passing the output of a scalar summary op created\n# with: `tf.compat.v1.summary.scalar('loss', loss_tensor)`.\nfor e in tf.compat.v1.train.summary_iterator(path_to_events_file):\n    for v in e.summary.value:\n        if v.tag == 'loss':\n            print(v.simple_value)\n
\n", "system": "" }, { "instruction": "Numpy is installed but still getting error", "input": "", "output": "

Run:

\n
pip3 uninstall numpy\n
\n

Repeat this until you receive a message stating that no numpy files are available to uninstall. Then do a fresh install of numpy with

\n
pip install numpy\n
\n

And that will fix the issue.

\n", "system": "" }, { "instruction": "Is there an easy way to get something like Keras model.summary in Tensorflow?", "input": "", "output": "

Looks like you can use Slim

\n\n

Example:

\n\n
import numpy as np\n\nfrom tensorflow.python.layers import base\nimport tensorflow as tf\nimport tensorflow.contrib.slim as slim\n\nx = np.zeros((1,4,4,3))\nx_tf = tf.convert_to_tensor(x, np.float32)\nz_tf = tf.layers.conv2d(x_tf, filters=32, kernel_size=(3,3))\n\ndef model_summary():\n    model_vars = tf.trainable_variables()\n    slim.model_analyzer.analyze_vars(model_vars, print_info=True)\n\nmodel_summary()\n
\n\n

Output:

\n\n
---------\nVariables: name (type shape) [size]\n---------\nconv2d/kernel:0 (float32_ref 3x3x3x32) [864, bytes: 3456]\nconv2d/bias:0 (float32_ref 32) [32, bytes: 128]\nTotal size of variables: 896\nTotal bytes of variables: 3584\n
\n\n

Also here is an example of custom function to print model summary:\nhttps://github.com/NVlabs/stylegan/blob/f3a044621e2ab802d40940c16cc86042ae87e100/dnnlib/tflib/network.py#L507

\n\n

If you already have .pb tensorflow model you can use: inspect_pb.py to print model info or use tensorflow summarize_graph tool with --print_structure flag, also it's nice that it can detect input and output names.

\n", "system": "" }, { "instruction": "Adam optimizer goes haywire after 200k batches, training loss grows", "input": "", "output": "

Yes. This is a known problem of Adam.

\n\n

The equations for Adam are

\n\n
t <- t + 1\nlr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)\n\nm_t <- beta1 * m_{t-1} + (1 - beta1) * g\nv_t <- beta2 * v_{t-1} + (1 - beta2) * g * g\nvariable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)\n
\n\n

where m is an exponential moving average of the mean gradient and v is an exponential moving average of the squares of the gradients. The problem is that when you have been training for a long time and are close to the optimum, v can become very small. If all of a sudden the gradients start increasing again, they will be divided by a very small number and explode.

\n\n

By default beta1=0.9 and beta2=0.999. So m changes much more quickly than v. So m can start being big again while v is still small and cannot catch up.

\n\n

To remedy this problem you can increase epsilon, which is 1e-8 by default, thus stopping the problem of dividing by almost 0.\nDepending on your network, a value of epsilon of 0.1, 0.01, or 0.001 might be good.

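The blow-up can be simulated with the update equations above in plain NumPy (the gradient sequence is artificial: a long stretch of near-zero gradients near the optimum, then a sudden spike):

```python
import numpy as np

beta1, beta2 = 0.9, 0.999
m = v = 0.0
grads = [1e-6] * 5000 + [0.1]  # tiny gradients, then one spike

for g in grads:
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g

def step_size(m, v, eps):
    # Magnitude of the parameter update (ignoring lr_t bias correction).
    return m / (np.sqrt(v) + eps)

# Because v is still tiny after the spike, the raw step with the default
# epsilon is more than an order of magnitude larger than with epsilon=0.1.
print(step_size(m, v, 1e-8) > 10 * step_size(m, v, 0.1))  # True
```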
\n", "system": "" }, { "instruction": "TensorFlow 'module' object has no attribute 'global_variables_initializer'", "input": "", "output": "

In older versions, it was called tf.initialize_all_variables.

\n", "system": "" }, { "instruction": "How can I make tensorflow run on a GPU with capability 2.x?", "input": "", "output": "

Recent GPU versions of tensorflow require compute capability 3.5 or higher (and use cuDNN to access the GPU).

\n\n

cuDNN also requires a GPU of cc3.0 or higher:

\n\n
\n

cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.

\n
\n\n\n\n

Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN.

\n\n

Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN.

\n\n

Note that there has never been either a version of cuDNN or any version of TF that officially supported NVIDIA GPUs less than cc3.0. The initial version of cuDNN started out by requiring cc3.0 GPUs, and the initial version of TF started out by requiring cc3.0 GPUs.

\n", "system": "" }, { "instruction": "How do I find the variable names and values that are saved in a checkpoint?", "input": "", "output": "

Example usage:

\n\n
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file\nimport os\ncheckpoint_path = os.path.join(model_dir, \"model.ckpt\")\n\n# List ALL tensors example output: v0/Adam (DT_FLOAT) [3,3,1,80]\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='')\n\n# List contents of v0 tensor.\n# Example output: tensor_name:  v0 [[[[  9.27958265e-02   7.40226209e-02   4.52989563e-02   3.15700471e-02\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v0')\n\n# List contents of v1 tensor.\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v1')\n
\n\n

Update: all_tensors argument was added to print_tensors_in_checkpoint_file since Tensorflow 0.12.0-rc0 so you may need to add all_tensors=False or all_tensors=True if required.

\n\n

Alternative method:

\n\n
from tensorflow.python import pywrap_tensorflow\nimport os\n\ncheckpoint_path = os.path.join(model_dir, \"model.ckpt\")\nreader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)\nvar_to_shape_map = reader.get_variable_to_shape_map()\n\nfor key in var_to_shape_map:\n    print(\"tensor_name: \", key)\n    print(reader.get_tensor(key)) # Remove this is you want to print only variable names\n
\n\n

Hope it helps.

\n", "system": "" }, { "instruction": "Getting the current learning rate from a tf.train.AdamOptimizer", "input": "", "output": "

All the optimizers have a private variable that holds the value of a learning rate.

\n\n

In adagrad and gradient descent it is called self._learning_rate. In adam it is self._lr.

\n\n

So you will just need to print sess.run(optimizer._lr) to get this value. sess.run() is needed because it is a tensor.

\n", "system": "" }, { "instruction": "TensorFlow: getting variable by name", "input": "", "output": "

The get_variable() function creates a new variable or returns one created earlier by get_variable(). It won't return a variable created using tf.Variable(). Here's a quick example:

\n\n
>>> with tf.variable_scope(\"foo\"):\n...   bar1 = tf.get_variable(\"bar\", (2,3)) # create\n... \n>>> with tf.variable_scope(\"foo\", reuse=True):\n...   bar2 = tf.get_variable(\"bar\")  # reuse\n... \n\n>>> with tf.variable_scope(\"\", reuse=True): # root variable scope\n...   bar3 = tf.get_variable(\"foo/bar\") # reuse (equivalent to the above)\n... \n>>> (bar1 is bar2) and (bar2 is bar3)\nTrue\n
\n\n

If you did not create the variable using tf.get_variable(), you have a couple options. First, you can use tf.global_variables() (as @mrry suggests):

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> bar2 = [var for var in tf.global_variables() if var.op.name==\"bar\"][0]\n>>> bar1 is bar2\nTrue\n
\n\n

Or you can use tf.get_collection() like so:

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> bar2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"bar\")[0]\n>>> bar1 is bar2\nTrue\n
\n\n

Edit

\n\n

You can also use get_tensor_by_name():

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> graph = tf.get_default_graph()\n>>> bar2 = graph.get_tensor_by_name(\"bar:0\")\n>>> bar1 is bar2\nFalse  # bar2 is a Tensor obtained through convert_to_tensor on bar1,\n       # but bar1 and bar2 are equal in value.\n
\n\n

Recall that a tensor is the output of an operation. It has the same name as the operation, plus :0. If the operation has multiple outputs, they have the same name as the operation plus :0, :1, :2, and so on.

\n", "system": "" }, { "instruction": "import input_data MNIST tensorflow not working", "input": "", "output": "

So let's assume that you are in the directory: /somePath/tensorflow/tutorial (and this is your working directory).

\n

All you need to do is to download the input_data.py file and place it there. Let's assume that the file from which you invoke this code:

\n
import input_data\nmnist = input_data.read_data_sets("MNIST_data/", one_hot=True)\n...\n
\n

is main.py and it is also in the same directory.

\n

Once this is done, you can just start running main.py which will start downloading the files and will put them in the MNIST_data folder (once they are there the script will not be downloading them next time).

\n", "system": "" }, { "instruction": "Error in python after 'import tensorflow': TypeError: __init__() got an unexpected keyword argument 'syntax'", "input": "", "output": "

Several users have reported issues that arise when an older version of protobuf is installed.

\n\n

TensorFlow requires (and uses a copy of) protobuf-3.0.0a4. However it seems to be conflicting with your installed version (perhaps due to how the PYTHONPATH environment variable is configured on your system?). Can you try uninstalling the python-protobuf package if it is installed, and then seeing if TensorFlow starts correctly?

\n\n

Some users have had success when using a virtualenv. For instructions on installing TensorFlow into a virtualenv, see the getting started guide.

\n", "system": "" }, { "instruction": "ValueError: Layer sequential_20 expects 1 inputs, but it received 2 input tensors", "input": "", "output": "

It helped me when I changed:
\nvalidation_data=[X_val, y_val] into validation_data=(X_val, y_val)
\nThe likely reason: Keras interprets a list as multiple input tensors (one per model input), whereas a tuple is unpacked as the (x, y) pair, so a two-element list looks like two inputs instead of data plus labels.

\n", "system": "" }, { "instruction": "Confusion about keras Model: __call__ vs. call vs. predict methods", "input": "", "output": "

Adding to @Dmitry Kabanov: they are similar, yet they aren't exactly the same thing. If you care about performance, you need to look into the critical differences between them.

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
model.predictmodel(x)
loops over the data in batches, which means that predict() calls can scale to very large arrays.happens in-memory and doesn't scale
not differentiabledifferentiable
use this if you just need the output valueuse this when you need to retrieve the gradients
Output is NumPy valueOutput is a Tensor
use this if you have batches of data to be predicteduse this for small dataset
relatively slower for small datarelatively faster for small data
\n
\n

Please check more detailed explanation in Keras FAQs

\n", "system": "" }, { "instruction": "Optimizing subgraph of large graph - slower than optimizing subgraph by itself", "input": "", "output": "

I am guessing that this is a generative adversarial network, given the relation between the losses and the parameters. It seems that the first group of parameters make up the generative model and the second group make up the detector model.

\n

If my guesses are correct, then that would mean that the second model is using the output of the first model as its input. Admittedly, I am much more informed about PyTorch than TF. There is a comment which I believe is saying that the first model could be included in the second graph. I also think this is true. I would implement something similar to the following. The most important part is just creating a copy of the generated_tensor with no graph:

\n
# An arbitrary label\nlabel = torch.tensor(1.0)\n\n# Treat GenerativeModel as the model with the first list of Variables/parameters\ngenerated_tensor = GenerativeModel(random_input_tensor)\n# Treat DetectorModel as the model with the second list of Variables/parameters\ndetector_prediction = DetectorModel(generated_tensor)\n\n# detach() gives a copy of the generated tensor with no graph attached\ngenerated_tensor_copy = generated_tensor.detach()\ndetector_prediction_copy = DetectorModel(generated_tensor_copy)\n\n# This is for optimizing the first model; it has the second model in its graph,\n# which is necessary.\nloss1 = loss_func1(detector_prediction, label)\n# This is for optimizing the second model. It will not have the first model in its graph.\nloss2 = loss_func2(detector_prediction_copy, label)\n
\n

I hope this is helpful. If anyone knows how to do this in TF, that would be very valuable.

\n", "system": "" }, { "instruction": "What is difference frozen_inference_graph.pb and saved_model.pb?", "input": "", "output": "

frozen_inference_graph.pb is a frozen graph that cannot be trained anymore. It defines the graphdef and is actually a serialized graph that can be loaded with this code:

\n
def load_graph(frozen_graph_filename):\n    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n        return graph_def\ntf.import_graph_def(load_graph("frozen_inference_graph.pb"))\n
\n

The saved model is a model generated by tf.saved_model.builder and has to be imported into a session. This file contains the full graph with all training weights (just like the frozen graph), but here it can still be trained upon; it is not serialized the same way and needs to be loaded with this snippet. The [] are tag constants, which can be read by the saved_model_cli. This model is also often served to predict on, for example on Google ML Engine:

\n
with tf.Session() as sess:\n    tf.saved_model.loader.load(sess, [], "foldername to saved_model.pb, only folder")\n
\n

model.ckpt files are checkpoints generated during training. They are used to resume training or to have a backup when something goes wrong after a long training run. If you have a saved model and a frozen graph, then you can ignore these.

\n

.pbtxt files are basically the same as the previously discussed models, but human-readable instead of binary. These can be ignored as well.

\n

To answer your conversion question:\nSaved models can be transformed into a frozen graph and vice versa, although a saved model extracted from a frozen graph is also not trainable, but it is stored in the saved model format. Checkpoints can be read in and loaded into a session, and from there you can build a saved model.

\n

Hope I helped, any questions, ask away!

\n

ADDITION:

\n

How to freeze a graph, starting from a saved model folder structure.\nThis post is old, so the method I used before might not work anymore; it will most likely still work with Tensorflow 1.x.

\n

Start off by downloading this file from the tensorflow library, and then this code snippet should do the trick:

\n
    import freeze_graph # the file you just downloaded\n    from tensorflow.python.saved_model import tag_constants # might be unnecessary\n\n    freeze_graph.freeze_graph(\n        input_graph=None,\n        input_saver=None,\n        input_binary=None,\n        input_checkpoint=None,\n        output_node_names="dense_output/BiasAdd",\n        restore_op_name=None,\n        filename_tensor_name=None,\n        output_graph=os.path.join(path, "frozen_graph.pb"),\n        clear_devices=None,\n        initializer_nodes=None,\n        input_saved_model_dir=path,\n        saved_model_tags=tag_constants.SERVING\n    )\n
\n

output_node_names = node name of the final operation; if you end on a dense layer, it will be something like dense_layer_name/BiasAdd

\n

output_graph = output graph name

\n

input_saved_model_dir = root folder of the saved model

\n

saved_model_tags = saved model tags, in your case this can be None, I did however use a tag.

\n

ANOTHER ADDITION:

\n

The code to load models is already provided above. To actually predict you need a session, for a saved model this session is already created, for a frozen model, it's not.

\n

saved model:

\n
with tf.Session() as sess:\n    tf.saved_model.loader.load(sess, [], "foldername to saved_model.pb, only folder")\n    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})\n
\n

Frozen model:

\n
tf.import_graph_def(load_graph("frozen_inference_graph.pb"))\nwith tf.Session() as sess:\n    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})\n
\n

To further understand what your input and output layers are, you need to check them out with tensorboard, simply add the following line of code into your session:

\n
tf.summary.FileWriter("path/to/folder/to/save/logs", sess.graph)\n
\n

This line will create a log file that you can open with the cli/powershell, to see how to run tensorboard, check out this previously posted question

\n", "system": "" }, { "instruction": "Multivariate LSTM with missing values", "input": "", "output": "

As suggested by Fran\u00e7ois Chollet (creator of Keras) in his book, one way to handle missing values is to replace them with zero:

\n
\n

In general, with neural networks, it\u2019s safe to input missing values as\n0, with the condition that 0 isn\u2019t already a meaningful value. The\nnetwork will learn from exposure to the data that the value 0 means\nmissing data and will start ignoring the value. Note that if you\u2019re\nexpecting missing values in the test data, but the network was trained\non data without any missing values, the network won\u2019t have learned to\nignore missing values! In this situation, you should artificially\ngenerate training samples with missing entries: copy some training\nsamples several times, and drop some of the features that you expect\nare likely to be missing in the test data.

\n
\n

So you can assign zero to NaN elements, considering that zero is not used in your data (you can normalize the data to a range, say [1,2], and then assign zero to NaN elements; or alternatively, you can normalize all the values to be in range [0,1] and then use -1 instead of zero to replace NaN elements.)

\n
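As a concrete sketch of that recipe (plain Python, no Keras; the function name and values are illustrative): normalize the observed values into [1, 2] so that 0 is free to mean "missing", then replace NaN entries with 0:

```python
import math

def fill_missing(values):
    """Normalize observed values to [1, 2], then mark NaNs with 0."""
    observed = [v for v in values if not math.isnan(v)]
    lo, hi = min(observed), max(observed)
    span = hi - lo or 1.0  # avoid division by zero for a constant series
    return [0.0 if math.isnan(v) else 1.0 + (v - lo) / span for v in values]

series = [10.0, float("nan"), 20.0, 15.0]
print(fill_missing(series))  # [1.0, 0.0, 2.0, 1.5]
```

The [0, 1]-plus-(-1) variant mentioned above works the same way, just with a different sentinel outside the normalized range.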

Another alternative way is to use a Masking layer in Keras. You give it a mask value, say 0, and it would drop any timestep (i.e. row) where all its features are equal to the mask value. However, all the following layers should support masking and you also need to pre-process your data and assign the mask value to all the features of a timestep which includes one or more NaN features. Example from Keras doc:

\n
\n

Consider a Numpy data array x of shape (samples, timesteps, features),\nto be fed to an LSTM layer. You want to mask timesteps #3\nand #5 because you lack data for these timesteps. You can:

\n\n\n
model = Sequential()\nmodel.add(Masking(mask_value=0., input_shape=(timesteps, features)))\nmodel.add(LSTM(32))\n
\n
\n
\n

Update (May 2021): According to an updated suggestion from Fran\u00e7ois Chollet, it might be better to use a more meaningful or informative value (instead of zero) for masking missing values. This value could be computed (e.g. mean, median, etc.) or predicted from the data itself.

\n", "system": "" }, { "instruction": "Illegal instruction (core dumped) after running import tensorflow", "input": "", "output": "

I would use an older version. It looks like your CPU does not support AVX instructions.

\n\n

Quoting from their Release Page

\n\n
Breaking Changes\nPrebuilt binaries are now built against CUDA 9.0 and cuDNN 7.\nPrebuilt binaries will use AVX instructions. This may break TF on older CPUs.\n
\n\n

You have at least two options:

\n\n
    \n
  1. Use tensorflow 1.5 or older
  2. Build from source
\n\n

Regarding your concern about differences: you will miss out on new features, but most basic features and the documentation are not that different.

\n", "system": "" }, { "instruction": "Plot multiple graphs in one plot using Tensorboard", "input": "", "output": "

If you are using the SummaryWriter from tensorboardX or pytorch 1.2, you have a method called add_scalars:

\n\n

Call it like this:

\n\n
my_summary_writer.add_scalars(f'loss/check_info', {\n    'score': score[iteration],\n    'score_nf': score_nf[iteration],\n}, iteration)\n
\n\n

And it will show up like this:

\n\n

\"tensorboard

\n\n
\n\n

Be careful: add_scalars will mess with the organisation of your runs. It will add multiple entries to this list (and thus create confusion):

\n\n

\"tensorboard

\n\n

I would recommend that instead you just do:

\n\n
my_summary_writer.add_scalar(f'check_info/score',    score[iter],    iter)\nmy_summary_writer.add_scalar(f'check_info/score_nf', score_nf[iter], iter)\n
\n", "system": "" }, { "instruction": "keras tensorboard: plot train and validation scalars in a same figure", "input": "", "output": "

To handle the validation logs with a separate writer, you can write a custom callback that wraps around the original TensorBoard methods.

\n\n\n\n
import os\nimport tensorflow as tf\nfrom keras.callbacks import TensorBoard\n\nclass TrainValTensorBoard(TensorBoard):\n    def __init__(self, log_dir='./logs', **kwargs):\n        # Make the original `TensorBoard` log to a subdirectory 'training'\n        training_log_dir = os.path.join(log_dir, 'training')\n        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)\n\n        # Log the validation metrics to a separate subdirectory\n        self.val_log_dir = os.path.join(log_dir, 'validation')\n\n    def set_model(self, model):\n        # Setup writer for validation metrics\n        self.val_writer = tf.summary.FileWriter(self.val_log_dir)\n        super(TrainValTensorBoard, self).set_model(model)\n\n    def on_epoch_end(self, epoch, logs=None):\n        # Pop the validation logs and handle them separately with\n        # `self.val_writer`. Also rename the keys so that they can\n        # be plotted on the same figure with the training metrics\n        logs = logs or {}\n        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}\n        for name, value in val_logs.items():\n            summary = tf.Summary()\n            summary_value = summary.value.add()\n            summary_value.simple_value = value.item()\n            summary_value.tag = name\n            self.val_writer.add_summary(summary, epoch)\n        self.val_writer.flush()\n\n        # Pass the remaining logs to `TensorBoard.on_epoch_end`\n        logs = {k: v for k, v in logs.items() if not k.startswith('val_')}\n        super(TrainValTensorBoard, self).on_epoch_end(epoch, logs)\n\n    def on_train_end(self, logs=None):\n        super(TrainValTensorBoard, self).on_train_end(logs)\n        self.val_writer.close()\n
\n\n\n\n

Using the MNIST dataset as an example:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.datasets import mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\n\nmodel = Sequential()\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\nmodel.add(Dense(10, activation='softmax'))\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=10,\n          validation_data=(x_test, y_test),\n          callbacks=[TrainValTensorBoard(write_graph=False)])\n
\n\n

You can then visualize the two curves on a same figure in TensorBoard.

\n\n

\"Screenshot\"

\n\n
\n\n

EDIT: I've modified the class a bit so that it can be used with eager execution.

\n\n

The biggest change is that I use tf.keras in the following code. It seems that the TensorBoard callback in standalone Keras does not support eager mode yet.

\n\n
import os\nimport tensorflow as tf\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.python.eager import context\n\nclass TrainValTensorBoard(TensorBoard):\n    def __init__(self, log_dir='./logs', **kwargs):\n        self.val_log_dir = os.path.join(log_dir, 'validation')\n        training_log_dir = os.path.join(log_dir, 'training')\n        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)\n\n    def set_model(self, model):\n        if context.executing_eagerly():\n            self.val_writer = tf.contrib.summary.create_file_writer(self.val_log_dir)\n        else:\n            self.val_writer = tf.summary.FileWriter(self.val_log_dir)\n        super(TrainValTensorBoard, self).set_model(model)\n\n    def _write_custom_summaries(self, step, logs=None):\n        logs = logs or {}\n        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if 'val_' in k}\n        if context.executing_eagerly():\n            with self.val_writer.as_default(), tf.contrib.summary.always_record_summaries():\n                for name, value in val_logs.items():\n                    tf.contrib.summary.scalar(name, value.item(), step=step)\n        else:\n            for name, value in val_logs.items():\n                summary = tf.Summary()\n                summary_value = summary.value.add()\n                summary_value.simple_value = value.item()\n                summary_value.tag = name\n                self.val_writer.add_summary(summary, step)\n        self.val_writer.flush()\n\n        logs = {k: v for k, v in logs.items() if not 'val_' in k}\n        super(TrainValTensorBoard, self)._write_custom_summaries(step, logs)\n\n    def on_train_end(self, logs=None):\n        super(TrainValTensorBoard, self).on_train_end(logs)\n        self.val_writer.close()\n
\n\n

The idea is the same --

\n\n\n\n

Again, you can use the MNIST data to test it,

\n\n
from tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.train import AdamOptimizer\n\ntf.enable_eager_execution()\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\ny_train = y_train.astype(int)\ny_test = y_test.astype(int)\n\nmodel = Sequential()\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\nmodel.add(Dense(10, activation='softmax'))\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=AdamOptimizer(), metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=10,\n          validation_data=(x_test, y_test),\n          callbacks=[TrainValTensorBoard(write_graph=False)])\n
\n", "system": "" }, { "instruction": "How to set weights in Keras with a numpy array?", "input": "", "output": "

What is keras_layer in your code?

\n\n

You can set weights these ways:

\n\n
model.layers[i].set_weights(listOfNumpyArrays)    \nmodel.get_layer(layerName).set_weights(...)\nmodel.set_weights(listOfNumpyArrays)\n
\n\n

Where model is an instance of an existing model. \nYou can see the expected length of the list and its array shapes using the method get_weights() from the same instances above.

\n", "system": "" }, { "instruction": "Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file", "input": "", "output": "

You can use this simple script to do that. But you must specify the names of the output nodes.

\n\n
import tensorflow as tf\n\nmeta_path = 'model.ckpt-22480.meta' # Your .meta file\noutput_node_names = ['output']      # Output node names (not tensor names, so no ':0' suffix)\n\nwith tf.Session() as sess:\n    # Restore the graph\n    saver = tf.train.import_meta_graph(meta_path)\n\n    # Load weights\n    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/checkpoint/folder'))\n\n    # Freeze the graph\n    frozen_graph_def = tf.graph_util.convert_variables_to_constants(\n        sess,\n        sess.graph_def,\n        output_node_names)\n\n    # Save the frozen graph\n    with open('output_graph.pb', 'wb') as f:\n      f.write(frozen_graph_def.SerializeToString())\n
\n\n

If you don't know the name of the output node or nodes, there are two ways

\n\n
    \n
  1. You can explore the graph and find the name with Netron or with the console summarize_graph utility.
  2. You can use all the nodes as output ones, as shown below.
\n\n
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]\n
\n\n

(Note that you have to put this line just before convert_variables_to_constants call.)

\n\n

But I think it's an unusual situation, because if you don't know the output node, you cannot actually use the graph.

\n", "system": "" }, { "instruction": "Change images slider step in TensorBoard", "input": "", "output": "

I answered this question over there \"TensorBoard doesn't show all data points\", but this seems to be more popular so I will quote it here.

\n\n

You don't have to change the source code for this, there is a flag called --samples_per_plugin.

\n\n

Quoting from the help command

\n\n
\n

--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly\n specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard\n randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long\n running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all\n samples of that type. For instance, \"scalars=500,images=0\" keeps 500 scalars and all images. Most\n users should not need to set this flag.\n (default: '')

\n
\n\n

So if you want to have a slider of 100 images, use:

\n\n

tensorboard --samples_per_plugin images=100

\n", "system": "" }, { "instruction": "TensorBoard doesn't show all data points", "input": "", "output": "

You don't have to change the source code for this, there is a flag called --samples_per_plugin.

\n\n

Quoting from the help command

\n\n
\n

--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly\n specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard\n randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long\n running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all\n samples of that type. For instance, \"scalars=500,images=0\" keeps 500 scalars and all images. Most\n users should not need to set this flag.\n (default: '')

\n
\n\n

So if you want to have a slider of 100 images, use:

\n\n

tensorboard --samples_per_plugin images=100

\n", "system": "" }, { "instruction": "What does tf.gather_nd intuitively do?", "input": "", "output": "

Ok, so think about it like this:

\n\n

You are providing a list of index values to index into the provided tensor and get those slices. Each entry along the first dimension of the indices is one lookup you will perform. Let's pretend that tensor is just a list of lists.

\n\n

[[0]] means you want to get one specific slice(list) at index 0 in the provided tensor. Just like this:

\n\n
[tensor[0]]\n
\n\n

[[0], [1]] means you want to get two specific slices at indices 0 and 1, like this:

\n\n
[tensor[0], tensor[1]]\n
\n\n

Now what if tensor is more than one dimensions? We do the same thing:

\n\n

[[0, 0]] means you want to get one slice at index [0,0] of the 0-th list. Like this:

\n\n
[tensor[0][0]]\n
\n\n

[[0, 1], [2, 3]] means you want to return two slices at the indices and dimensions provided. Like this:

\n\n
[tensor[0][1], tensor[2][3]]\n
\n\n

I hope that makes sense. I tried using Python indexing to help explain how it would look in Python to do this to a list of lists.

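The whole behaviour can be mimicked on plain nested lists. Here is a toy `gather_nd` (illustrative only, not TensorFlow's implementation) that walks one level into the tensor per index component:

```python
def gather_nd(tensor, indices):
    """For each index path in `indices`, descend into the nested
    list `tensor` one level per index component and collect the slice."""
    results = []
    for path in indices:
        slice_ = tensor
        for i in path:
            slice_ = slice_[i]
        results.append(slice_)
    return results

t = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
print(gather_nd(t, [[0]]))             # [[1, 2, 3]]            -> [t[0]]
print(gather_nd(t, [[0], [1]]))        # [[1, 2, 3], [4, 5, 6]] -> [t[0], t[1]]
print(gather_nd(t, [[0, 1], [2, 2]]))  # [2, 9]                 -> [t[0][1], t[2][2]]
```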
\n", "system": "" }, { "instruction": "TensorFlow: questions regarding tf.argmax() and tf.equal()", "input": "", "output": "
tf.argmax(input, axis=None, name=None, dimension=None)\n
\n\n

Returns the index with the largest value across axis of a tensor.

\n\n

input is a Tensor and axis describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.

\n\n

For your specific case let's use two arrays and demonstrate this

\n\n
pred = np.array([[31, 23,  4, 24, 27, 34],\n                [18,  3, 25,  0,  6, 35],\n                [28, 14, 33, 22, 20,  8],\n                [13, 30, 21, 19,  7,  9],\n                [16,  1, 26, 32,  2, 29],\n                [17, 12,  5, 11, 10, 15]])\n\ny = np.array([[31, 23,  4, 24, 27, 34],\n                [18,  3, 25,  0,  6, 35],\n                [28, 14, 33, 22, 20,  8],\n                [13, 30, 21, 19,  7,  9],\n                [16,  1, 26, 32,  2, 29],\n                [17, 12,  5, 11, 10, 15]])\n
\n\n

Evaluating tf.argmax(pred, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])

\n\n

Evaluating tf.argmax(y, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])

\n\n
tf.equal(x, y, name=None) takes two tensors(x and y) as inputs and returns the truth value of (x == y) element-wise. \n
\n\n

Following our example, tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) returns a tensor whose evaluation will give array([True, True, True, True, True, True]).

\n\n
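You can verify those evaluations with plain Python (no TensorFlow needed). The hypothetical helper `argmax_rows` below mirrors tf.argmax(..., 1), and the element-wise comparison mirrors tf.equal:

```python
def argmax_rows(matrix):
    """Index of the largest value in each row (axis=1 argmax)."""
    return [max(range(len(row)), key=row.__getitem__) for row in matrix]

pred = [[31, 23,  4, 24, 27, 34],
        [18,  3, 25,  0,  6, 35],
        [28, 14, 33, 22, 20,  8],
        [13, 30, 21, 19,  7,  9],
        [16,  1, 26, 32,  2, 29],
        [17, 12,  5, 11, 10, 15]]
y = [row[:] for row in pred]  # identical copy, as in the example above

print(argmax_rows(pred))  # [5, 5, 2, 1, 3, 0]
equal = [a == b for a, b in zip(argmax_rows(pred), argmax_rows(y))]
print(equal)  # [True, True, True, True, True, True]
```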

correct_prediction is a tensor whose evaluation will give a 1-D array of 0's and 1's

\n\n

y_test_prediction can be obtained by executing pred = tf.argmax(logits, 1)

\n\n

The documentation for tf.argmax and tf.equal can be accessed by following the links below.

\n\n

tf.argmax() https://www.tensorflow.org/api_docs/python/math_ops/sequence_comparison_and_indexing#argmax

\n\n

tf.equal() https://www.tensorflow.org/versions/master/api_docs/python/control_flow_ops/comparison_operators#equal

\n", "system": "" }, { "instruction": "Is it possible to make a trainable variable not trainable?", "input": "", "output": "

After looking at the documentation and the code, I was not able to find a way to remove a Variable from the TRAINABLE_VARIABLES.

\n\n

Here is what happens:

\n\n\n\n

First solution

\n\n

When calling the minimize method of the optimizer (see doc.), you can pass a var_list=[...] as argument with the variables you want to optimizer.

\n\n

For instance, if you want to freeze all the layers of VGG except the last two, you can pass the weights of the last two layers in var_list.

\n\n

Second solution

\n\n

You can use a tf.train.Saver() to save variables and restore them later (see this tutorial).

\n\n\n\n

Optionally, you can decide to save only some of the variables in your checkpoint file. See the doc for more info.

\n", "system": "" }, { "instruction": "Does tensorflow use automatic or symbolic gradients?", "input": "", "output": "

TF uses automatic differentiation and more specifically reverse-mode auto differentiation.

\n\n
\n\n

There are 3 popular methods to calculate the derivative:

\n\n
    \n
  1. Numerical differentiation
  2. Symbolic differentiation
  3. Automatic differentiation
\n\n

Numerical differentiation relies on the definition of the derivative: f'(x) ≈ (f(x + h) - f(x)) / h, where you put in a very small h and evaluate the function in two places. This is the most basic formula; in practice people use other formulas which give smaller estimation error. This way of calculating a derivative is suitable mostly if you do not know your function and can only sample it. It also requires a lot of computation for a high-dimensional function.

\n\n
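A minimal illustration of that formula in plain Python, alongside the central-difference variant (one of the "other formulas" with smaller estimation error):

```python
def forward_diff(f, x, h=1e-6):
    # Basic definition: (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # Central difference: error shrinks with h**2 instead of h
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3
# True derivative of x**3 at x = 2 is 3 * 2**2 = 12
print(forward_diff(f, 2.0))  # close to 12, error proportional to h
print(central_diff(f, 2.0))  # much closer to 12, error proportional to h**2
```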

Symbolic differentiation manipulates mathematical expressions. If you ever used Matlab or Mathematica, then you saw something like this: the tool returns a closed-form derivative expression for the formula you typed in.

\n\n

Here for every math expression they know the derivative and use various rules (product rule, chain rule) to calculate the resulting derivative. Then they simplify the end expression to obtain the resulting expression.

\n\n

Automatic differentiation manipulates blocks of computer programs. A differentiator has the rules for taking the derivative of each element of a program (when you define any op in core TF, you need to register a gradient for this op). It also uses chain rule to break complex expressions into simpler ones. Here is a good example how it works in real TF programs with some explanation.

\n\n
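A tiny reverse-mode differentiator can be sketched in a few lines of plain Python. Each op records how to propagate gradients back to its inputs, which is exactly the "register a gradient per op" idea (an illustrative toy, nothing like TF's actual machinery):

```python
class Var:
    """Scalar node in a computation graph with reverse-mode backprop."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Chain rule: accumulate seed * local_gradient into each parent
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```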
\n\n

You might think that automatic differentiation is the same as symbolic differentiation (in one place they operate on math expressions, in the other on computer programs). And yes, they are sometimes very similar. But for control flow statements (if, while, loops) the results can be very different:

\n\n
\n

symbolic differentiation leads to inefficient code (unless carefully\n done) and faces the difficulty of converting a computer program into a\n single expression

\n
\n", "system": "" }, { "instruction": "TensorFlow operator overloading", "input": "", "output": "

If at least one of x or y is a tf.Tensor object, the expressions tf.add(x, y) and x + y are equivalent. The main reason you might use tf.add() is to specify an explicit name keyword argument for the created op, which is not possible with the overloaded operator version.

\n\n

Note that if neither x nor y is a tf.Tensor—for example if they are NumPy arrays—then x + y will not create a TensorFlow op. tf.add() always creates a TensorFlow op and converts its arguments to tf.Tensor objects. Therefore, if you are writing a library function that might accept both tensors and NumPy arrays, you might prefer to use tf.add().

\n\n

The following operators are overloaded in the TensorFlow Python API:

\n\n\n\n

Please note, __eq__ ( binary == ) is not overloaded. x == y will simply return a Python boolean whether x and y refer to the same tensor. You need to use tf.equal() explicitly to check for element-wise equality. Same goes for not equal, __ne__ ( binary != ).

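The `==` fallback is plain Python behaviour: when a class doesn't define `__eq__`, `==` compares object identity, which is why `x == y` on two tensors doesn't compare their contents. A framework-free illustration (the `Box` class is a hypothetical stand-in):

```python
class Box:  # stand-in for an object that doesn't define __eq__
    def __init__(self, values):
        self.values = values

a = Box([1, 2, 3])
b = Box([1, 2, 3])
print(a == b)  # False: same contents, but different objects
print(a == a)  # True: identical object
```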
\n", "system": "" }, { "instruction": "TensorFlow: Max of a tensor along an axis", "input": "", "output": "

The tf.reduce_max() operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of reduction_indices, which has the same meaning as axis in NumPy. To complete your example:

\n\n
x = tf.constant([[1, 220, 55], [4, 3, -1]])\nx_max = tf.reduce_max(x, reduction_indices=[1])\nprint(sess.run(x_max))  # ==> \"array([220,   4], dtype=int32)\"\n
\n\n

If you compute the argmax using tf.argmax(), you could obtain the values from a different tensor y by flattening y using tf.reshape(), converting the argmax indices into vector indices as follows, and using tf.gather() to extract the appropriate values:

\n\n
ind_max = tf.argmax(x, dimension=1)\ny = tf.constant([[1, 2, 3], [6, 5, 4]])\n\nflat_y = tf.reshape(y, [-1])  # Reshape to a vector.\n\n# N.B. Handles 2-D case only.\nflat_ind_max = ind_max + tf.cast(tf.range(tf.shape(y)[0]) * tf.shape(y)[1], tf.int64)\n\ny_ = tf.gather(flat_y, flat_ind_max)\n\nprint(sess.run(y_))  # ==> \"array([2, 6], dtype=int32)\"\n
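The flat-index arithmetic above can be checked with plain Python: row-major flattening puts element (r, c) at index r * num_cols + c:

```python
x = [[1, 220, 55], [4, 3, -1]]
y = [[1, 2, 3], [6, 5, 4]]

num_cols = len(y[0])
# Per-row argmax of x, mirroring tf.argmax(x, dimension=1)
ind_max = [max(range(len(row)), key=row.__getitem__) for row in x]  # [1, 0]

flat_y = [v for row in y for v in row]                              # [1, 2, 3, 6, 5, 4]
flat_ind_max = [r * num_cols + c for r, c in enumerate(ind_max)]    # [1, 3]

print([flat_y[i] for i in flat_ind_max])  # [2, 6]
```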
\n", "system": "" }, { "instruction": "Visualizing output of convolutional layer in tensorflow", "input": "", "output": "

I don't know of a helper function but if you want to see all the filters you can pack them into one image with some fancy uses of tf.transpose.

\n\n

So if you have a tensor that's images x ix x iy x channels

\n\n
>>> V = tf.Variable(...)  # e.g. the output of a conv layer\n>>> print(V.get_shape())\n\nTensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)])\n
\n\n

So in this example ix = 256, iy=256, channels=32

\n\n

first slice off 1 image, and remove the image dimension

\n\n
V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...]\nV = tf.reshape(V,(iy,ix,channels))\n
\n\n

Next add a couple of pixels of zero padding around the image

\n\n
ix += 4\niy += 4\nV = tf.image.resize_image_with_crop_or_pad(V, iy, ix)\n
\n\n

Then reshape so that instead of 32 channels you have 4x8 channels, lets call them cy=4 and cx=8.

\n\n
V = tf.reshape(V,(iy,ix,cy,cx)) \n
\n\n

Now the tricky part. tf seems to return results in C-order, numpy's default.

\n\n

The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy), before listing the channels of the second pixel (incrementing ix). Going across the rows of pixels (ix) before incrementing to the next row (iy).

\n\n

We want the order that would lay out the images in a grid.\nSo you go across a row of an image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So:

\n\n
V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix\n
\n\n
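What that transpose achieves, tiling channels into a grid, can be sketched with explicit plain-Python loops on a tiny example (iy = ix = 2 pixels, cy = cx = 2 channels; the labels are made up for readability):

```python
iy = ix = cy = cx = 2

# V[y][x][Y][X]: pixel (y, x) of channel (Y, X)
V = [[[[f"p{y}{x}c{Y}{X}" for X in range(cx)] for Y in range(cy)]
      for x in range(ix)] for y in range(iy)]

# Grid pixel (Y*iy + y, X*ix + x) takes channel (Y, X)'s pixel (y, x),
# i.e. the (cy, iy, cx, ix) ordering produced by tf.transpose(V, (2, 0, 3, 1)).
grid = [[V[y][x][Y][X] for X in range(cx) for x in range(ix)]
        for Y in range(cy) for y in range(iy)]

for row in grid:
    print(row)
# Each quadrant of the resulting 4x4 grid holds one channel's 2x2 image.
```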

Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.

\n\n
newtensor = np.einsum('yxYX->YyXx',oldtensor)\n
\n\n

anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor:

\n\n
# image_summary needs 4d input\nV = tf.reshape(V,(1,cy*iy,cx*ix,1))\n
\n\n

try tf.image_summary on that, you should get a grid of little images.

\n\n

Below is an image of what one gets after following all the steps here.

\n\n

\"enter

\n", "system": "" }, { "instruction": "Tensorflow: How to replace a node in a calculation graph?", "input": "", "output": "

TL;DR: If you can define the two computations as Python functions, you should do that. If you can't, there's more advanced functionality in TensorFlow to serialize and import graphs, which allows you to compose graphs from different sources.

\n\n

One way to do this in TensorFlow is to build the disjoint computations as separate tf.Graph objects, then convert them to serialized protocol buffers using Graph.as_graph_def():

\n\n
with tf.Graph().as_default() as g_1:\n  input = tf.placeholder(tf.float32, name=\"input\")\n  y = f(input)\n  # NOTE: using identity to get a known name for the output tensor.\n  output = tf.identity(y, name=\"output\")\n\ngdef_1 = g_1.as_graph_def()\n\nwith tf.Graph().as_default() as g_2:  # NOTE: g_2 not g_1       \n  input = tf.placeholder(tf.float32, name=\"input\")\n  z = g(input)\n  output = tf.identity(z, name=\"output\")\n\ngdef_2 = g_2.as_graph_def()\n
\n\n

Then you could compose gdef_1 and gdef_2 into a third graph, using tf.import_graph_def():

\n\n
with tf.Graph().as_default() as g_combined:\n  x = tf.placeholder(tf.float32, name=\"\")\n\n  # Import gdef_1, which performs f(x).\n  # \"input:0\" and \"output:0\" are the names of tensors in gdef_1.\n  y, = tf.import_graph_def(gdef_1, input_map={\"input:0\": x},\n                           return_elements=[\"output:0\"])\n\n  # Import gdef_2, which performs g(y)\n  z, = tf.import_graph_def(gdef_2, input_map={\"input:0\": y},\n                           return_elements=[\"output:0\"])\n
\n", "system": "" }, { "instruction": "pip installation error "No such file or directory: setup.py"", "input": "", "output": "

from https://github.com/tensorflow/tensorflow/issues/56

\n\n
\n

The command to type is \"pip install --upgrade pip\", and this should be\n added to the instructions right after where they tell the user to\n \"source bin/activate\"

\n
\n", "system": "" }, { "instruction": "Keras difference between generator and sequence", "input": "", "output": "

Those methods are roughly the same. It is correct to subclass\nSequence when your dataset doesn't fit in memory. But you shouldn't\nrun any preprocessing in any of the class' methods because that will\nbe reexecuted once per epoch wasting lots of computing resources.

\n\n

It is probably also easier to shuffle the samples rather than their\nindices. Like this:

\n\n

from random import shuffle

\n\n
class DataGen(Sequence):\n    def __init__(self, batch_size, preproc, type, x_set, y_set):\n        self.samples = list(zip(x_set, y_set))\n        self.batch_size = batch_size\n        shuffle(self.samples)\n        self.type = type\n        self.preproc = preproc\n\n    def __len__(self):\n        return int(np.ceil(len(self.samples) / self.batch_size))\n\n    def __getitem__(self, i):\n        batch = self.samples[i * self.batch_size:(i + 1) * self.batch_size]\n        # Unzip the (x, y) pairs back into an x batch and a y batch\n        return self.preproc.process(*zip(*batch))\n\n    def on_epoch_end(self):\n        shuffle(self.samples)\n
\n\n

I think it is impossible to say why you run out of memory without\nknowing more about your data. My guess would be that your preproc\nfunction is doing something wrong. You can debug it by running:

\n\n
for e in DataGen(batch_size, preproc, *train):\n    print(e)\nfor e in DataGen(batch_size, preproc, *dev):\n    print(e)\n
\n\n

You will most likely run out of memory.

\n", "system": "" }, { "instruction": "Unknown initializer: GlorotUniform when loading Keras model", "input": "", "output": "

I ran into the same issue. After changing:

\n\n

from tensorflow import keras

\n\n

to:

\n\n

import keras

\n\n

life is once again worth living.

\n", "system": "" }, { "instruction": "'Tensor' object has no attribute 'lower'", "input": "", "output": "

The tensor must be passed to the layer when you are calling it, and not as an argument. Therefore it must be like this:

\n\n
x = Flatten()(x)  # first the layer is constructed and then it is called on x\n
\n\n

To make it more clear, it is equivalent to this:

\n\n
flatten_layer = Flatten()  # instantiate the layer\nx = flatten_layer(x)       # call it on the given tensor\n
\n", "system": "" }, { "instruction": "Tensorflow Keras Copy Weights From One Model to Another", "input": "", "output": "

Actually what you've done is much more than simply copying weights. You have made these two models permanently identical: every time you update one model, the second one is also updated, as both models share the same weight variables.

\n\n
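The difference can be sketched with NumPy arrays standing in for the weight variables (an illustration only, not Keras internals):

```python
import numpy as np

w = [np.zeros(3)]          # "model" weights

# sharing by reference: both "models" point at the same arrays
target = w
w[0] += 1.0
print(target[0])           # [1. 1. 1.] -- the target changed too

# copying by value, like target_model.set_weights(model.get_weights())
target = [a.copy() for a in w]
w[0] += 1.0
print(target[0])           # [1. 1. 1.] -- unaffected by the later update
print(w[0])                # [2. 2. 2.]
```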

If you want to just copy weights - the simplest way is by this command:

\n\n
target_model.set_weights(model.get_weights()) \n
\n", "system": "" }, { "instruction": "How to disable dropout while prediction in keras?", "input": "", "output": "

Keras does this by default. In Keras dropout is disabled in test mode. You can look at the code here and see that they use the dropped input in training and the actual input while testing.

\n\n
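As a rough NumPy sketch of that behaviour: inverted dropout is an identity in test mode and rescales the kept units in training mode (`dropout` here is a toy stand-in, not the Keras layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training):
    # test mode: no-op, which is what Keras does by default at predict time
    if not training:
        return x
    # training mode: zero out units, rescale so the expected value is unchanged
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(1000)
print(dropout(x, 0.5, training=False).mean())  # 1.0, untouched
print(dropout(x, 0.5, training=True).mean())   # roughly 1.0, but noisy
```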

As far as I know you have to build your own training function from the layers and specify the training flag to predict with dropout (e.g. its not possible to specify a training flag for the predict functions). This is a problem in case you want to do GANs, which use the intermediate output for training and also train the network as a whole, due to a divergence between generated training images and generated test images.

\n", "system": "" }, { "instruction": "Tensorflow : What is the relationship between .ckpt file and .ckpt.meta and .ckpt.index , and .pb file", "input": "", "output": "\n\n

There are a lot of questions here about how to save and restore a graph. See the answer here for instance, but be careful that the two cited tutorials, though really helpful, are far from perfect, and a lot of people still seem to struggle to import a model in c++.

\n\n

EDIT:\nit looks like you can also use the .ckpt files in c++ now, so I guess you don't necessarily need the .pb file any more.

\n", "system": "" }, { "instruction": "What is the difference between Luong attention and Bahdanau attention?", "input": "", "output": "

I went through this Effective Approaches to Attention-based Neural Machine Translation. In section 3.1 they describe the differences between the two attention mechanisms as follows:

\n
  1. Luong attention uses the top hidden layer states of both the encoder and the decoder, whereas Bahdanau attention takes the concatenation of the forward and backward source hidden states (top hidden layer).\n
  2. In Luong attention, the decoder hidden state at time t is used to calculate the attention scores and, from them, the context vector, which is concatenated with the decoder hidden state before prediction.\n
     In Bahdanau attention, at time t we use the t-1 hidden state of the decoder instead. The alignment and context vectors are calculated as above, but this context is concatenated with the decoder hidden state at t-1, and the concatenated vector goes through a GRU before the softmax.\n
  3. Luong defines different types of alignment score functions; Bahdanau has only the concat score alignment model.\n
\n

\"Alignment

\n", "system": "" }, { "instruction": "What's the difference between Tensor and Variable in Tensorflow", "input": "", "output": "\n

Variable is basically a wrapper on Tensor that maintains state across multiple calls to run, and I think makes some things easier with saving and restoring graphs. A Variable needs to be initialized before you can run it. You provide an initial value when you define the Variable, but you have to call its initializer function in order to actually assign this value in your session and then use the Variable. A common way to do this is with tf.global_variables_initializer().

\n

For example:

\n
import tensorflow as tf\ntest_var = tf.Variable([111, 11, 1])\nsess = tf.Session()\nsess.run(test_var)\n\n# Error!\n\nsess.run(tf.global_variables_initializer())  # initialize variables\nsess.run(test_var)\n# array([111, 11, 1], dtype=int32)\n
\n

As for why you use Variables instead of Tensors, basically a Variable is a Tensor with additional capability and utility. You can specify a Variable as trainable (the default, actually), meaning that your optimizer will adjust it in an effort to minimize your cost function; you can specify where the Variable resides on a distributed system; you can easily save and restore Variables and graphs. Some more information on how to use Variables can be found here.

\n", "system": "" }, { "instruction": "What is the difference between Keras and tf.keras in TensorFlow 1.1+?", "input": "", "output": "

tf.keras (formerly tf.contrib.keras) is an implementation of keras 2 implemented exclusively with/for tensorflow. It is hosted on the tensorflow repo and has a distinct code base from the official repo (the last commit there in the tf-keras branch dates back to May 2017).

\n\n

As a rule of thumb, if your code uses any tensorflow-specific functionality, say anything in tf.data.* for providing inputs or tf.summary.* for visualization in tensorboard, it is simpler to just use tf.keras. (Some may even recommend not using the reference Keras implementation with TF because of occasional problems it has with this toolkit).

\n\n

On the other hand, if you plan to actively maintain a framework-agnostic code, using keras' own package is your only choice.

\n\n

If you don't care much about being framework-agnostic but don't use tensorflow-specific code, I would probably advise to go with tf.keras and start using tensorflow-specific code, esp. tf.data which is a game-changer in my opinion.

\n\n

EDIT

\n\n

I attended a talk by Chollet on TF2 (couldn't find a recording online) in which he basically said that support for frameworks other than TF would eventually drop and future developments of Keras would happen exclusively in tf.keras.

\n\n

From what I can see, this is already happening, as Keras' commit stream is getting thin these days.

\n\n

It makes a lot of sense since, as of now, the only other popular DL framework is pytorch, which is not supported by Keras. Keeping Keras code \"agnostic\" to tensorflow -- the only major framework it is supporting -- makes less and less sense.

\n\n

So today, my answer would be to use tf.keras by default, and keep Keras for legacy projects that would be hard to migrate -- that is the future-proof choice for Keras.

\n", "system": "" }, { "instruction": "TensorFlow wasn't compiled to use SSE (etc.) instructions, but these are available", "input": "", "output": "

Those are warnings (as indicated by the W after the colon; errors have an E there).

\n

The warnings refer to the fact that your CPU supports SSE Instructions, which allow some fast in-hardware-parallel operations. Enabling these operations is a compile-time operation (i.e. to use SSE you need to build the library from the source enabling the specific SSE version you're targeting), in which case you might take a look at this question.

\n

Note, however, that SSE support influences only the computation speed. Tensorflow will work with or without SSE, but it might take longer for your code to run.\nNote, also, that this influences only the CPU. If you're using the GPU build of Tensorflow, all the operations run on the GPU will not benefit of SSE instructions.

\n", "system": "" }, { "instruction": "Why is my GPU slower than CPU when training LSTM/RNN models?", "input": "", "output": "

If you use Keras, use CuDNNLSTM in place of LSTM or CuDNNGRU in place of GRU. In my case (2 Tesla M60), I am seeing a 10x performance boost. By the way, I am using batch size 128, as suggested by @Alexey Golyshev.

\n", "system": "" }, { "instruction": "ValueError: No gradients provided for any variable", "input": "", "output": "

This problem is caused by the following line: tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y)

\n

Based on documentation you should have

\n
\n

labels: Each row labels[i] must be a valid probability distribution.

\n

logits: Unscaled log probabilities.

\n
\n

So logits are supposed to be your hypothesis, and thus equal to activation, while the valid probability distribution is Y. So just change it to tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation)
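To make the argument order concrete, here is a hand-rolled NumPy stand-in for the op (not the TF implementation itself): labels must be a probability distribution, logits are the raw network outputs.

```python
import numpy as np

def softmax_xent(labels, logits):
    # numerically stable log-softmax of the logits, then cross entropy
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

Y = np.array([[0.0, 1.0, 0.0]])            # valid probability distribution
activation = np.array([[2.0, 1.0, 0.1]])   # unscaled scores from the network

print(softmax_xent(labels=Y, logits=activation))  # about [1.417]
```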

\n", "system": "" }, { "instruction": "How to use tf.while_loop() in tensorflow", "input": "", "output": "

What is stopping you from adding more functionality to the body? You can build whatever complex computational graph you like in the body and take whatever inputs you like from the enclosing graph. Also, outside of the loop, you can then do whatever you want with whatever outputs you return. As you can see from the amount of 'whatevers', TensorFlow's control flow primitives were built with much generality in mind. Below is another 'simple' example, in case it helps.

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef body(x):\n    a = tf.random_uniform(shape=[2, 2], dtype=tf.int32, maxval=100)\n    b = tf.constant(np.array([[1, 2], [3, 4]]), dtype=tf.int32)\n    c = a + b\n    return tf.nn.relu(x + c)\n\ndef condition(x):\n    return tf.reduce_sum(x) < 100\n\nx = tf.Variable(tf.constant(0, shape=[2, 2]))\n\nwith tf.Session():\n    tf.global_variables_initializer().run()\n    result = tf.while_loop(condition, body, [x])\n    print(result.eval())\n
\n", "system": "" }, { "instruction": "Installing tensorflow with anaconda in windows", "input": "", "output": "

Google has recently launched a newer version of TensorFlow, r0.12, which includes support for Windows. Both CPU and GPU versions can now be installed using Python >= 3.5.2 (64-bit only).

\n

For the CPU-only version, open a command prompt and enter the following command:

\n
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl\n
\n

Follow this TensorFlow on Windows for step-by-step instructions.

\n

UPDATE

\n

To install current latest version please run following command:

\n
pip install tensorflow #CPU only\npip install tensorflow-gpu #For GPU support\n
\n

UPDATE 2020

\n

TensorFlow 2.0 now has a single package for both CPU and GPU version, simply run

\n
pip install tensorflow\n
\n

If you're using Anaconda you can install TensorFlow GPU version and all of its dependencies (CUDA, cuDNN) by running:

\n
conda install -c anaconda tensorflow-gpu\n
\n", "system": "" }, { "instruction": "How to convert tf.int64 to tf.float32?", "input": "", "output": "

You can cast generally using:

\n\n
tf.cast(my_tensor, tf.float32)\n
\n\n

Replace tf.float32 with your desired type.

\n\n
\n\n

Edit: It seems, at the moment at least, that tf.cast won't cast to an unsigned dtype (e.g. tf.uint8). To work around this, you can cast to the signed equivalent and use tf.bitcast to get all the way. e.g.

\n\n
tf.bitcast(tf.cast(my_tensor, tf.int8), tf.uint8)\n
\n", "system": "" }, { "instruction": "Tensorflow Dictionary lookup with String tensor", "input": "", "output": "

If you want to run this with new TF 2.x code with eager execution enabled by default. Below is the quick code snippet.

\n\n
import tensorflow as tf\n\n# build a lookup table\ntable = tf.lookup.StaticHashTable(\n    initializer=tf.lookup.KeyValueTensorInitializer(\n        keys=tf.constant([0, 1, 2, 3]),\n        values=tf.constant([10, 11, 12, 13]),\n    ),\n    default_value=tf.constant(-1),\n    name=\"class_weight\"\n)\n\n# now let us do a lookup\ninput_tensor = tf.constant([0, 0, 1, 1, 2, 2, 3, 3])\nout = table.lookup(input_tensor)\nprint(out)\n
\n\n

Output:

\n\n
tf.Tensor([10 10 11 11 12 12 13 13], shape=(8,), dtype=int32)\n
\n", "system": "" }, { "instruction": "InvalidArgumentError: cannot compute MatMul as input #0(zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]", "input": "", "output": "

Part 1: The problem is indeed the datatype of your input. By default your keras model expects float32 but you are passing a float64. You can either change the dtype of the model or change the input to float32.

\n\n

To change your model:

\n\n
def make_model():\n    net = tf.keras.Sequential()\n    net.add(tf.keras.layers.Dense(4, activation='relu', dtype='float32'))\n    net.add(tf.keras.layers.Dense(4, activation='relu'))\n    net.add(tf.keras.layers.Dense(1))\n    return net\n
\n\n

To change your input:\ny = y.astype('float32')

\n\n

Part 2: You need to call the function that computes your model (i.e. model(data)) under tf.GradientTape(). For example, you can replace your compute_loss method with the following:

\n\n
def compute_loss(model, x, y):\n    pred = model(x)\n    return tf.reduce_mean(tf.square(tf.subtract(pred, y)))\n
\n", "system": "" }, { "instruction": "How can I clear a model created with Keras and Tensorflow(as backend)?", "input": "", "output": "

keras.backend.clear_session() should clear the previous model. From https://keras.io/backend/:

\n\n
\n

Destroys the current TF graph and creates a new one.\n Useful to avoid clutter from old models / layers.

\n
\n", "system": "" }, { "instruction": "keras vs. tensorflow.python.keras - which one to use?", "input": "", "output": "

tensorflow.python.keras is just a bundle of keras with a single backend inside tensorflow package. This allows you to start using keras by installing just pip install tensorflow.

\n\n

keras package contains the full keras library with three supported backends: tensorflow, theano and CNTK. If you ever wish to switch between backends, you should choose the keras package. This approach is also more flexible because it allows you to install keras updates independently of tensorflow (which may not be easy to update, for example, because the next version may require a different version of the CUDA driver) or vice versa. For this reason, I prefer to install keras as a separate package.

\n\n

In terms of API, there is no difference right now, but keras will probably be integrated more tightly into tensorflow in the future. So there is a chance there will be tensorflow-only features in keras, but even in this case it's not a blocker to use keras package.

\n\n

UPDATE

\n\n

As of Keras 2.3.0 release, Francois Chollet announced that users should switch towards tf.keras instead of plain Keras. Therefore, the change to tf.keras instead of keras should be made by all users.

\n", "system": "" }, { "instruction": "How to pip install old version of library(tensorflow)?", "input": "", "output": "

This works for me on Mac OS 10.13.1.

\n\n
pip install --user tensorflow==1.3.0\n
\n", "system": "" }, { "instruction": "Efficient element-wise multiplication of a matrix and a vector in TensorFlow", "input": "", "output": "

The simplest code to do this relies on the broadcasting behavior of tf.multiply()*, which is based on numpy's broadcasting behavior:

\n\n
x = tf.constant(5.0, shape=[5, 6])\nw = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])\nxw = tf.multiply(x, w)\nmax_in_rows = tf.reduce_max(xw, 1)\n\nsess = tf.Session()\nprint sess.run(xw)\n# ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]]\n\nprint sess.run(max_in_rows)\n# ==> [25.0, 25.0, 25.0, 25.0, 25.0]\n
\n\n

* In older versions of TensorFlow, tf.multiply() was called tf.mul(). You can also use the * operator (i.e. xw = x * w) to perform the same operation.

\n", "system": "" }, { "instruction": "What is the best way to implement weight constraints in TensorFlow?", "input": "", "output": "

You can take the Lagrangian approach and simply add a penalty for features of the variable you don't want.

\n\n

e.g. To encourage theta to be non-negative, you could add the following to the optimizer's objective function.

\n\n
    added_loss = -tf.minimum( tf.reduce_min(theta),0)\n
\n\n

If any theta are negative, then added_loss will be positive, otherwise zero. Scaling that to a meaningful value is left as an exercise to the reader. Scaling too little will not exert enough pressure. Too much may make things unstable.
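A plain NumPy sketch of how the penalty behaves (nonneg_penalty is a hypothetical helper name for illustration):

```python
import numpy as np

def nonneg_penalty(theta):
    # -min(min(theta), 0): zero when all theta >= 0, positive otherwise
    return -min(float(theta.min()), 0.0)

print(nonneg_penalty(np.array([0.5, 1.0])))   # 0.0 -- no pressure applied
print(nonneg_penalty(np.array([0.5, -0.3])))  # 0.3 -- pushes theta back up
```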

\n", "system": "" }, { "instruction": "How can I clear a model created with Keras and Tensorflow(as backend)?", "input": "", "output": "

keras.backend.clear_session() should clear the previous model. From https://keras.io/backend/:

\n\n
\n

Destroys the current TF graph and creates a new one.\n Useful to avoid clutter from old models / layers.

\n
\n", "system": "" }, { "instruction": "Custom TensorFlow Keras optimizer", "input": "", "output": "

Update: TF2.2 forced me to clean up all implementations - so now they can be used as a reference for TF best practices. Also added a section below on _get_hyper vs. _set_hyper.

\n\n
\n\n

I've implemented Keras AdamW in all major TF & Keras versions - I invite you to examine optimizers_v2.py. Several points:

\n\n\n\n
\n\n\n\n

_resource_apply_dense vs. _resource_apply_sparse: the latter deals with sparse layers - e.g. Embedding - and the former with everything else; example.

\n\n\n\n

_create_slots: used when defining trainable tf.Variables; example: weights' first and second order moments (e.g. Adam). It uses add_slot().

\n\n
\n\n

_get_hyper vs. _set_hyper: they enable setting and getting Python literals (int, str, etc), callables, and tensors. They exist largely for convenience: anything set via _set_hyper can be retrieved via _get_hyper, avoiding repeating boilerplate code. I dedicated a Q&A to it here.

\n", "system": "" }, { "instruction": "How does the Flatten layer work in Keras?", "input": "", "output": "

The Flatten() operator unrolls the values beginning at the last dimension (at least for Theano, which is \"channels first\", not \"channels last\" like TF. I can't run TensorFlow in my environment). This is equivalent to numpy.reshape with 'C' ordering:

\n\n
\n

\u2018C\u2019 means to read / write the elements using C-like index order, with\n the last axis index changing fastest, back to the first axis index\n changing slowest.

\n
\n\n
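A quick NumPy check of that ordering, independent of Keras:

```python
import numpy as np

X = np.arange(24).reshape(3, 2, 4)
flat = X.reshape(-1, order='C')   # unroll with the last axis varying fastest
print(flat[:8])                   # [0 1 2 3 4 5 6 7]
```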

Here is a standalone example illustrating Flatten operator with the Keras Functional API. You should be able to easily adapt for your environment.

\n\n
import numpy as np\nfrom keras.layers import Input, Flatten\nfrom keras.models import Model\ninputs = Input(shape=(3,2,4))\n\n# Define a model consisting only of the Flatten operation\nprediction = Flatten()(inputs)\nmodel = Model(inputs=inputs, outputs=prediction)\n\nX = np.arange(0,24).reshape(1,3,2,4)\nprint(X)\n#[[[[ 0  1  2  3]\n#   [ 4  5  6  7]]\n#\n#  [[ 8  9 10 11]\n#   [12 13 14 15]]\n#\n#  [[16 17 18 19]\n#   [20 21 22 23]]]]\nmodel.predict(X)\n#array([[  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,\n#         11.,  12.,  13.,  14.,  15.,  16.,  17.,  18.,  19.,  20.,  21.,\n#         22.,  23.]], dtype=float32)\n
\n", "system": "" }, { "instruction": "Can I export a tensorflow summary to CSV?", "input": "", "output": "

While the answer here does what was requested, within tensorboard it only allows you to download a CSV for a single run of a single tag.\nIf you have, for example, 10 tags and 20 runs (which is not much at all), you would need to do the above step 200 times (that alone will probably take you more than an hour).\nIf you then, for some reason, would like to actually do something with the data for all runs for a single tag, you would need to write some weird CSV accumulation script or copy everything by hand (which will probably cost you more than a day).

\n\n

Therefore I would like to add a solution that extracts a CSV file for every tag with all runs contained. Column headers are the run path names and row indices are the run step numbers.

\n\n
import os\nimport numpy as np\nimport pandas as pd\n\nfrom collections import defaultdict\nfrom tensorboard.backend.event_processing.event_accumulator import EventAccumulator\n\n\ndef tabulate_events(dpath):\n    summary_iterators = [EventAccumulator(os.path.join(dpath, dname)).Reload() for dname in os.listdir(dpath)]\n\n    tags = summary_iterators[0].Tags()['scalars']\n\n    for it in summary_iterators:\n        assert it.Tags()['scalars'] == tags\n\n    out = defaultdict(list)\n    steps = []\n\n    for tag in tags:\n        steps = [e.step for e in summary_iterators[0].Scalars(tag)]\n\n        for events in zip(*[acc.Scalars(tag) for acc in summary_iterators]):\n            assert len(set(e.step for e in events)) == 1\n\n            out[tag].append([e.value for e in events])\n\n    return out, steps\n\n\ndef to_csv(dpath):\n    dirs = os.listdir(dpath)\n\n    d, steps = tabulate_events(dpath)\n    tags, values = zip(*d.items())\n    np_values = np.array(values)\n\n    for index, tag in enumerate(tags):\n        df = pd.DataFrame(np_values[index], index=steps, columns=dirs)\n        df.to_csv(get_file_path(dpath, tag))\n\n\ndef get_file_path(dpath, tag):\n    file_name = tag.replace(\"/\", \"_\") + '.csv'\n    folder_path = os.path.join(dpath, 'csv')\n    if not os.path.exists(folder_path):\n        os.makedirs(folder_path)\n    return os.path.join(folder_path, file_name)\n\n\nif __name__ == '__main__':\n    path = \"path_to_your_summaries\"\n    to_csv(path)\n
\n\n

My solution builds upon: https://stackoverflow.com/a/48774926/2230045

\n\n
\n\n

EDIT:

\n\n

I created a more sophisticated version and released it on GitHub: https://github.com/Spenhouet/tensorboard-aggregator

\n\n

This version aggregates multiple tensorboard runs and is able to save the aggregates to a new tensorboard summary or as a .csv file.

\n", "system": "" }, { "instruction": "How to do slice assignment in Tensorflow", "input": "", "output": "

Currently, you can do slice assignment for variables in TensorFlow. There is no specific named function for it, but you can select a slice and call assign on it:

\n\n
my_var = my_var[4:8].assign(tf.zeros(4))\n
\n\n

First, note that (after having looked at the documentation) it seems that the return value of assign, even when applied to a slice, is always a reference to the whole variable after applying the update.

\n\n

EDIT: The information below is either deprecated, imprecise or was always wrong. The fact is that the returned value of assign is a tensor that can be readily used and already incorporates the dependency to the assignment, so simply evaluating that or using it in further operations will ensure it gets executed without need for an explicit tf.control_dependencies block.

\n\n
\n\n

Note, also, that this will only add the assignment op to the graph, but will not run it unless it is explicitly executed or set as a dependency of some other operation. A good practice is to use it in a tf.control_dependencies context:

\n\n
with tf.control_dependencies([my_var[4:8].assign(tf.zeros(4))]):\n    my_var = tf.identity(my_var)\n
\n\n

You can read more about it in TensorFlow issue #4638.

\n", "system": "" }, { "instruction": "Tensorflow: When to use tf.expand_dims?", "input": "", "output": "

expand_dims will not add or remove elements in a tensor; it just changes the shape by inserting a dimension of size 1. For example, a vector with 10 elements could be treated as a 10x1 matrix.

\n\n

The situation where I have needed expand_dims was when I tried to build a ConvNet to classify grayscale images. The grayscale images are loaded as matrices of size [320, 320]. However, tf.nn.conv2d requires input of shape [batch, in_height, in_width, in_channels], where the in_channels dimension is missing from my data, and in this case it should be 1. So I used expand_dims to add one more dimension.

\n\n
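The same fix in NumPy terms (np.expand_dims mirrors tf.expand_dims for this purpose):

```python
import numpy as np

img = np.zeros((320, 320))            # a grayscale image: [in_height, in_width]
img = np.expand_dims(img, axis=-1)    # add the missing in_channels axis
print(img.shape)                      # (320, 320, 1)
```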

In your case, I do not think you need expand_dims.

\n", "system": "" }, { "instruction": "Trouble with TensorFlow in Jupyter Notebook", "input": "", "output": "

Update

\n\n

TensorFlow website supports five installations.

\n\n

To my understanding, using the pip installation directly would be fine for importing TensorFlow in Jupyter Notebook (as long as Jupyter Notebook was installed and there were no other issues), because it doesn't create any virtual environments.

\n\n

Using the virtualenv install or conda install would require installing Jupyter into the newly created TensorFlow environment to allow TensorFlow to work in Jupyter Notebook (see the following original post section for more details).

\n\n

I believe docker install may require some port setup in the VirtualBox to make TensorFlow work in Jupyter Notebook (see this post).

\n\n

For installing from sources, it also depends on which environment the source code is built and installed into. If it's installed into a freshly created virtual environment or a virtual environment which didn't have Jupyter Notebook installed, it would also need Jupyter Notebook installed into the virtual environment to use Tensorflow in Jupyter Notebook.

\n\n

Original Post

\n\n

To use tensorflow in Ipython and/or Jupyter(Ipython) Notebook, you'll need to install Ipython and Jupyter (after installing tensorflow) under the tensorflow activated environment.

\n\n

Before installing IPython and Jupyter under the tensorflow environment, if you run the following commands in the terminal:

\n\n
username$ source activate tensorflow\n\n(tensorflow)username$ which ipython\n(tensorflow)username$ /Users/username/anaconda/bin/ipython\n\n(tensorflow)username$ which jupyter\n(tensorflow)username$ /Users/username/anaconda/bin/jupyter\n\n(tensorflow)username$ which python\n(tensorflow)username$ /User/username//anaconda/envs/tensorflow/bin/python\n
\n\n

This is telling you that when you open python from terminal, it is using the one installed in the \"environments\" where tensorflow is installed. Therefore you can actually import tensorflow successfully. However, if you are trying to run ipython and/or jupyter notebook, these are not installed under the \"environments\" equipped with tensorflow, hence it has to go back to use the regular environment which has no tensorflow module, hence you get an import error.

\n\n

You can verify this by listing out the items under envs/tensorflow/bin directory:

\n\n
(tensorflow) username$ ls /User/username/anaconda/envs/tensorflow/bin/\n
\n\n

You will see that there is no \"ipython\" and/or \"jupyter\" listed there.

\n\n

To use tensorflow with Ipython and/or Jupyter notebook, simply install them into the tensorflow environment:

\n\n
(tensorflow) username$ conda install ipython\n(tensorflow) username$ pip install jupyter #(use pip3 for python3)\n
\n\n

After installing them, there should be a \"jupyter\" and an \"ipython\" showing up in the envs/tensorflow/bin/ directory.

\n\n

Notes:\nBefore trying to import the tensorflow module in Jupyter Notebook, try closing the notebook. Run \"source deactivate tensorflow\" first, and then reactivate it (\"source activate tensorflow\") to make sure things are \"on the same page\". Then reopen the notebook and try importing tensorflow. It should import successfully (it worked on mine at least).

\n", "system": "" }, { "instruction": "ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\\Users\\polon\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py)", "input": "", "output": "

In TensorFlow 2 there is no need to perform

\n\n
from tensorflow import set_random_seed\n
\n\n

in order to run

\n\n
set_random_seed(x)\n
\n\n

(as it was in older versions)

\n\n

You only have to run

\n\n
import tensorflow\ntensorflow.random.set_seed(x)\n
\n\n

Thanks to @David Buck

\n", "system": "" }, { "instruction": "how to get string value out of tf.tensor which dtype is string", "input": "", "output": "

You can use tf.py_func to wrap load_audio_file().

\n\n
import tensorflow as tf\n\ntf.enable_eager_execution()\n\ndef load_audio_file(file_path):\n    # you should decode bytes type to string type\n    print(\"file_path: \",bytes.decode(file_path),type(bytes.decode(file_path)))\n    return file_path\n\ntrain_dataset = tf.data.Dataset.list_files('clean_4s_val/*.wav')\ntrain_dataset = train_dataset.map(lambda x: tf.py_func(load_audio_file, [x], [tf.string]))\n\nfor one_element in train_dataset:\n    print(one_element)\n\nfile_path:  clean_4s_val/1.wav <class 'str'>\n(<tf.Tensor: id=32, shape=(), dtype=string, numpy=b'clean_4s_val/1.wav'>,)\nfile_path:  clean_4s_val/3.wav <class 'str'>\n(<tf.Tensor: id=34, shape=(), dtype=string, numpy=b'clean_4s_val/3.wav'>,)\nfile_path:  clean_4s_val/2.wav <class 'str'>\n(<tf.Tensor: id=36, shape=(), dtype=string, numpy=b'clean_4s_val/2.wav'>,)\n
\n\n

UPDATE for TF 2

\n\n

The above solution will not work with TF 2 (tested with 2.2.0), even when replacing tf.py_func with tf.py_function, giving

\n\n
InvalidArgumentError: TypeError: descriptor 'decode' requires a 'bytes' object but received a 'tensorflow.python.framework.ops.EagerTensor'\n
\n\n

To make it work in TF 2, make the following changes:

\n
  - Replace tf.py_func with tf.py_function.\n
  - Inside load_audio_file, the argument is now an eager tensor rather than raw bytes, so decode it with bytes.decode(file_path.numpy()) instead of bytes.decode(file_path).\n
\n

I wrote a little tutorial on compiling TensorFlow 1.2 with GPU support on macOS. I think it's customary to copy relevant parts to SO, so here it goes:

\n\n
    \n
  1. If you haven\u2019t used a TensorFlow-GPU set-up before, I suggest first setting everything up with TensorFlow 1.0 or 1.1, where you can still do pip install tensorflow-gpu. Once you get that working, the CUDA set-up would also work if you\u2019re compiling TensorFlow. If you have an external GPU, YellowPillow's answer (or mine) might help you get things set up.\n
  2. Follow the official tutorial \u201cInstalling TensorFlow from Sources\u201d, but obviously substitute git checkout r1.0 with git checkout r1.2.\nWhen doing ./configure, pay attention to the Python library path: it sometimes suggests an incorrect one. I chose the default options in most cases, except for: Python library path, CUDA support and compute capacity. Don\u2019t use Clang as the CUDA compiler: this will lead you to an error \u201cInconsistent crosstool configuration; no toolchain corresponding to 'local_darwin' found for cpu 'darwin'.\u201d. Using /usr/bin/gcc as your compiler will actually use Clang that comes with macOS / XCode. Below is my full configuration.\n
  3. TensorFlow 1.2 expects a C library called OpenMP, which is not available in the current Apple Clang. It should speed up multithreaded TensorFlow on multi-CPU machines, but it will also compile without it. We could try to build TensorFlow with gcc 4 (which I didn\u2019t manage), or simply remove the line that includes OpenMP from the build file. In my case I commented out line 98 of tensorflow/third_party/gpus/cuda/BUILD.tpl, which contained linkopts = [\u201c-lgomp\u201d] (but the location of the line might obviously change). Some people had issues with zmuldefs, but I assume that was with earlier versions; thanks to udnaan for pointing out that it\u2019s OK to comment out these lines.\n
  4. I had some problems building with the latest bazel 0.5.3, so I reverted to using 0.4.5 that I already had installed. But some discussion in a github issue mentioned bazel 0.5.2 also didn\u2019t have the problem.\n
  5. Now build with bazel and finish the installation as instructed by the official install guide. On my 3.2 GHz iMac this took about 37 minutes.\n
\n\n
\n

Using python library path: /Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages

\n \n

Do you wish to build TensorFlow with MKL support? [y/N] N

\n \n

No MKL support will be enabled for TensorFlow

\n \n

Please specify optimization flags to use during compilation when bazel option \"--config=opt\" is specified [Default is -march=native]:

\n \n

Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]

\n \n

No Google Cloud Platform support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with Hadoop File System support? [y/N]

\n \n

No Hadoop File System support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]

\n \n

No XLA support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with VERBS support? [y/N]

\n \n

No VERBS support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with OpenCL support? [y/N]

\n \n

No OpenCL support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with CUDA support? [y/N] y

\n \n

CUDA support will be enabled for TensorFlow

\n \n

Do you want to use clang as CUDA compiler? [y/N]

\n \n

nvcc will be used as CUDA compiler

\n \n

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:

\n \n

Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

\n \n

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

\n \n

Please specify the cuDNN version you want to use. [Leave empty to use system default]:

\n \n

Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

\n \n

Please specify a list of comma-separated Cuda compute capabilities you want to build with.

\n \n

You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.

\n \n

Please note that each additional compute capability significantly increases your build time and binary size.

\n \n

[Default is: \"3.5,5.2\"]: 6.1

\n \n

INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.

\n \n

Configuration finished

\n
\n", "system": "" }, { "instruction": "How to load only specific weights on Keras", "input": "", "output": "

If your first 9 layers are consistently named between your original trained model and the new model, then you can use model.load_weights() with by_name=True. This will update weights only in the layers of your new model that have an identically named layer found in the original trained model.

\n\n

The name of the layer can be specified with the name keyword, for example:

\n\n
model.add(Dense(8, activation='relu',name='dens_1'))\n
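To make the name matching concrete, here is a small sketch using the tf.keras API (the layer and file names below are made up): weights flow only into layers of the new model whose names match a layer in the saved weights file, and everything else keeps its initialization.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Original trained model; the layer names are what by_name=True matches on.
src = Sequential([Dense(8, activation='relu', input_shape=(4,), name='dens_1'),
                  Dense(2, name='head_src')])
src.save_weights('src_weights.h5')

# New model: 'dens_1' matches and receives the trained weights,
# while 'head_new' has no match and keeps its random initialization.
new = Sequential([Dense(8, activation='relu', input_shape=(4,), name='dens_1'),
                  Dense(3, name='head_new')])
new.load_weights('src_weights.h5', by_name=True)

print(np.allclose(new.get_layer('dens_1').get_weights()[0],
                  src.get_layer('dens_1').get_weights()[0]))
```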
\n", "system": "" }, { "instruction": "How do Monitored Training Sessions work?", "input": "", "output": "

I can't give insights into how these classes were created, but here are a few things that I think are relevant to how you could use them.

\n\n

tf.Session is a low-level object in the Python TensorFlow API while, as you said, tf.train.MonitoredTrainingSession comes with a lot of handy features that are especially useful in the most common cases.

\n\n

Before describing some of the benefits of tf.train.MonitoredTrainingSession, let me answer the question about the graph used by the session. You can specify the tf.Graph used by the MonitoredTrainingSession by using a context manager with your_graph.as_default():

\n\n
from __future__ import print_function\nimport tensorflow as tf\n\ndef example():\n    g1 = tf.Graph()\n    with g1.as_default():\n        # Define operations and tensors in `g`.\n        c1 = tf.constant(42)\n        assert c1.graph is g1\n\n    g2 = tf.Graph()\n    with g2.as_default():\n        # Define operations and tensors in `g`.\n        c2 = tf.constant(3.14)\n        assert c2.graph is g2\n\n    # MonitoredTrainingSession example\n    with g1.as_default():\n        with tf.train.MonitoredTrainingSession() as sess:\n            print(c1.eval(session=sess))\n            # Next line raises\n            # ValueError: Cannot use the given session to evaluate tensor:\n            # the tensor's graph is different from the session's graph.\n            try:\n                print(c2.eval(session=sess))\n            except ValueError as e:\n                print(e)\n\n    # Session example\n    with tf.Session(graph=g2) as sess:\n        print(c2.eval(session=sess))\n        # Next line raises\n        # ValueError: Cannot use the given session to evaluate tensor:\n        # the tensor's graph is different from the session's graph.\n        try:\n            print(c1.eval(session=sess))\n        except ValueError as e:\n            print(e)\n\nif __name__ == '__main__':\n    example()\n
\n\n

So, as you said, the benefits of using MonitoredTrainingSession are that this object takes care of

\n\n\n\n

but it also has the benefit of making your code easy to distribute, as it works differently depending on whether you specify the running process as a master or not.

\n\n

For example you could run something like:

\n\n
def run_my_model(train_op, session_args):\n    with tf.train.MonitoredTrainingSession(**session_args) as sess:\n        sess.run(train_op)\n
\n\n

that you would call in a non-distributed way:

\n\n
run_my_model(train_op, {})\n
\n\n

or in a distributed way (see the distributed doc for more information on the inputs):

\n\n
run_my_model(train_op, {\"master\": server.target,\n                        \"is_chief\": (FLAGS.task_index == 0)})\n
\n\n

On the other hand, with the raw tf.Session object you don't get the extra features of tf.train.MonitoredTrainingSession, which can be preferable if you don't plan to use them or if you want more control (for example over how the queues are started).

\n\n

EDIT (as per comment):\nFor the op initialisation, you would have to do something like (cf. the official doc):

\n\n
# Define your graph and your ops\ninit_op = tf.global_variables_initializer()\nwith tf.Session() as sess:\n    sess.run(init_op)\n    sess.run(your_graph_ops,...)\n
\n\n

For the QueueRunner, I would refer you to the official doc where you will find more complete examples.

\n\n

EDIT2:

\n\n

The main concept to understand to get a sense on how tf.train.MonitoredTrainingSession works is the _WrappedSession class:

\n\n
\n

This wrapper is used as a base class for various session wrappers\n that provide additional functionality such as monitoring, coordination,\n and recovery.

\n
\n\n

The tf.train.MonitoredTrainingSession works (as of version 1.1) this way:

\n\n\n\n

In conclusion, tf.train.MonitoredTrainingSession avoids a lot of boilerplate code while being easily extensible through the hooks mechanism.

\n", "system": "" }, { "instruction": "Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef?", "input": "", "output": "

A checkpoint contains the value of (some of the) variables in a TensorFlow model. It is created by a Saver, which is either given specific Variables to save, or by default saves all (non-local) Variables.

\n\n

To use a checkpoint, you need to have a compatible TensorFlow Graph, whose Variables have the same names as the Variables in the checkpoint. (If you don't have a compatible Graph, you can still load the values stored in a checkpoint into selected Variables using the init_from_checkpoint utilities in contrib.)

\n\n

SavedModel is much more comprehensive: It contains a set of Graphs (MetaGraphs, in fact, saving collections and such), as well as a checkpoint which is supposed to be compatible with these Graphs, and any asset files that are needed to run the model (e.g. Vocabulary files). For each MetaGraph it contains, it also stores a set of signatures. Signatures define (named) input and output tensors.

\n\n

This means that given only a SavedModel, you can write tools (such as tensorflow/serving, or the new saved_model command line utility that will appear in tools/ shortly) that interpret or execute the graphs inside. All you have to provide is the data.

\n\n

If in doubt, I would always err on the side of writing a SavedModel, not just a checkpoint. Not only does this allow you to use tensorflow/serving (and other neat utilities that will grow in number), it makes sure that you have all the information necessary to run the model. Nothing is more frustrating than a checkpoint you cannot use any more because you modified your model and now it is incompatible with checkpoint files and all you want to do is run some predictions through it for comparison.
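As a minimal illustration of the "graph + signature + variables in one export" point, here is a sketch using the TF 2.x tf.saved_model API (the module and directory names are made up); a consumer only needs the export directory, not the Python code that built the model:

```python
import tensorflow as tf

class Doubler(tf.Module):
    # The input_signature becomes the callable signature stored in the SavedModel.
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

tf.saved_model.save(Doubler(), '/tmp/doubler')

# Reload without any reference to the Doubler class.
restored = tf.saved_model.load('/tmp/doubler')
print(restored(tf.constant([1.0, 2.0])).numpy())  # [2. 4.]
```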

\n", "system": "" }, { "instruction": "Convert TensorFlow string to python string", "input": "", "output": "

In tensorflow 2.0.0, it can be done in the following way:

\n\n
import tensorflow as tf\n\nmy_str = tf.constant('Hello World')\nmy_str_npy = my_str.numpy()\n\nprint(my_str_npy)\ntype(my_str_npy)\n
\n\n

This converts the string tensor into a Python bytes object.
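If a true Python str is needed rather than bytes, decoding is one extra step:

```python
import tensorflow as tf

t = tf.constant('Hello World')
b = t.numpy()           # bytes: b'Hello World'
s = b.decode('utf-8')   # plain Python str

print(s)  # Hello World
```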

\n", "system": "" }, { "instruction": "'module' object has no attribute 'SummaryWriter'", "input": "", "output": "

tf.train.SummaryWriter is deprecated; use tf.summary.FileWriter instead.

\n\n

\u21b3 Adding Summaries to Event Files

\n\n
\n

It will be removed after 2016-11-30.\n Instructions for updating: Please switch to tf.summary.FileWriter. \n The interface and behavior is the same; this is just a rename.

\n
\n\n

<TF Official Migration Page> \u2733\ufe0e includes all current deprecated/renamed functions \u2733\ufe0e

\n", "system": "" }, { "instruction": "Understanding tf.extract_image_patches for extracting patches from an image", "input": "", "output": "

Here is how the method works:

\n\n

Here is some sample code with output to help demonstrate how it works:

\n
import tensorflow as tf\n\nn = 10\n# images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 in order\nimages = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]\n\n# We generate four outputs as follows:\n# 1. 3x3 patches with stride length 5\n# 2. Same as above, but the rate is increased to 2\n# 3. 4x4 patches with stride length 7; only one patch should be generated\n# 4. Same as above, but with padding set to 'SAME'\nwith tf.Session() as sess:\n  print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 2, 2, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='SAME').eval()\n
\n

Output:

\n
[[[[ 1  2  3 11 12 13 21 22 23]\n   [ 6  7  8 16 17 18 26 27 28]]\n\n  [[51 52 53 61 62 63 71 72 73]\n   [56 57 58 66 67 68 76 77 78]]]]\n\n\n[[[[  1   3   5  21  23  25  41  43  45]\n   [  6   8  10  26  28  30  46  48  50]]\n\n  [[ 51  53  55  71  73  75  91  93  95]\n   [ 56  58  60  76  78  80  96  98 100]]]]\n\n\n[[[[ 1  2  3  4 11 12 13 14 21 22 23 24 31 32 33 34]]]]\n\n\n[[[[  1   2   3   4  11  12  13  14  21  22  23  24  31  32  33  34]\n   [  8   9  10   0  18  19  20   0  28  29  30   0  38  39  40   0]]\n\n  [[ 71  72  73  74  81  82  83  84  91  92  93  94   0   0   0   0]\n   [ 78  79  80   0  88  89  90   0  98  99 100   0   0   0   0   0]]]]\n
\n

So, for example, our first result looks like the following:

\n
 *  *  *  4  5  *  *  *  9 10 \n *  *  * 14 15  *  *  * 19 20 \n *  *  * 24 25  *  *  * 29 30 \n31 32 33 34 35 36 37 38 39 40 \n41 42 43 44 45 46 47 48 49 50 \n *  *  * 54 55  *  *  * 59 60 \n *  *  * 64 65  *  *  * 69 70 \n *  *  * 74 75  *  *  * 79 80 \n81 82 83 84 85 86 87 88 89 90 \n91 92 93 94 95 96 97 98 99 100 \n
\n

As you can see, we have 2 rows and 2 columns worth of patches, which are what out_rows and out_cols are.
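The first (unit-rate) case can be reproduced with plain NumPy to make the indexing explicit. This sketch mirrors tf.extract_image_patches with ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1] and 'VALID' padding on the 10x10 image above:

```python
import numpy as np

n, k, s = 10, 3, 5  # image size, patch size, stride
img = np.arange(1, n * n + 1).reshape(n, n)

# Top-left corners move in steps of `s`; each k x k patch is flattened row-major,
# which is exactly the depth dimension of the extract_image_patches output.
patches = np.array([[img[i:i + k, j:j + k].ravel()
                     for j in range(0, n - k + 1, s)]
                    for i in range(0, n - k + 1, s)])

print(patches.shape)  # (2, 2, 9): out_rows x out_cols x (k*k)
print(patches[0, 0])  # [ 1  2  3 11 12 13 21 22 23]
```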

\n", "system": "" }, { "instruction": "How do you get the name of the tensorflow output nodes in a Keras Model?", "input": "", "output": "

You can use Keras model.summary() to get the name of the last layer.

\n\n

If model.outputs is not empty you can get the node names via:

\n\n
[node.op.name for node in model.outputs]\n
\n\n

you get the session via

\n\n
session = keras.backend.get_session()\n
\n\n

and you convert all training variables to consts via

\n\n
from tensorflow.python.framework.graph_util import convert_variables_to_constants\n\nmin_graph = convert_variables_to_constants(session, session.graph_def, [node.op.name for node in model.outputs])\n
\n\n

after that you can write a protobuf-file via

\n\n
tensorflow.train.write_graph(min_graph, \"/logdir/\", \"file.pb\", as_text=True)\n
\n", "system": "" }, { "instruction": "In TensorFlow, how can I get nonzero values and their indices from a tensor with python?", "input": "", "output": "

You can achieve the same result in TensorFlow using the not_equal and where methods.

\n\n
zero = tf.constant(0, dtype=tf.float32)\nwhere = tf.not_equal(A, zero)\n
\n\n

where is a tensor of the same shape as A holding True or False; in the following case it is

\n\n
[[True, False],\n [False, True]]\n
\n\n

This would be sufficient to select zero or non-zero elements from A. If you want to obtain the indices, you can use the where method as follows:

\n\n
indices = tf.where(where)\n
\n\n

The where tensor has two True values, so the indices tensor will have two entries. The where tensor has rank two, so each entry will have two indices:

\n\n
[[0, 0],\n [1, 1]]\n
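If the nonzero values themselves are needed as well, the indices (or the boolean mask directly) can be combined with tf.gather_nd or tf.boolean_mask — a small sketch:

```python
import tensorflow as tf

A = tf.constant([[3.0, 0.0],
                 [0.0, 4.0]])
where = tf.not_equal(A, 0.0)
indices = tf.where(where)           # [[0, 0], [1, 1]]
values = tf.gather_nd(A, indices)   # [3.0, 4.0]
# Equivalent shortcut for the values: tf.boolean_mask(A, where)
print(indices.numpy().tolist(), values.numpy().tolist())
```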
\n", "system": "" }, { "instruction": "TensorFlow - Read all examples from a TFRecords at once?", "input": "", "output": "

Just for clarity, I have a few thousand images in a single .tfrecords file, they're 720 by 720 rgb png files. The labels are one of 0,1,2,3.

\n\n

I also tried using the parse_example and couldn't make it work but this solution works with the parse_single_example.

\n\n

The downside is that right now I have to know how many items are in each .tfrecords file, which is kind of a bummer. If I find a better way, I'll update the answer. Also, be careful about going out of bounds of the number of records in the .tfrecords file: it will start over at the first record if you loop past the last one.

\n\n

The trick was to have the queue runner use a coordinator.

\n\n

I left some code in here to save the images as they're being read in so that you can verify the image is correct.

\n\n
from PIL import Image\nimport numpy as np\nimport tensorflow as tf\n\ndef read_and_decode(filename_queue):\n reader = tf.TFRecordReader()\n _, serialized_example = reader.read(filename_queue)\n features = tf.parse_single_example(\n  serialized_example,\n  # Defaults are not specified since both keys are required.\n  features={\n      'image_raw': tf.FixedLenFeature([], tf.string),\n      'label': tf.FixedLenFeature([], tf.int64),\n      'height': tf.FixedLenFeature([], tf.int64),\n      'width': tf.FixedLenFeature([], tf.int64),\n      'depth': tf.FixedLenFeature([], tf.int64)\n  })\n image = tf.decode_raw(features['image_raw'], tf.uint8)\n label = tf.cast(features['label'], tf.int32)\n height = tf.cast(features['height'], tf.int32)\n width = tf.cast(features['width'], tf.int32)\n depth = tf.cast(features['depth'], tf.int32)\n return image, label, height, width, depth\n\n\ndef get_all_records(FILE):\n with tf.Session() as sess:\n   filename_queue = tf.train.string_input_producer([ FILE ])\n   image, label, height, width, depth = read_and_decode(filename_queue)\n   image = tf.reshape(image, tf.pack([height, width, 3]))\n   image.set_shape([720,720,3])\n   init_op = tf.initialize_all_variables()\n   sess.run(init_op)\n   coord = tf.train.Coordinator()\n   threads = tf.train.start_queue_runners(coord=coord)\n   for i in range(2053):\n     example, l = sess.run([image, label])\n     img = Image.fromarray(example, 'RGB')\n     img.save( \"output/\" + str(i) + '-train.png')\n\n     print (example,l)\n   coord.request_stop()\n   coord.join(threads)\n\nget_all_records('/path/to/train-0.tfrecords')\n
\n", "system": "" }, { "instruction": "Working with multiple graphs in TensorFlow", "input": "", "output": "

Your product is a global variable, and you've set it to point to \"g2/MatMul\".

\n\n

In particular

\n\n

Try

\n\n
print product\n
\n\n

and you'll see

\n\n
Tensor(\"g2/MatMul:0\", shape=(1, 1), dtype=float32)\n
\n\n

So the system takes "g2/MatMul:0" since that's the Tensor's name, and tries to find it in the graph g1 since that's the graph you set for the session. Incidentally, you can see all nodes in the graph with print([n.name for n in g1.as_graph_def().node]).

\n\n

Generally, using more than one graph is rarely useful. You can't merge them and can't pass tensors between them. I'd recommend just doing

\n\n
tf.reset_default_graph()\na = tf.constant(2)\nsess = tf.InteractiveSession()\n....\n
\n\n

This way you'll have one default graph and one default session and you can omit specifying graph or session in most cases. If you ever need to refer to them explicitly, you can get them from tf.get_default_graph() or tf.get_default_session()

\n", "system": "" }, { "instruction": "Clarification on tf.Tensor.set_shape()", "input": "", "output": "

As far as I know (and I wrote that code), there isn't a bug in Tensor.set_shape(). I think the misunderstanding stems from the confusing name of that method.

\n\n

To elaborate on the FAQ entry you quoted, Tensor.set_shape() is a pure-Python function that improves the shape information for a given tf.Tensor object. By \"improves\", I mean \"makes more specific\".

\n\n

Therefore, when you have a Tensor object t with shape (?,), that is a one-dimensional tensor of unknown length. You can call t.set_shape((1028178,)), and then t will have shape (1028178,) when you call t.get_shape(). This doesn't affect the underlying storage, or indeed anything on the backend: it merely means that subsequent shape inference using t can rely on the assertion that it is a vector of length 1028178.

\n\n

If t has shape (?,), a call to t.set_shape((478, 717, 3)) will fail, because TensorFlow already knows that t is a vector, so it cannot have shape (478, 717, 3). If you want to make a new Tensor with that shape from the contents of t, you can use reshaped_t = tf.reshape(t, (478, 717, 3)). This creates a new tf.Tensor object in Python; the actual implementation of tf.reshape() does this using a shallow copy of the tensor buffer, so it is inexpensive in practice.

\n\n

One analogy is that Tensor.set_shape() is like a run-time cast in an object-oriented language like Java. For example, if you have a pointer to an Object but know that, in fact, it is a String, you might do the cast (String) obj in order to pass obj to a method that expects a String argument. However, if you have a String s and try to cast it to a java.util.Vector, the compiler will give you an error, because these two types are unrelated.

\n", "system": "" }, { "instruction": "Multilabel Text Classification using TensorFlow", "input": "", "output": "

Change the relu of the output layer to sigmoid, and modify the cross-entropy loss to the explicit mathematical formula of the sigmoid cross-entropy loss (the explicit loss was working in my case/version of tensorflow).

\n\n
import tensorflow as tf\n\n# hidden Layer\nclass HiddenLayer(object):\n    def __init__(self, input, n_in, n_out):\n        self.input = input\n\n        w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05))\n        b_h = tf.Variable(tf.zeros([n_out]))\n\n        self.w = w_h\n        self.b = b_h\n        self.params = [self.w, self.b]\n\n    def output(self):\n        linarg = tf.matmul(self.input, self.w) + self.b\n        self.output = tf.nn.relu(linarg)\n\n        return self.output\n\n# output Layer\nclass OutputLayer(object):\n    def __init__(self, input, n_in, n_out):\n        self.input = input\n\n        w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05))\n        b_o = tf.Variable(tf.zeros([n_out]))\n\n        self.w = w_o\n        self.b = b_o\n        self.params = [self.w, self.b]\n\n    def output(self):\n        linarg = tf.matmul(self.input, self.w) + self.b\n        #changed relu to sigmoid\n        self.output = tf.nn.sigmoid(linarg)\n\n        return self.output\n\n# model\ndef model():\n    h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000)\n    o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000)\n\n    # loss function\n    out = o_layer.output()\n    # modified cross entropy to explicit mathematical formula of sigmoid cross entropy loss\n    cross_entropy = -tf.reduce_sum( (  (y_*tf.log(out + 1e-9)) + ((1-y_) * tf.log(1 - out + 1e-9)) )  , name='xentropy' )    \n\n    # regularization\n    l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w))\n    lambda_2 = 0.01\n\n    # compute loss\n    loss = cross_entropy + lambda_2 * l2\n\n    # compute accuracy for single label classification task\n    correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y_, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_pred, \"float\"))\n\n    return loss, accuracy\n
\n", "system": "" }, { "instruction": "ImportError: cannot import name 'to_categorical' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py)", "input": "", "output": "

use this

\n
from tensorflow.keras.utils import to_categorical\n
\n

instead of

\n

from keras.utils import to_categorical
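A quick check that the tf.keras import works as expected (3 classes assumed here):

```python
from tensorflow.keras.utils import to_categorical

# One-hot encode integer class labels.
one_hot = to_categorical([0, 1, 2, 1], num_classes=3)
print(one_hot.shape)  # (4, 3)
print(one_hot[3])     # [0. 1. 0.]
```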

\n", "system": "" }, { "instruction": "Extract target from Tensorflow PrefetchDataset", "input": "", "output": "

You can convert it to a list with list(ds) and then recompile it as a normal Dataset with tf.data.Dataset.from_tensor_slices(list(ds)). From there your nightmare begins again but at least it's a nightmare that other people have had before.
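A related sketch: for a simple (features, label) dataset like the one below (the toy data is made up), eagerly iterating the PrefetchDataset also recovers the targets directly, since each element is still an (x, y) pair:

```python
import tensorflow as tf

# A hypothetical (features, label) dataset; prefetch() wraps it in a PrefetchDataset.
ds = tf.data.Dataset.from_tensor_slices(
    ([[1.0], [2.0], [3.0]], [0, 1, 0])).prefetch(1)

# Eager iteration yields (x, y) tuples, so the targets can be pulled out.
labels = [int(y.numpy()) for _, y in ds]
print(labels)  # [0, 1, 0]
```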

\n

Note that for more complex datasets (e.g. nested dictionaries) you will need more preprocessing after calling list(ds), but this should work for the example you asked about.

\n

This is far from a satisfying answer but unfortunately the class is entirely undocumented and none of the standard Dataset tricks work.

\n", "system": "" }, { "instruction": "Should I use @tf.function for all functions?", "input": "", "output": "

TLDR: It depends on your function and whether you are in production or development. Don't use tf.function if you want to be able to debug your function easily, or if it falls under the limitations of AutoGraph or tf.v1 code compatibility.\nI would highly recommend watching the Inside TensorFlow talks about AutoGraph and Functions, not Sessions.

\n\n

In the following I'll break down the reasons, which are all taken from information made available online by Google.

\n\n

In general, the tf.function decorator causes a function to be compiled as a callable that executes a TensorFlow graph. This entails:

\n\n\n\n

There is detailed information available on the design ideas behind this.
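One way to see the "compiled as a graph" behavior directly: Python-level side effects inside a tf.function run only while the function is being traced, not on every call (a toy sketch):

```python
import tensorflow as tf

traces = []

@tf.function
def square(x):
    traces.append(1)  # Python side effect: runs only during tracing
    return x * x

print(square(tf.constant(2.0)).numpy())  # 4.0 (first call traces the function)
print(square(tf.constant(3.0)).numpy())  # 9.0 (same input signature: graph reused)
print(len(traces))                       # 1
```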

\n\n

Benefits of decorating a function with tf.function

\n\n

General benefits

\n\n\n\n

For functions with Python code / Using AutoGraph via tf.function decoration

\n\n

If you want to use AutoGraph, using tf.function is highly recommended over calling AutoGraph directly.\nReasons for this include: Automatic control dependencies, it is required for some APIs, more caching, and exception helpers (Source).

\n\n

Drawbacks of decorating a function with tf.function

\n\n

General drawbacks

\n\n\n\n

For functions with Python code / Using AutoGraph via tf.function decoration

\n\n\n\n

Detailed information on AutoGraph limitations is available.

\n\n

For functions with tf.v1 code

\n\n\n\n

For functions with tf.v2 code

\n\n\n\n

Examples of limitations

\n\n

Creating variables more than once

\n\n

It is not allowed to create variables more than once, such as v in the following example:

\n\n
@tf.function\ndef f(x):\n    v = tf.Variable(1)\n    return tf.add(x, v)\n\nf(tf.constant(2))\n\n# => ValueError: tf.function-decorated function tried to create variables on non-first call.\n
\n\n

In the following code, this is mitigated by making sure that self.v is only created once:

\n\n
class C(object):\n    def __init__(self):\n        self.v = None\n    @tf.function\n    def f(self, x):\n        if self.v is None:\n            self.v = tf.Variable(1)\n        return tf.add(x, self.v)\n\nc = C()\nprint(c.f(tf.constant(2)))\n\n# => tf.Tensor(3, shape=(), dtype=int32)\n
\n\n

Hidden side effects not captured by AutoGraph

\n\n

Changes such as to self.a in this example can't be hidden, which leads to an error since cross-function analysis is not done (yet) (Source):

\n\n
class C(object):\n    def change_state(self):\n        self.a += 1\n\n    @tf.function\n    def f(self):\n        self.a = tf.constant(0)\n        if tf.constant(True):\n            self.change_state() # Mutation of self.a is hidden\n        tf.print(self.a)\n\nx = C()\nx.f()\n\n# => InaccessibleTensorError: The tensor 'Tensor(\"add:0\", shape=(), dtype=int32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=cond_true_5, id=5477800528); accessed from: FuncGraph(name=f, id=5476093776).\n
\n\n

Changes in plain sight are no problem:

\n\n
class C(object):\n    @tf.function\n    def f(self):\n        self.a = tf.constant(0)\n        if tf.constant(True):\n            self.a += 1 # Mutation of self.a is in plain sight\n        tf.print(self.a)\n\nx = C()\nx.f()\n\n# => 1\n
\n\n

Example of limitation due to TF control flow

\n\n

This if statement leads to an error because the value for else needs to be defined for TF control flow:

\n\n
@tf.function\ndef f(a, b):\n    if tf.greater(a, b):\n        return tf.constant(1)\n\n# If a <= b would return None\nx = f(tf.constant(3), tf.constant(2))   \n\n# => ValueError: A value must also be returned from the else branch. If a value is returned from one branch of a conditional a value must be returned from all branches.\n
\n", "system": "" }, { "instruction": "How to use Model.fit which supports generators (after fit_generator deprecation)", "input": "", "output": "

Model.fit_generator is deprecated starting from tensorflow 2.1.0, which is currently in rc1.\nYou can find the documentation for tf-2.1.0-rc1 here: https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/keras/Model#fit

\n\n

As you can see, the first argument of Model.fit can take a generator, so just pass it your generator.
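A minimal sketch (the model and data here are made up) of passing a Python generator straight to Model.fit, the way fit_generator used to consume it:

```python
import numpy as np
import tensorflow as tf

def batches():
    # Infinite generator of (x, y) batches.
    while True:
        x = np.random.rand(8, 4).astype('float32')
        y = np.random.randint(0, 2, size=(8, 1)).astype('float32')
        yield x, y

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))])
model.compile(optimizer='adam', loss='binary_crossentropy')

# steps_per_epoch is required because the generator never terminates.
history = model.fit(batches(), steps_per_epoch=5, epochs=1, verbose=0)
print(sorted(history.history))  # ['loss']
```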

\n", "system": "" }, { "instruction": "How to train a model in nodejs (tensorflow.js)?", "input": "", "output": "

First of all, the images need to be converted to tensors. One approach is to create a tensor containing all the features (and, respectively, a tensor containing all the labels). This is the way to go only if the dataset contains few images.

\n
  const imageBuffer = await fs.readFile(feature_file);\n  tensorFeature = tfnode.node.decodeImage(imageBuffer) // create a tensor for the image\n\n  // create an array of all the features\n  // by iterating over all the images\n  tensorFeatures = tf.stack([tensorFeature, tensorFeature2, tensorFeature3])\n
\n

The labels would be an array indicating the type of each image

\n
 labelArray = [0, 1, 2] // maybe 0 for dog, 1 for cat and 2 for birds\n
\n

One now needs to create a one-hot encoding of the labels

\n
 tensorLabels = tf.oneHot(tf.tensor1d(labelArray, 'int32'), 3);\n
\n

Once the tensors are ready, one needs to create the model for training. Here is a simple model.

\n
const model = tf.sequential();\nmodel.add(tf.layers.conv2d({\n  inputShape: [height, width, numberOfChannels], // numberOfChannels = 3 for colorful images and one otherwise\n  filters: 32,\n  kernelSize: 3,\n  activation: 'relu',\n}));\nmodel.add(tf.layers.flatten());\nmodel.add(tf.layers.dense({units: 3, activation: 'softmax'}));\n
\n

Then the model can be trained

\n
model.fit(tensorFeatures, tensorLabels)\n
\n

If the dataset contains a lot of images, one would need to create a tfDataset instead. This answer discusses why.

\n
const genFeatureTensor = async image => {\n      const imageBuffer = await fs.readFile(image);\n      return tfnode.node.decodeImage(imageBuffer)\n}\n\nconst labelArray = indice => Array.from({length: numberOfClasses}, (_, k) => k === indice ? 1 : 0)\n\nasync function* dataGenerator() {\n  const numElements = numberOfImages;\n  let index = 0;\n  while (index < numElements) {\n    const feature = await genFeatureTensor(imagePath);\n    const label = tf.tensor1d(labelArray(classImageIndex))\n    index++;\n    yield {xs: feature, ys: label};\n  }\n}\n\nconst ds = tf.data.generator(dataGenerator).batch(1) // specify an appropriate batchsize;\n
\n

And use model.fitDataset(ds) to train the model

\n
\n

The above is for training in nodejs. To do such processing in the browser, genFeatureTensor can be written as follows:

\n
function loadImage(url){\n  return new Promise((resolve, reject) => {\n    const im = new Image()\n        im.crossOrigin = 'anonymous'\n        im.src = url\n        im.onload = () => {\n          resolve(im)\n        }\n        im.onerror = reject\n   })\n}\n\ngenFeatureTensor = async image => {\n  const img = await loadImage(image);\n  return tf.browser.fromPixels(img);\n}\n
\n

One word of caution is that doing heavy processing might block the main thread in the browser. This is where web workers come into play.

\n", "system": "" }, { "instruction": "ImportError: Could not find 'cudart64_100.dll", "input": "", "output": "

The simplest fix is to install the latest \u2018NVIDIA GPU Computing Toolkit\u2019, because without it you'll be missing the 'cudart64_100.dll' library.\nThe only issue is that the latest copy of CUDA has this particular library upgraded to 'cudart64_101.dll', while the latest TensorFlow still requires the older 'cudart64_100.dll'.\nAnyway, one way to deal with this issue is to install the latest CUDA alongside the CUDA Toolkit 10.0 (from September 2018) and then copy the 'cudart64_100.dll' library from the old install into the new one.

\n\n

Or just visit my site where I linked the 'cudart64_100.dll' library downloaded from the CUDA Toolkit 10.0 (Sept 2018), to make it easier to copy it into the latest CUDA directory.

\n\n

Here are some screenshots to illustrate the process: https://www.joe0.com/2019/10/19/how-resolve-tensorflow-2-0-error-could-not-load-dynamic-library-cudart64_100-dll-dlerror-cudart64_100-dll-not-found/

\n", "system": "" }, { "instruction": "ValueError: Duplicate plugins for name projector", "input": "", "output": "

If you have two versions of tensorboard installed in your system, you need to uninstall one of them.

\n

I was stuck on this for hours but I finally fixed it using:

\n

The following worked like a charm (see https://github.com/pytorch/pytorch/issues/22676):

\n
pip uninstall tb-nightly tensorboardX tensorboard\npip install tensorboard\n
\n", "system": "" }, { "instruction": "How to quantize all nodes except a particular one?", "input": "", "output": "

EDIT: the previous answer referred to TensorFlow Lite code. I updated it to refer to TensorFlow.

\n

Looking at the implementation of TensorFlow's quantize_weights, these are the cases in which weights don't get quantized:

\n
    \n
  - a tensor that is not of type float
  - a tensor that has fewer than 1024 weights (or another number specified by the parameter minimum_size)
\n

If you are able to modify nodes in the graph so that they are excluded by one of the above rules, then quantize, then revert the nodes to the pre-quantized state, you might be able to do this.

\n", "system": "" }, { "instruction": "how to store numpy arrays as tfrecord?", "input": "", "output": "

The function _float_feature described in the TensorFlow guide expects a scalar (either float32 or float64) as input.

\n\n
def _float_feature(value):\n  \"\"\"Returns a float_list from a float / double.\"\"\"\n  return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))\n
\n\n

As you can see the inputted scalar is written into a list (value=[value]) which is subsequently given to tf.train.FloatList as input. tf.train.FloatList expects an iterator that outputs a float in each iteration (as the list does).

\n\n

If your feature is not a scalar but a vector, _float_feature can be rewritten to pass the iterable directly to tf.train.FloatList (instead of putting it into a list first).

\n\n
def _float_array_feature(value):\n  return tf.train.Feature(float_list=tf.train.FloatList(value=value))\n
\n\n

However, if your feature has two or more dimensions, this solution does not work anymore. As @mmry described in his answer, in this case flattening your feature or splitting it into several one-dimensional features would be a solution. The disadvantage of these two approaches is that the information about the actual shape of the feature is lost if no further effort is invested.

\n\n

Another possibility to write an example message for a higher dimensional array is to convert the array into a byte string and then use the _bytes_feature function described in the Tensorflow-Guide to write the example message for it. The example message is then serialized and written into a TFRecord file.

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef _bytes_feature(value):\n    \"\"\"Returns a bytes_list from a string / byte.\"\"\"\n    if isinstance(value, type(tf.constant(0))): # if value is a tensor\n        value = value.numpy() # get value of tensor\n    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n\ndef serialize_array(array):\n  array = tf.io.serialize_tensor(array)\n  return array\n\n\n#----------------------------------------------------------------------------------\n# Create example data\narray_blueprint = np.arange(4, dtype='float64').reshape(2,2)\narrays = [array_blueprint+1, array_blueprint+2, array_blueprint+3]\n\n#----------------------------------------------------------------------------------\n# Write TFRecord file\nfile_path = 'data.tfrecords'\nwith tf.io.TFRecordWriter(file_path) as writer:\n  for array in arrays:\n    serialized_array = serialize_array(array)\n    feature = {'b_feature': _bytes_feature(serialized_array)}\n    example_message = tf.train.Example(features=tf.train.Features(feature=feature))\n    writer.write(example_message.SerializeToString())\n
\n\n

The serialized example messages stored in the TFRecord file can be accessed via tf.data.TFRecordDataset. After the example messages have been parsed, the original array needs to be restored from the byte string it was converted to. This is possible via tf.io.parse_tensor.

\n\n
# Read TFRecord file\ndef _parse_tfr_element(element):\n  parse_dic = {\n    'b_feature': tf.io.FixedLenFeature([], tf.string), # Note that it is tf.string, not tf.float32\n    }\n  example_message = tf.io.parse_single_example(element, parse_dic)\n\n  b_feature = example_message['b_feature'] # get byte string\n  feature = tf.io.parse_tensor(b_feature, out_type=tf.float64) # restore 2D array from byte string\n  return feature\n\n\ntfr_dataset = tf.data.TFRecordDataset('data.tfrecords') \nfor serialized_instance in tfr_dataset:\n  print(serialized_instance) # print serialized example messages\n\ndataset = tfr_dataset.map(_parse_tfr_element)\nfor instance in dataset:\n  print()\n  print(instance) # print parsed example messages with restored arrays\n
\n", "system": "" }, { "instruction": "Create keras callback to save model predictions and targets for each batch during training", "input": "", "output": "\n\n

NOTE: this answer is outdated and only works with TF1. Check @bers's answer for a solution tested on TF2.

\n\n
\n\n

After model compilation, the placeholder tensor for y_true is in model.targets and y_pred is in model.outputs.

\n\n

To save the values of these placeholders at each batch, you can:

\n\n
    \n
  1. First copy the values of these tensors into variables.
  2. \n
  3. Evaluate these variables in on_batch_end, and store the resulting arrays.
  4. \n
\n\n

Now step 1 is a bit involved, because you'll have to add a tf.assign op to the training function model.train_function. Using the current Keras API, this can be done by providing a fetches argument to K.function() when the training function is constructed.

\n\n

In model._make_train_function(), there's a line:

\n\n
self.train_function = K.function(inputs,\n                                 [self.total_loss] + self.metrics_tensors,\n                                 updates=updates,\n                                 name='train_function',\n                                 **self._function_kwargs)\n
\n\n

The fetches argument containing the tf.assign ops can be provided via model._function_kwargs (only works after Keras 2.1.0).

\n\n

As an example:

\n\n
from keras.layers import Dense\nfrom keras.models import Sequential\nfrom keras.callbacks import Callback\nfrom keras import backend as K\nimport tensorflow as tf\nimport numpy as np\n\nclass CollectOutputAndTarget(Callback):\n    def __init__(self):\n        super(CollectOutputAndTarget, self).__init__()\n        self.targets = []  # collect y_true batches\n        self.outputs = []  # collect y_pred batches\n\n        # the shape of these 2 variables will change according to batch shape\n        # to handle the \"last batch\", specify `validate_shape=False`\n        self.var_y_true = tf.Variable(0., validate_shape=False)\n        self.var_y_pred = tf.Variable(0., validate_shape=False)\n\n    def on_batch_end(self, batch, logs=None):\n        # evaluate the variables and save them into lists\n        self.targets.append(K.eval(self.var_y_true))\n        self.outputs.append(K.eval(self.var_y_pred))\n\n# build a simple model\n# have to compile first for model.targets and model.outputs to be prepared\nmodel = Sequential([Dense(5, input_shape=(10,))])\nmodel.compile(loss='mse', optimizer='adam')\n\n# initialize the variables and the `tf.assign` ops\ncbk = CollectOutputAndTarget()\nfetches = [tf.assign(cbk.var_y_true, model.targets[0], validate_shape=False),\n           tf.assign(cbk.var_y_pred, model.outputs[0], validate_shape=False)]\nmodel._function_kwargs = {'fetches': fetches}  # use `model._function_kwargs` if using `Model` instead of `Sequential`\n\n# fit the model and check results\nX = np.random.rand(10, 10)\nY = np.random.rand(10, 5)\nmodel.fit(X, Y, batch_size=8, callbacks=[cbk])\n
\n\n

Unless the number of samples is divisible by the batch size, the final batch will have a different size than the other batches. So K.variable() and K.update() can't be used in this case. You'll have to use tf.Variable(..., validate_shape=False) and tf.assign(..., validate_shape=False) instead.

\n\n
\n\n

To verify the correctness of the saved arrays, you can add one line in training.py to print out the shuffled index array:

\n\n
if shuffle == 'batch':\n    index_array = _batch_shuffle(index_array, batch_size)\nelif shuffle:\n    np.random.shuffle(index_array)\n\nprint('Index array:', repr(index_array))  # Add this line\n\nbatches = _make_batches(num_train_samples, batch_size)\n
\n\n

The shuffled index array should be printed out during fitting:

\n\n
\nEpoch 1/1\nIndex array: array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2])\n10/10 [==============================] - 0s 23ms/step - loss: 0.5670\n
\n\n

And you can check if cbk.targets is the same as Y[index_array]:

\n\n
index_array = np.array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2])\nprint(Y[index_array])\n[[ 0.75325592  0.64857277  0.1926653   0.7642865   0.38901153]\n [ 0.77567689  0.13573623  0.4902501   0.42897559  0.55825652]\n [ 0.33760938  0.68195038  0.12303088  0.83509441  0.20991668]\n [ 0.98367778  0.61325065  0.28973401  0.28734073  0.93399794]\n [ 0.26097574  0.88219054  0.87951941  0.64887846  0.41996446]\n [ 0.97794604  0.91307569  0.93816428  0.2125808   0.94381495]\n [ 0.74813435  0.08036688  0.38094272  0.83178364  0.16713736]\n [ 0.52609421  0.39218962  0.21022047  0.58569125  0.08012982]\n [ 0.61276627  0.20679494  0.24124858  0.01262245  0.0994412 ]\n [ 0.6026137   0.25620512  0.7398164   0.52558182  0.09955769]]\n\nprint(cbk.targets)\n[array([[ 0.7532559 ,  0.64857274,  0.19266529,  0.76428652,  0.38901153],\n        [ 0.77567691,  0.13573623,  0.49025011,  0.42897558,  0.55825651],\n        [ 0.33760938,  0.68195039,  0.12303089,  0.83509439,  0.20991668],\n        [ 0.9836778 ,  0.61325067,  0.28973401,  0.28734073,  0.93399793],\n        [ 0.26097575,  0.88219053,  0.8795194 ,  0.64887846,  0.41996446],\n        [ 0.97794604,  0.91307569,  0.93816429,  0.2125808 ,  0.94381493],\n        [ 0.74813437,  0.08036689,  0.38094273,  0.83178365,  0.16713737],\n        [ 0.5260942 ,  0.39218962,  0.21022047,  0.58569127,  0.08012982]], dtype=float32),\n array([[ 0.61276627,  0.20679495,  0.24124858,  0.01262245,  0.0994412 ],\n        [ 0.60261369,  0.25620511,  0.73981643,  0.52558184,  0.09955769]], dtype=float32)]\n
\n\n

As you can see, there are two batches in cbk.targets (one \"full batch\" of size 8 and the final batch of size 2), and the row order is the same as Y[index_array].

\n", "system": "" }, { "instruction": "What is the difference of static Computational Graphs in tensorflow and dynamic Computational Graphs in Pytorch?", "input": "", "output": "

Both frameworks operate on tensors and view any model as a directed acyclic graph (DAG), but they differ drastically on how you can define them.

\n\n

TensorFlow follows the \u2018data as code and code is data\u2019 idiom. In TensorFlow you define the graph statically before a model can run. All communication with the outer world is performed via the tf.Session object and tf.placeholder tensors, which will be substituted by external data at runtime.

\n\n

In PyTorch things are far more imperative and dynamic: you can define, change and execute nodes as you go, with no special session interfaces or placeholders. Overall, the framework is more tightly integrated with the Python language and feels more native most of the time. When you write in TensorFlow, you sometimes feel that your model is behind a brick wall with several tiny holes to communicate through. Anyway, this is still more or less a matter of taste.

\n\n

However, those approaches differ not only from a software engineering perspective: there are several dynamic neural network architectures that benefit from the dynamic approach. Recall RNNs: with static graphs, the input sequence length stays constant. This means that if you develop a sentiment analysis model for English sentences, you must fix the sentence length to some maximum value and pad all shorter sequences with zeros. Not too convenient, huh. And you will run into more problems in the domain of recursive RNNs and tree-RNNs. Currently TensorFlow has limited support for dynamic inputs via TensorFlow Fold; PyTorch has it by default.

\n\n
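The padding step mentioned above can be sketched in plain Python (the pad helper is just for illustration):

```python
def pad(seq, max_len, pad_value=0):
    # Right-pad a sequence to a fixed length, as static-graph RNNs require.
    return seq + [pad_value] * (max_len - len(seq))

sentences = [[5, 3], [7, 1, 2, 9], [4]]
max_len = max(len(s) for s in sentences)
padded = [pad(s, max_len) for s in sentences]
print(padded)  # [[5, 3, 0, 0], [7, 1, 2, 9], [4, 0, 0, 0]]
```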

Reference:

\n\n

https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b

\n\n

https://www.reddit.com/r/MachineLearning/comments/5w3q74/d_so_pytorch_vs_tensorflow_whats_the_verdict_on/

\n", "system": "" }, { "instruction": "Difference between `apply_gradients` and `minimize` of optimizer in tensorflow", "input": "", "output": "

You can easily see from the link https://www.tensorflow.org/get_started/get_started\n(tf.train API part) that they actually do the same job.\nThe difference is that if you use the separated functions (tf.gradients, tf.apply_gradients), you can apply other mechanisms between them, such as gradient clipping.

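As a rough sketch of why the split is useful, here is a plain-Python gradient-descent step with clipping applied between the compute and apply stages (the helper names are invented for illustration; this is not the TensorFlow API):

```python
def compute_gradient(w, x, y):
    # gradient of the squared error 0.5 * (w*x - y)**2 w.r.t. w
    return (w * x - y) * x

def apply_gradient(w, grad, lr=0.1):
    return w - lr * grad

w = 0.0
grad = compute_gradient(w, x=3.0, y=6.0)   # -18.0
grad = max(-1.0, min(1.0, grad))           # gradient clipping in between
w = apply_gradient(w, grad)
print(w)  # 0.1
```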
\n", "system": "" }, { "instruction": "How to turn off dropout for testing in Tensorflow?", "input": "", "output": "

The easiest way is to change the keep_prob parameter using a placeholder_with_default:

\n\n
prob = tf.placeholder_with_default(1.0, shape=())\nlayer = tf.nn.dropout(layer, prob)\n
\n\n

in this way when you train you can set the parameter like this:

\n\n
sess.run(train_step, feed_dict={prob: 0.5})\n
\n\n

and when you evaluate the default value of 1.0 is used.

\n", "system": "" }, { "instruction": "Are tf.layers.dense() and tf.contrib.layers.fully_connected() interchangeable?", "input": "", "output": "

They are essentially the same, the latter calling the former.

\n\n

However tf.contrib.layers.fully_connected adds a few functionalities on top of dense, in particular the possibility to pass a normalization and an activation in the parameters, \u00e0 la Keras. As noted by @wordforthewise, mind that the latter defaults to tf.nn.relu.

\n\n

More generally, the TF API proposes (and mixes somewhat confusingly) low- and high-level APIs; more on that here.

\n", "system": "" }, { "instruction": "What do the options in ConfigProto like allow_soft_placement and log_device_placement mean?", "input": "", "output": "

If you look at the API of ConfigProto, on line 278, you will see this:

\n\n
  // Whether soft placement is allowed. If allow_soft_placement is true,\n  // an op will be placed on CPU if\n  //   1. there's no GPU implementation for the OP\n  // or\n  //   2. no GPU devices are known or registered\n  // or\n  //   3. need to co-locate with reftype input(s) which are from CPU.\n  bool allow_soft_placement = 7;\n
\n\n

What this really means is that if you do something like this without allow_soft_placement=True, TensorFlow will throw an error.

\n\n
with tf.device('/gpu:0'):\n    # some op that doesn't have a GPU implementation\n
\n\n

Right below it, you will see on line 281:

\n\n
  // Whether device placements should be logged.\n  bool log_device_placement = 8;\n
\n\n

When log_device_placement=True, you will get a verbose output of something like this:

\n\n
2017-07-03 01:13:59.466748: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder_1: (Placeholder)/job:localhost/replica:0/task:0/cpu:0\nPlaceholder: (Placeholder): /job:localhost/replica:0/task:0/cpu:0\n2017-07-03 01:13:59.466765: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder: (Placeholder)/job:localhost/replica:0/task:0/cpu:0\nVariable/initial_value: (Const): /job:localhost/replica:0/task:0/cpu:0\n2017-07-03 01:13:59.466783: I tensorflow/core/common_runtime/simple_placer.cc:841] Variable/initial_value: (Const)/job:localhost/replica:0/task:0/cpu:0\n
\n\n

You can see where each operation is mapped to. For this case, they are all mapped to /cpu:0, but if you're in a distributed setting, there would be many more devices.

\n", "system": "" }, { "instruction": "Error: Failed to load the native TensorFlow runtime", "input": "", "output": "

My code worked perfectly after executing this line:

\n
pip install tensorflow --upgrade --force-reinstall\n
\n", "system": "" }, { "instruction": "Output from TensorFlow `py_func` has unknown rank/shape", "input": "", "output": "

Since py_func can execute arbitrary Python code and output anything, TensorFlow can't figure out the shape (that would require analyzing the Python code of the function body). You can instead set the shape manually:

\n\n
y.set_shape(inp.get_shape())\n
\n", "system": "" }, { "instruction": "What is an epoch in TensorFlow?", "input": "", "output": "

An epoch, in machine learning, is one complete pass of the learning algorithm over the entire training set.

\n\n

The MNIST training set is composed of 55,000 samples.\nOnce the algorithm has processed all those 55,000 samples, one epoch has passed.

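For example, with mini-batch training the number of optimization steps per epoch follows directly (the batch size of 100 here is just an assumption for illustration):

```python
import math

num_samples = 55000   # MNIST training set size
batch_size = 100      # assumed batch size
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # 550 steps make up one epoch
```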
\n", "system": "" }, { "instruction": "Tensorflow: How to convert scalar tensor to scalar variable in python?", "input": "", "output": "

In Tensorflow 2.0+, it's as simple as:

\n\n
my_tensor.numpy()\n
\n", "system": "" }, { "instruction": "How to fix MatMul Op has type float64 that does not match type float32 TypeError?", "input": "", "output": "

The tf.matmul() op does not perform automatic type conversions, so both of its inputs must have the same element type. The error message you are seeing indicates that you have a call to tf.matmul() where the first argument has type tf.float32, and the second argument has type tf.float64. You must convert one of the inputs to match the other, for example using tf.cast(x, tf.float32).

\n\n

Looking at your code, I don't see anywhere that a tf.float64 tensor is explicitly created (the default dtype for floating-point values in the TensorFlow Python API—e.g. for tf.constant(37.0)—is tf.float32). I would guess that the errors are caused by the np.loadtxt(filename) calls, which might be loading an np.float64 array. You can explicitly change them to load np.float32 arrays (which are converted to tf.float32 tensors) as follows:

\n\n
initial = np.loadtxt(filename).astype(np.float32)\n
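To see why np.loadtxt is the likely culprit, note its default dtype; this small NumPy-only demonstration (with inline data standing in for your file) shows the effect of the cast:

```python
import io
import numpy as np

# np.loadtxt returns float64 by default, which becomes a tf.float64
# tensor when fed to TensorFlow and triggers the MatMul dtype mismatch.
text = io.StringIO("1.0 2.0\n3.0 4.0")
initial = np.loadtxt(text)
print(initial.dtype)  # float64

text.seek(0)
initial32 = np.loadtxt(text).astype(np.float32)
print(initial32.dtype)  # float32
```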
\n", "system": "" }, { "instruction": "Tensorflow Different ways to Export and Run graph in C++", "input": "", "output": "

Here's my solution utilizing the V2 checkpoints introduced in TF 0.12.

\n\n

There's no need to convert all variables to constants or freeze the graph.

\n\n

Just for clarity, a V2 checkpoint looks like this in my directory models:

\n\n
checkpoint  # some information on the name of the files in the checkpoint\nmy-model.data-00000-of-00001  # the saved weights\nmy-model.index  # probably definition of data layout in the previous file\nmy-model.meta  # protobuf of the graph (nodes and topology info)\n
\n\n

Python part (saving)

\n\n
with tf.Session() as sess:\n    tf.train.Saver(tf.trainable_variables()).save(sess, 'models/my-model')\n
\n\n

If you create the Saver with tf.trainable_variables(), you can save yourself some headache and storage space. But maybe some more complicated models need all data to be saved; in that case remove this argument to Saver, and just make sure you're creating the Saver after your graph is created. It is also very wise to give all variables/layers unique names, otherwise you can run into various problems.

\n\n

Python part (inference)

\n\n
with tf.Session() as sess:\n    saver = tf.train.import_meta_graph('models/my-model.meta')\n    saver.restore(sess, tf.train.latest_checkpoint('models/'))\n    outputTensors = sess.run(outputOps, feed_dict=feedDict)\n
\n\n

C++ part (inference)

\n\n

Note that checkpointPath isn't a path to any of the existing files, just their common prefix. If you mistakenly put there path to the .index file, TF won't tell you that was wrong, but it will die during inference due to uninitialized variables.

\n\n
#include <tensorflow/core/public/session.h>\n#include <tensorflow/core/protobuf/meta_graph.pb.h>\n\nusing namespace std;\nusing namespace tensorflow;\n\n...\n// set up your input paths\nconst string pathToGraph = \"models/my-model.meta\"\nconst string checkpointPath = \"models/my-model\";\n...\n\nauto session = NewSession(SessionOptions());\nif (session == nullptr) {\n    throw runtime_error(\"Could not create Tensorflow session.\");\n}\n\nStatus status;\n\n// Read in the protobuf graph we exported\nMetaGraphDef graph_def;\nstatus = ReadBinaryProto(Env::Default(), pathToGraph, &graph_def);\nif (!status.ok()) {\n    throw runtime_error(\"Error reading graph definition from \" + pathToGraph + \": \" + status.ToString());\n}\n\n// Add the graph to the session\nstatus = session->Create(graph_def.graph_def());\nif (!status.ok()) {\n    throw runtime_error(\"Error creating graph: \" + status.ToString());\n}\n\n// Read weights from the saved checkpoint\nTensor checkpointPathTensor(DT_STRING, TensorShape());\ncheckpointPathTensor.scalar<std::string>()() = checkpointPath;\nstatus = session->Run(\n        {{ graph_def.saver_def().filename_tensor_name(), checkpointPathTensor },},\n        {},\n        {graph_def.saver_def().restore_op_name()},\n        nullptr);\nif (!status.ok()) {\n    throw runtime_error(\"Error loading checkpoint from \" + checkpointPath + \": \" + status.ToString());\n}\n\n// and run the inference to your liking\nauto feedDict = ...\nauto outputOps = ...\nstd::vector<tensorflow::Tensor> outputTensors;\nstatus = session->Run(feedDict, outputOps, {}, &outputTensors);\n
\n", "system": "" }, { "instruction": "Tensor with unspecified dimension in tensorflow", "input": "", "output": "

As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using Tensor.get_shape(); and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor, or passing it to an operator like tf.shape. In many cases, the static and dynamic shapes are the same, but they can be different: the static shape can be partially defined, in order to allow the dynamic shape to vary from one step to the next.

\n\n

In your code, normal_dist has a partially-defined static shape, because w_shape is a computed value. (TensorFlow sometimes attempts to evaluate these computed values at graph construction time, but it gets stuck at tf.pack.) It infers the shape TensorShape([Dimension(None), Dimension(None)]), which means "a matrix with an unknown number of rows and columns," because it knows that w_shape is a vector of length 2, so the resulting normal_dist must be 2-dimensional.

\n\n

You have two options to deal with this. You can set the static shape as Ishamael suggests, but this requires you to know the shape at graph construction time. For example, the following may work:

\n\n
normal_dist.set_shape([input_data.get_shape()[1], labels.get_shape()[1]])\n
\n\n

Alternatively, you can pass validate_shape=False to the tf.Variable constructor. This allows you to create a variable with a partially-defined shape, but it limits the amount of static shape information that can be inferred later on in the graph.

\n", "system": "" }, { "instruction": "Printing the loss during TensorFlow training", "input": "", "output": "

You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows:

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define once, outside the loop\nfor i in range(100):\n    batch_xs, batch_ys = mnist.train.next_batch(100)\n    _, loss_val = sess.run([train_step, cross_entropy],\n                           feed_dict={x: batch_xs, y_: batch_ys})\n    print 'loss = %s' % loss_val\n
\n\n

The same approach can be used to print the current value of a variable. Let's say, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W, you could do the following:

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define once, outside the loop\nfor i in range(100):\n    batch_xs, batch_ys = mnist.train.next_batch(100)\n    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],\n                                  feed_dict={x: batch_xs, y_: batch_ys})\n    print 'loss = %s' % loss_val\n    print 'W = %s' % W_val\n
\n", "system": "" }, { "instruction": "TensorFlow libdevice not found. Why is it not found in the searched path?", "input": "", "output": "

The following worked for me. With error message:

\n
error: Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice\n
\n

Firstly I searched for the nvvm directory and then verified that the libdevice directory existed:

\n
$ find / -type d -name nvvm 2>/dev/null\n/usr/lib/cuda/nvvm\n$ cd /usr/lib/cuda/nvvm\n/usr/lib/cuda/nvvm$ ls\nlibdevice\n/usr/lib/cuda/nvvm$ cd libdevice\n/usr/lib/cuda/nvvm/libdevice$ ls\nlibdevice.10.bc\n
\n

Then I exported the environment variable:

\n
export XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/lib/cuda\n
\n

as shown by @Insectatorious above. This solved the error and I was able to run the code.

\n", "system": "" }, { "instruction": "ModuleNotFoundError: No module named 'tensorflow_core.estimator' for tensorflow 2.1.0", "input": "", "output": "

TL;DR: Just solved this issue by making sure that both tensorflow and tensorflow-estimator were at the same version. (in my case, I needed to downgrade tensorflow-estimator, so conda install tensorflow-estimator=2.1.0 solved it for me)

\n

As you may have noticed, some tensorflow versions do not play well with certain GPUs, so I would first check some of the available builds with conda search tensorflow; then I would make sure that the chosen tensorflow build can actually recognize my GPU (with tf.config.list_physical_devices('GPU')); finally, I would search for a matching tensorflow-estimator build with conda search tensorflow-estimator and only then install it with conda install tensorflow-estimator=<chosen version> -n <my_venv>.

\n

It should be noted, however, that all these steps are mostly useful if you have an interest in using your GPU. If that is not the case, then upgrading both packages (or downgrading/upgrading them so their versions match) may be the way.

\n", "system": "" }, { "instruction": "tensorflow warning - Found untraced functions such as lstm_cell_6_layer_call_and_return_conditional_losses", "input": "", "output": "

I think this warning can be safely ignored as you can find the same warning even in a tutorial given by tensorflow. I often see this warning when saving custom models such as graph NNs. You should be good to go as long as you don't want to access those non-callable functions.

\n

However, if you're annoyed by this big chunk of text, you can suppress this warning by adding the following at the top of the code.

\n
import absl.logging\nabsl.logging.set_verbosity(absl.logging.ERROR)\n
\n", "system": "" }, { "instruction": "ValueError: Shapes (None, 1) and (None, 3) are incompatible", "input": "", "output": "

The first problem is with the LSTM input_shape. input_shape = (20,85,1).

\n\n

From the doc: https://keras.io/layers/recurrent/

\n\n

LSTM layer expects 3D tensor with shape (batch_size, timesteps, input_dim).

\n\n

model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing a multi-class classification.

\n\n

So, your y_train and y_test have to be one-hot-encoded. That means they must have dimension (number_of_samples, 3), where 3 denotes the number of classes.

\n\n

You need to apply tensorflow.keras.utils.to_categorical to them.

\n\n
y_train = to_categorical(y_train, 3)\ny_test = to_categorical(y_test, 3)\n
\n\n
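A minimal NumPy analogue of what to_categorical produces (the to_one_hot helper below is hypothetical, not the Keras function), showing the (number_of_samples, 3) shape the softmax head expects:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Hypothetical analogue of tf.keras.utils.to_categorical
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = to_one_hot([0, 2, 1], num_classes=3)
print(y.shape)  # (3, 3)
```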

ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical

\n\n

tf.keras.callbacks.History() - this callback is automatically applied to every Keras model. The History object gets returned by the fit method of models.

\n\n

ref: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History

\n", "system": "" }, { "instruction": "how to programmatically determine available GPU memory with tensorflow?", "input": "", "output": "

This code will return free GPU memory in MegaBytes for each GPU:

\n
import subprocess as sp\nimport os\n\ndef get_gpu_memory():\n    command = "nvidia-smi --query-gpu=memory.free --format=csv"\n    memory_free_info = sp.check_output(command.split()).decode('ascii').split('\\n')[:-1][1:]\n    memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)]\n    return memory_free_values\n\nget_gpu_memory()\n
\n

This answer relies on nvidia-smi being installed (which is pretty much always the case for Nvidia GPUs) and therefore is limited to NVidia GPUs.

\n", "system": "" }, { "instruction": "Non-deterministic behavior of TensorFlow while_loop()", "input": "", "output": "

Most likely, your problem stems from seeding issues: make sure that you set a seed for both random.seed() and numpy.random.seed(). You need to seed both, as NumPy's random state is independent of the random module's random state.

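For example, seeding both generators makes repeated runs reproducible; seeding only one of them leaves the other unseeded:

```python
import random
import numpy as np

# `random` and `numpy.random` keep independent states,
# so both must be seeded explicitly.
random.seed(0)
np.random.seed(0)
a = (random.random(), np.random.rand())

random.seed(0)
np.random.seed(0)
b = (random.random(), np.random.rand())

print(a == b)  # True: identical draws after re-seeding both
```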
\n", "system": "" }, { "instruction": "Tensorflow: Multiple loss functions vs Multiple training ops", "input": "", "output": "

I want to make a subtle point that I don't think was made in previous answers.

\n\n

If you were using something like GradientDescentOptimizer, these would be very similar operations. That's because taking gradients is a linear operation, and the gradient of a sum is the same as the sum of the gradients.

\n\n

But, ADAM does something special: regardless of the scale of your loss, it scales the gradients so that they're always on the order of your learning rate. If you multiplied your loss by 1000, it wouldn't affect ADAM, because the change would be normalized away.

\n\n
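A single-step sketch of the standard Adam formulas (not TensorFlow's implementation) makes this scale invariance concrete:

```python
import math

def adam_update(g, lr=0.001, b1=0.9, b2=0.999, eps=1e-8, t=1):
    # One Adam step from a zero moment state, per the standard formulas.
    m = (1 - b1) * g
    v = (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction
    v_hat = v / (1 - b2 ** t)
    return lr * m_hat / (math.sqrt(v_hat) + eps)

u1 = adam_update(2.0)
u2 = adam_update(2000.0)       # same gradient scaled by 1000
print(abs(u1 - u2) < 1e-6)     # True: step size is essentially unchanged
```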

So, if your two losses are roughly the same magnitude, then it shouldn't make a difference. If one is much larger than the other, then keep in mind that summing before the minimization will essentially ignore the small one, while making two ops will spend equal effort minimizing both.

\n\n

I personally like dividing them up, which gives you more control over how much to focus on one loss or the other. For example, if it was multi-task learning, and one task was more important to get right than the other, two ops with different learning rates roughly accomplish this.

\n", "system": "" }, { "instruction": "Keras replacing input layer", "input": "", "output": "

When you saved your model using:

\n\n
old_model.save('my_model.h5')\n
\n\n

it will save the following:

\n\n
    \n
  1. The architecture of the model, allowing you to re-create the model.
  2. \n
  3. The weights of the model.
  4. \n
  5. The training configuration of the model (loss, optimizer).
  6. \n
  7. The state of the optimizer, allowing training to resume from where you left off.
  8. \n
\n\n

So then, when you load the model:

\n\n
res50_model = load_model('my_model.h5')\n
\n\n

you should get the same model back; you can verify this using:

\n\n
res50_model.summary()\nres50_model.get_weights()\n
\n\n

Now you can pop the input layer and add your own using:

\n\n
res50_model.layers.pop(0)\nres50_model.summary()\n
\n\n

add new input layer:

\n\n
newInput = Input(batch_shape=(0,299,299,3))    # let us say this new InputLayer\nnewOutputs = res50_model(newInput)\nnewModel = Model(newInput, newOutputs)\n\nnewModel.summary()\nres50_model.summary()\n
\n", "system": "" }, { "instruction": "How to fix ipykernel_launcher.py: error: unrecognized arguments in jupyter?", "input": "", "output": "

A more elegant solution would be:

\n\n
args, unknown = parser.parse_known_args()\n
\n\n

instead of

\n\n
args = parser.parse_args()\n
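A quick demonstration of the difference; the -f kernel.json argument mimics the kernel-file flag Jupyter passes:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)

# parse_known_args collects unrecognized arguments instead of erroring out.
args, unknown = parser.parse_known_args(["--lr", "0.1", "-f", "kernel.json"])
print(args.lr)    # 0.1
print(unknown)    # ['-f', 'kernel.json']
```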
\n", "system": "" }, { "instruction": "Difference between tf.data.Dataset.map() and tf.data.Dataset.apply()", "input": "", "output": "

The difference is that map will execute one function on every element of the Dataset separately, whereas apply will execute one function on the whole Dataset at once (such as group_by_window, given as an example in the documentation).

\n\n

The argument of apply is a function that takes a Dataset and returns a Dataset, whereas the argument of map is a function that takes one element and returns one transformed element.

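A plain-Python analogy of the two signatures (these helpers are illustrative only and not the tf.data API):

```python
def map_style(dataset, fn):
    # `map`: the function sees one element at a time.
    return [fn(x) for x in dataset]

def apply_style(dataset, fn):
    # `apply`: the function sees the whole dataset at once.
    return fn(dataset)

data = [1, 2, 3, 4]
print(map_style(data, lambda x: x * 2))       # [2, 4, 6, 8]
print(apply_style(data, lambda d: d[::2]))    # [1, 3]
```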
\n", "system": "" }, { "instruction": "Keras confusion about number of layers", "input": "", "output": "

Your first one consists of a 100-neuron input layer connected to a single output neuron.

\n\n

Your second one consists of a 100-neuron input layer, one hidden layer of 32 neurons, and one output layer with a single neuron.

\n\n

You have to think of your first layer as your input layer (with the same number of neurons as the input dimension, so 100 for you) connected to another layer with as many neurons as you specify (1 in your first case, 32 in the second one).

\n\n

In Keras, what is useful is the command

\n\n
model.summary()\n
\n", "system": "" }, { "instruction": "How to install Tensorflow on Python 2.7 on Windows?", "input": "", "output": "

If you only need TensorFlow because of Keras and you are on Python 2.7.x, you can avoid installing TensorFlow (Google) and replace it with CNTK (Microsoft). According to Jeong-Yoon Lee, CNTK is a lot (about 2 to 4 times) faster than TensorFlow for LSTMs (bidirectional LSTM on IMDb data and text generation via LSTM), while speeds for other types of neural networks are close to each other.\nYour Keras code does not need to be modified (I checked it with 2 Keras examples using TensorFlow and successfully replaced TensorFlow with CNTK without changing anything in the Keras code).

\n\n

So how do you install it?

\n\n

-CPU-only version of CNTK:

\n\n
\n

pip install\n https://cntk.ai/PythonWheel/CPU-Only/cntk-2.4-cp27-cp27m-win_amd64.whl

\n
\n\n

-GPU version of CNTK:

\n\n
\n

pip install\n https://cntk.ai/PythonWheel/GPU/cntk-2.4-cp27-cp27m-win_amd64.whl

\n
\n\n

-Test CNTK install:

\n\n
\n

python -c \"import cntk; print(cntk.version)\"

\n
\n\n

-Install Keras: The Python Deep Learning library

\n\n
\n

pip install keras

\n
\n\n

    -Enable CNTK as Keras back end instead of TensorFlow
    

\n\n

modify the \"keras.json\" file under %USERPROFILE%/.keras

\n\n
{\n    \"epsilon\": 1e-07, \n    \"image_data_format\": \"channels_last\", \n    \"backend\": \"cntk\", \n    \"floatx\": \"float32\" \n}\n
\n", "system": "" }, { "instruction": "You must feed a value for placeholder tensor 'Placeholder' with dtype float", "input": "", "output": "

Some questions

\n\n

first
    \nwhy do you use sess = tf.InteractiveSession() and with tf.Session() as sess: at the same time? Just curious.
    

\n\n

second\nwhat is your placeholder name x or images?
    \nif the name is x, {images: x_data...} won't feed x_data to x; it overrides(?) images
    
\nI think feed_dict should be {x: x_data...}

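    A minimal sketch of the point (names are illustrative, not the asker's code): the feed_dict key must be the placeholder object itself, not another variable.

    ```python
    import tensorflow as tf
    tf.compat.v1.disable_eager_execution()

    # the feed_dict key must be the placeholder (here x), not some other array
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 2])
    y = x * 2
    with tf.compat.v1.Session() as sess:
        out = sess.run(y, feed_dict={x: [[1.0, 2.0]]})
    ```
    
    
    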
\n\n

    if the name is images, do you have two images variables in your program (the placeholder and the shuffled data)? Try renaming one of the variables.
    

\n", "system": "" }, { "instruction": "Keras verbose training progress bar writing a new line on each batch issue", "input": "", "output": "

I've added built-in support for keras in tqdm so you could use it instead (pip install "tqdm>=4.41.0"):

\n
from tqdm.keras import TqdmCallback\n...\nmodel.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])\n
\n

This turns off keras' progress (verbose=0), and uses tqdm instead. For the callback, verbose=2 means separate progressbars for epochs and batches. 1 means clear batch bars when done. 0 means only show epochs (never show batch bars).

\n

If there are problems with it please open an issue at https://github.com/tqdm/tqdm/issues

\n", "system": "" }, { "instruction": "what does the question mark in tensorflow shape mean?", "input": "", "output": "

    It means that the first dimension is not fixed in the graph and can vary between run calls.
    

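    For example (using the TF 1.x compatibility API; in TF 1.x the unknown dimension prints as ?, in the shape list it is None):

    ```python
    import tensorflow as tf
    tf.compat.v1.disable_eager_execution()

    # None marks the dimension left unspecified, e.g. the batch size
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
    print(x.shape.as_list())  # [None, 3]
    ```
    
    
    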
\n", "system": "" }, { "instruction": "The print of string constant is always attached with 'b' inTensorFlow", "input": "", "output": "

Use sess.run(hello).decode() because it is a bytestring. decode method will return the string.

\n\n

Your print statement must look like

\n\n
print(sess.run(hello).decode())\n
\n", "system": "" }, { "instruction": "Obtaining total number of records from .tfrecords file in Tensorflow", "input": "", "output": "

To count the number of records, you should be able to use tf.python_io.tf_record_iterator.

\n\n
c = 0\nfor fn in tf_records_filenames:\n  for record in tf.python_io.tf_record_iterator(fn):\n     c += 1\n
\n\n

To just keep track of the model training, tensorboard comes in handy.

\n", "system": "" }, { "instruction": "Keras Maxpooling2d layer gives ValueError", "input": "", "output": "

Quoting an answer mentioned in github, you need to specify the dimension ordering:

\n\n

Keras is a wrapper over Theano or Tensorflow libraries. Keras uses the setting variable image_dim_ordering to decide if the input layer is Theano or Tensorflow format. This setting can be specified in 2 ways -

\n\n
    \n
  1. specify 'tf' or 'th' in ~/.keras/keras.json like so - image_dim_ordering: 'th'. Note: this is a json file.
  2. \n
  3. or specify the image_dim_ordering in your model like so: model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering=\"th\"))
  4. \n
\n\n

Update: Apr 2020 Keras 2.2.5 link seems to have an updated API where dim_ordering is changed to data_format so:

\n\n

keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first') to get NCHW or use channels_last to get NHWC

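    A quick sketch with the updated data_format argument (channels_last shown here, since it is the TensorFlow default):

    ```python
    import tensorflow as tf

    # channels_last (NHWC) is the TensorFlow default; channels_first gives NCHW
    pool = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), data_format='channels_last')
    y = pool(tf.zeros([1, 4, 4, 3]))  # pools the two spatial dims, keeps channels
    ```
    
    
    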
\n\n

Appendix: image_dim_ordering in 'th' mode the channels dimension (the depth) is at index 1 (e.g. 3, 256, 256). In 'tf' mode is it at index 3 (e.g. 256, 256, 3). Quoting @naoko from comments.

\n", "system": "" }, { "instruction": "Tensorflow Documentation", "input": "", "output": "

    Do not Google for TensorFlow documentation; use the TensorFlow Python reference documentation and Ctrl + F.
    

\n", "system": "" }, { "instruction": "How to suppress verbose Tensorflow logging?", "input": "", "output": "

2.0 Update (10/8/19)\nSetting TF_CPP_MIN_LOG_LEVEL should still work (see below in v0.12+ update), but there is currently an issue open (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try doing the following to set the log level:

\n\n
import tensorflow as tf\ntf.get_logger().setLevel('INFO')\n
\n\n

In addition, please see the documentation on tf.autograph.set_verbosity which sets the verbosity of autograph log messages - for example:

\n\n
# Can also be set using the AUTOGRAPH_VERBOSITY environment variable\ntf.autograph.set_verbosity(1)\n
\n\n

v0.12+ Update (5/20/17), Working through TF 2.0+:

\n\n

In TensorFlow 0.12+, per this issue, you can now control logging via the environmental variable called TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.

\n\n
  Level | Level for Humans | Level Description                  \n -------|------------------|------------------------------------ \n  0     | DEBUG            | [Default] Print all messages       \n  1     | INFO             | Filter out INFO messages           \n  2     | WARNING          | Filter out INFO & WARNING messages \n  3     | ERROR            | Filter out all messages      \n
\n\n

See the following generic OS example using Python:

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}\nimport tensorflow as tf\n
\n\n

    To be thorough, you can also set the level for the Python tf_logging module, which is used in e.g. summary ops, tensorboard, various estimators, etc.
    

\n\n
# append to lines above\ntf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}\n
\n\n

For 1.14 you will receive warnings if you do not change to use the v1 API as follows:

\n\n
# append to lines above\ntf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}\n
\n\n


\nFor Prior Versions of TensorFlow or TF-Learn Logging (v0.11.x or lower):

\n\n

View the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to either DEBUG, INFO, WARN, ERROR, or FATAL. For example:

\n\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n\n

The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.

\n\n

This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.

\n\n
script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'\n
\n\n

For the other, please see this answer which involves modifying source and rebuilding TensorFlow.

\n", "system": "" }, { "instruction": "Confused by the behavior of `tf.cond`", "input": "", "output": "

TL;DR: If you want tf.cond() to perform a side effect (like an assignment) in one of the branches, you must create the op that performs the side effect inside the function that you pass to tf.cond().

\n

The behavior of tf.cond() is a little unintuitive. Because execution in a TensorFlow graph flows forward through the graph, all operations that you refer to in either branch must execute before the conditional is evaluated. This means that both the true and the false branches receive a control dependency on the tf.assign() op, and so y always gets set to 2, even if pred is False.

\n

The solution is to create the tf.assign() op inside the function that defines the true branch. For example, you could structure your code as follows:

\n
pred = tf.placeholder(tf.bool, shape=[])\nx = tf.Variable([1])\ndef update_x_2():\n  with tf.control_dependencies([tf.assign(x, [2])]):\n    return tf.identity(x)\ny = tf.cond(pred, update_x_2, lambda: tf.identity(x))\nwith tf.Session() as session:\n  session.run(tf.initialize_all_variables())\n  print(y.eval(feed_dict={pred: False}))  # ==> [1]\n  print(y.eval(feed_dict={pred: True}))   # ==> [2]\n
\n", "system": "" }, { "instruction": "Could not load dynamic library 'libcudart.so.11.0';", "input": "", "output": "

    First, find out where &quot;libcudart.so.11.0&quot; is.\nIf your error stack mentions a different library, replace &quot;libcudart.so.11.0&quot; in the command below with that name:
    

\n
sudo find / -name 'libcudart.so.11.0'\n
\n

    Output on my system, showing where &quot;libcudart.so.11.0&quot; is:
    

\n
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0\n
\n

    If the result shows nothing, please make sure you have installed CUDA or the other software that must be installed on your system.
    

\n

Second, add the path to environment file.

\n
# edit /etc/profile\nsudo vim /etc/profile\n# append path to "LD_LIBRARY_PATH" in profile file\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.1/targets/x86_64-linux/lib\n# make environment file work\nsource /etc/profile\n
\n

You may also refer to this link

\n

Third thing you may try is:

\n
conda install cudatoolkit\n
\n", "system": "" }, { "instruction": "Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory;", "input": "", "output": "

    On Ubuntu 20.04, you can simply install NVIDIA's CUDA toolkit cuda:
    

\n
sudo apt-get update\nsudo apt install nvidia-cuda-toolkit\n
\n

    There is also installation advice for Windows.
    

\n

    The package is around 1 GB and it takes a while to install... Afterwards you need to export PATH variables so that it can be found:
    

\n
    \n
  1. Find Shared Object
  2. \n
\n
sudo find / -name 'libcudart.so*'\n\n/usr/lib/x86_64-linux-gnu/libcudart.so.10.1\n/usr/lib/x86_64-linux-gnu/libcudart.so\n
\n
    \n
  1. Add the folder to path, so that python finds it
  2. \n
\n
export PATH=/usr/lib/x86_64-linux-gnu${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
\n
    \n
  1. Permissions
  2. \n
\n
sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcuda*\n
\n

Helped me

\n", "system": "" }, { "instruction": "How does TensorFlow SparseCategoricalCrossentropy work?", "input": "", "output": "

SparseCategoricalCrossentropy and CategoricalCrossentropy both compute categorical cross-entropy. The only difference is in how the targets/labels should be encoded.

\n\n

    When using SparseCategoricalCrossentropy the targets are represented by the index of the category (starting from 0). Your outputs have shape 4x2, which means you have two categories. Therefore, the targets should be a vector of length 4 (one entry per sample), with each entry being either 0 or 1 (the category index). For example:
    

\n\n
scce = tf.keras.losses.SparseCategoricalCrossentropy();\nLoss = scce(\n  tf.constant([ 0,    0,    0,    1   ], tf.float32),\n  tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))\n
\n\n

This in contrast to CategoricalCrossentropy where the labels should be one-hot encoded:

\n\n
    cce = tf.keras.losses.CategoricalCrossentropy();\nLoss = cce(\n  tf.constant([[1,0],[1,0],[1,0],[0,1]], tf.float32),\n  tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))\n
    
\n\n

SparseCategoricalCrossentropy is more efficient when you have a lot of categories.

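    To see that the two losses really compute the same quantity and differ only in label encoding, here is a small check (my own sketch, using from_logits=True so the raw outputs above can be fed directly):

    ```python
    import tensorflow as tf

    logits = tf.constant([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
    sparse_labels = tf.constant([0, 0, 0, 1])
    onehot_labels = tf.one_hot(sparse_labels, depth=2)

    scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

    # identical values; only the label encoding differs
    a = float(scce(sparse_labels, logits))
    b = float(cce(onehot_labels, logits))
    ```
    
    
    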
\n", "system": "" }, { "instruction": "Why does keras model predict slower after compile?", "input": "", "output": "

UPDATE - 1/15/2020: the current best practice for small batch sizes should be to feed inputs to the model directly - i.e. preds = model(x), and if layers behave differently at train / inference, model(x, training=False). Per latest commit, this is now documented.

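    A minimal sketch of the direct-call style (my own toy model, not the benchmark code below):

    ```python
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    x = np.random.randn(32, 4).astype('float32')

    # direct call skips predict()'s per-call graph/adapter overhead for small batches
    preds = model(x, training=False).numpy()
    ```
    
    
    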
\n\n

I haven't benchmarked these, but per the Git discussion, it's also worth trying predict_on_batch() - especially with improvements in TF 2.1.

\n\n
\n\n

ULTIMATE CULPRIT: self._experimental_run_tf_function = True. It's experimental. But it's not actually bad.

\n\n

To any TensorFlow devs reading: clean up your code. It's a mess. And it violates important coding practices, such as one function does one thing; _process_inputs does a lot more than \"process inputs\", same for _standardize_user_data. \"I'm not paid enough\" - but you do pay, in extra time spent understanding your own stuff, and in users filling your Issues page with bugs easier resolved with a clearer code.

\n\n
\n\n

SUMMARY: it's only a little slower with compile().

\n\n

compile() sets an internal flag which assigns a different prediction function to predict. This function constructs a new graph upon each call, slowing it down relative to uncompiled. However, the difference is only pronounced when train time is much shorter than data processing time. If we increase the model size to at least mid-sized, the two become equal. See code at the bottom.

\n\n

This slight increase in data processing time is more than compensated by amplified graph capability. Since it's more efficient to keep only one model graph around, the one pre-compile is discarded. Nonetheless: if your model is small relative to data, you are better off without compile() for model inference. See my other answer for a workaround.

\n\n
\n\n

WHAT SHOULD I DO?

\n\n

Compare model performance compiled vs uncompiled as I have in code at the bottom.

\n\n\n\n

Yes, both are possible, and it will depend on (1) data size; (2) model size; (3) hardware. Code at the bottom actually shows compiled model being faster, but 10 iterations is a small sample. See \"workarounds\" in my other answer for the \"how-to\".

\n\n
\n\n

DETAILS:

\n\n

This took a while to debug, but was fun. Below I describe the key culprits I discovered, cite some relevant documentation, and show profiler results that led to the ultimate bottleneck.

\n\n

    (FLAG == self._experimental_run_tf_function, for brevity)
    

\n\n
    \n
  1. Model by default instantiates with FLAG=False. compile() sets it to True.
  2. \n
  3. predict() involves acquiring the prediction function, func = self._select_training_loop(x)
  4. \n
  5. Without any special kwargs passed to predict and compile, all other flags are such that:\n\n
  6. \n
  7. From source code docstring, (A) is heavily graph-reliant, uses more distribution strategy, and ops are prone to creating & destroying graph elements, which \"may\" (do) impact performance.
  8. \n
\n\n

True culprit: _process_inputs(), accounting for 81% of runtime. Its major component? _create_graph_function(), 72% of runtime. This method does not even exist for (B). Using a mid-sized model, however, _process_inputs comprises less than 1% of runtime. Code at bottom, and profiling results follow.

\n\n
\n\n

DATA PROCESSORS:

\n\n

(A): <class 'tensorflow.python.keras.engine.data_adapter.TensorLikeDataAdapter'>, used in _process_inputs() . Relevant source code

\n\n

(B): numpy.ndarray, returned by convert_eager_tensors_to_numpy. Relevant source code, and here

\n\n
\n\n

MODEL EXECUTION FUNCTION (e.g. predict)

\n\n

(A): distribution function, and here

\n\n

(B): distribution function (different), and here

\n\n
\n\n

PROFILER: results for code in my other answer, \"tiny model\", and in this answer, \"medium model\":

\n\n

Tiny model: 1000 iterations, compile()

\n\n

\n\n

Tiny model: 1000 iterations, no compile()

\n\n

\n\n

Medium model: 10 iterations

\n\n

\n\n
\n\n

DOCUMENTATION (indirectly) on effects of compile(): source

\n\n
\n

Unlike other TensorFlow operations, we don't convert python\n numerical inputs to tensors. Moreover, a new graph is generated for each\n distinct python numerical value, for example calling g(2) and g(3) will\n generate two new graphs

\n \n

function instantiates a separate graph for every unique set of input\n shapes and datatypes. For example, the following code snippet will result\n in three distinct graphs being traced, as each input has a different\n shape

\n \n

A single tf.function object might need to map to multiple computation graphs\n under the hood. This should be visible only as performance (tracing graphs has\n a nonzero computational and memory cost) but should not affect the correctness\n of the program

\n
\n\n
\n\n

COUNTEREXAMPLE:

\n\n
from tensorflow.keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D\nfrom tensorflow.keras.layers import Flatten, Dropout\nfrom tensorflow.keras.models import Model\nimport numpy as np\nfrom time import time\n\ndef timeit(func, arg, iterations):\n    t0 = time()\n    for _ in range(iterations):\n        func(arg)\n    print(\"%.4f sec\" % (time() - t0))\n\nbatch_size = 32\nbatch_shape = (batch_size, 400, 16)\nipt   = Input(batch_shape=batch_shape)\nx     = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt)\nx     = LSTM(512, activation='relu', return_sequences=True)(ipt)\nx     = Conv1D(128, 400, 1, padding='same')(x)\nx     = Flatten()(x)\nx     = Dense(256, activation='relu')(x)\nx     = Dropout(0.5)(x)\nx     = Dense(128, activation='relu')(x)\nx     = Dense(64,  activation='relu')(x)\nout   = Dense(1,  activation='sigmoid')(x)\nmodel = Model(ipt, out)\n\nX = np.random.randn(*batch_shape)\ntimeit(model.predict, X, 10)\nmodel.compile('adam', loss='binary_crossentropy')\ntimeit(model.predict, X, 10)\n
\n\n

Outputs:

\n\n
34.8542 sec\n34.7435 sec\n
\n", "system": "" }, { "instruction": "How to understand masked multi-head attention in transformer", "input": "", "output": "

    I had the very same question after reading the Transformer paper. I found no complete and detailed answer to the question on the Internet, so I'll try to explain my understanding of Masked Multi-Head Attention.
    

\n

The short answer is - we need masking to make the training parallel. And the parallelization is good as it allows the model to train faster. I've also made a video with explanation of this mechanism.

\n

Here's an example explaining the idea. Let's say we train to translate "I love you" to German. The encoder works in parallel mode - it can produce vector representation of the input sequence ("I love you") within a constant number of steps (i.e. the number of steps doesn't depend on the length of the input sequence).

\n

Let's say the encoder produces the numbers 11, 12, 13 as the vector representations of the input sequence. In reality these vectors will be much longer but for simplicity we use the short ones. Also for simplicity we ignore the service tokens, like - beginning of the sequence, - end of the sequence and others.

\n

During the training we know that the translation should be "Ich liebe dich" (we always know the expected output during the training). Let's say the expected vector representations of the "Ich liebe dich" words are 21, 22, 23.

\n

If we make the decoder training in sequential mode, it'll look like the training of the Recurrent Neural Network. The following sequential steps will be performed:

\n\n

This means we'll need to make 3 sequential operations (in general case - a sequential operation per each input). Also we'll have an accumulating error on each next iteration. Also we don't use attention as we only look to a single previous output.

\n

As we actually know the expected outputs we can adjust the process and make it parallel. There's no need to wait for the previous step output.

\n\n

This algorithm can be executed in parallel and also it doesn't accumulate the error. And this algorithm uses attention (i.e. looks to all previous inputs) thus has more information about the context to consider while making the prediction.

\n

And here is where we need the masking. The training algorithm knows the entire expected output (21, 22, 23). It hides (masks) a part of this known output sequence for each of the parallel operations.

\n\n

Masking itself is implemented as the following (from the original paper):

\n
\n

We implement this inside of scaled dot-product attention by masking\nout (setting to \u2212\u221e) all values in the input of the softmax which\ncorrespond to illegal connections

\n
\n
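    A NumPy sketch of that masking step (my own illustration, not the paper's code): future positions get \u2212\u221e before the softmax, so they receive zero attention weight.

    ```python
    import numpy as np

    def causal_mask(n):
        # positions j > i (the future) get -inf; allowed positions get 0
        upper = np.triu(np.ones((n, n)), k=1)
        return np.where(upper == 1, -np.inf, 0.0)

    scores = np.zeros((3, 3)) + causal_mask(3)   # stand-in attention scores
    weights = np.exp(scores)                     # exp(-inf) == 0
    weights /= weights.sum(axis=-1, keepdims=True)
    ```
    
    
    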

Note: during the inference (not training) the decoder works in the sequential (not parallel) mode as it doesn't know the output sequence initially. But it's different from RNN approach as Transformer inference still uses self-attention and looks at all previous outputs (but not only the very previous one).

\n

Note 2: I've seen in some materials that masking can be used differently for non-translation applications. For example, for language modeling the masking can be used to hide some words from the input sentence and the model will try to predict them during the training using other, non-masked words (i.e. learn to understand the context).

\n", "system": "" }, { "instruction": "How to clear GPU memory WITHOUT restarting runtime in Google Colaboratory (Tensorflow)", "input": "", "output": "

    In TensorFlow you can use the numba library:
    

\n
!pip install numba\n\nfrom numba import cuda \ndevice = cuda.get_current_device()\ndevice.reset()\n
\n", "system": "" }, { "instruction": "ValueError: Can't convert non-rectangular Python sequence to Tensor", "input": "", "output": "

I'm not sure whether they exist in TensorFlow 1 but TensorFlow 2.0 supports RaggedTensors, which the documentation describes as "... the TensorFlow equivalent of nested variable-length lists."

\n

I think it would be trivial to convert your data to RaggedTensors. It might even be as easy as:

\n
data_tensor = tf.ragged.constant(data)\n
\n

Example:

\n
>>> a = tf.ragged.constant([[1],[2,3]])\n>>> a\n<tf.RaggedTensor [[1], [2, 3]]>\n
\n", "system": "" }, { "instruction": "model.summary() can't print output shape while using subclass model", "input": "", "output": "

I have used this method to solve this problem, I don't know if there is an easier way.

\n\n
class subclass(Model):\n    def __init__(self):\n        ...\n    def call(self, x):\n        ...\n\n    def model(self):\n        x = Input(shape=(24, 24, 3))\n        return Model(inputs=[x], outputs=self.call(x))\n\n\n\nif __name__ == '__main__':\n    sub = subclass()\n    sub.model().summary()\n
\n", "system": "" }, { "instruction": "Is there a version of TensorFlow not compiled for AVX instructions?", "input": "", "output": "

    A best-practices approach suggested by peter-cordes is to see what gcc makes of your CPU's capabilities by issuing the following:
    

\n\n
gcc -O3 -fverbose-asm -march=native -xc /dev/null -S -o- | less\n
\n\n

    This command will provide all the information about your CPU's capabilities from the point of view of gcc, which is going to do the build, so gcc's view matters.
    

\n\n

When does this come up? When a program offers to tailor itself to your cpu. Dang. What do I know about my cpu. Well, the above line will tell you all you need to know.

\n\n

That said, generally, people/developers that are promoting cpu based capabilities will state or suggest a list of things that go faster/better/stronger if your cpu has *. And the above will give you *. Read carefully what you see. If you don't have it, you don't want it, i.e.

\n\n
-mno-avx(whatever you don't want;in my case it was avx)\n
\n\n

    A good overview of a CPU-tailored install on older CPU(s) is provided by\nMikael Fernandez Simalango for Ubuntu 16.04 LTS. It assumes a Python 2.7 environment but easily translates to Python 3. The heart of the matter is extracting from /proc/cpuinfo which CPU instruction extensions are available on your particular CPU, to be used in addition to -march=native (but note, it appears limited in which flags it accepts, so it may be better to actually read through the instruction above and reflect)
    

\n\n
grep flags -m1 /proc/cpuinfo | cut -d \":\" -f 2 | tr '[:upper:]' \n'[:lower:]' | { read FLAGS; OPT=\"-march=native\"; for flag in $FLAGS; \ndo case \"$flag\" in \"sse4_1\" | \"sse4_2\" | \"ssse3\" | \"fma\" | \"cx16\" | \n\"popcnt\" | \"avx\" | \"avx2\") OPT+=\" -m$flag\";; esac; done; \nMODOPT=${OPT//_/\\.}; echo \"$MODOPT\"; }\n
\n\n

Running this on my old box output:

\n\n
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt\n
\n\n

It gets part way there. What is not clear is how to say, 'not this' and 'not that', which for old CPUs would be, most likely, -mno-avx.

\n\n

For an old cpu, which -march matters and Nephanth very usefully addresses this:

\n\n
gcc -march=native -Q --help=target|grep march\n
\n\n

produces

\n\n
-march=                             westmere\n
\n\n

    which means my response to the ./compile question should (or might) be the following. Note the quotes around 'westmere', which also appear in the gcc docs, so the ' ' must be there for a reason
    

\n\n
-march='westmere' -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx\n
\n\n

but this is probably much better (see discussion below):

\n\n
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx\n
\n\n

The -mno-avx is an option for gcc, and results, after many hours, in

\n\n
Python 3.5.2 (default, Nov 23 2017, 16:37:01) \n[GCC 5.4.0 20160609] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more \ninformation.\n>>> import tensorflow as tf\n>>> \n>>> tf.__version__\n'2.0.0-alpha0'\n
\n\n

which looks like success.

\n\n

Restated:\nIn either order, find out what instructions are (or not) supported by your cpu, and state those explicitly.

\n", "system": "" }, { "instruction": "How to import keras.engine.topology in Tensorflow?", "input": "", "output": "

You can import Layer and InputSpec from TensorFlow as follows:

\n\n
from tensorflow.python.keras.layers import Layer, InputSpec\n
\n\n

UPDATE: 30/10/2019

\n\n
from tensorflow.keras.layers import Layer, InputSpec\n
\n", "system": "" }, { "instruction": "How can I test a .tflite model to prove that it behaves as the original model using the same Test Data?", "input": "", "output": "

You may use TensorFlow Lite Python interpreter to test your tflite model.

\n\n

It allows you to feed input data in python shell and read the output directly like you are just using a normal tensorflow model.

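    A self-contained sketch of the interpreter workflow (I convert a tiny stand-in Keras model inline so the example runs end to end; with your own model you would pass model_path='your_model.tflite' instead of model_content):

    ```python
    import numpy as np
    import tensorflow as tf

    # tiny stand-in model: 4 inputs, weights fixed to ones, no bias
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, input_shape=(4,), use_bias=False,
                              kernel_initializer='ones')])
    tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

    # run the converted model with the Python interpreter
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    x = np.ones((1, 4), dtype=np.float32)
    interpreter.set_tensor(inp['index'], x)
    interpreter.invoke()
    y = interpreter.get_tensor(out['index'])  # compare against model(x)
    ```
    
    
    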
\n\n

I have answered this question here.

\n\n

And you can read this TensorFlow lite official guide for detailed information.

\n\n

You can also use Netron to visualize your model. It allows you to load your .tflite file directly and inspect your model architecture and model weights.

\n", "system": "" }, { "instruction": "shuffle in the model.fit of keras", "input": "", "output": "

It will shuffle your entire dataset (x, y and sample_weight together) first and then make batches according to the batch_size argument you passed to fit.

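    Conceptually, "shuffling x, y and sample_weight together" means applying one permutation to all of them, e.g. (my own NumPy illustration, not Keras internals):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(10)
    y = x * 2                       # labels paired with x

    perm = rng.permutation(len(x))  # ONE permutation reorders x and y together
    x_s, y_s = x[perm], y[perm]
    ```
    
    
    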
\n

Edit

\n

As @yuk pointed out in the comment, the code has been changed significantly since 2018. The documentation for the shuffle parameter now seems more clear on its own. You can choose to shuffle the entire training data or just shuffle the batch:

\n
        shuffle: Boolean (whether to shuffle the training data\n            before each epoch) or str (for 'batch'). This argument is ignored\n            when `x` is a generator. 'batch' is a special option for dealing\n            with the limitations of HDF5 data; it shuffles in batch-sized\n            chunks. Has no effect when `steps_per_epoch` is not `None`.\n
\n", "system": "" }, { "instruction": "Tensorflow object detection config files documentation", "input": "", "output": "

As mentioned in the configuration documentation, configuration files are just Protocol Buffers objects described in the .proto files under research/object_detection/protos. The top level object is a TrainEvalPipelineConfig defined in pipeline.proto, and different files describe each of the elements. For example, data_augmentation_options are PreprocessingStep objects, defined in preprocessor.proto (which in turn can include a range of other possible objects for different preprocessing tasks). The meaning of each object and field may or may not be obvious or well-documented, but you can always refer to the source code to see exactly how each value is being used (for example, check preprocessor.py to understand how data augmentation is done).

\n", "system": "" }, { "instruction": "Why do we use tf.name_scope()", "input": "", "output": "

They are not the same thing.

\n\n
import tensorflow as tf\nc1 = tf.constant(42)\nwith tf.name_scope('s1'):\n    c2 = tf.constant(42)\nprint(c1.name)\nprint(c2.name)\n
\n\n

prints

\n\n
Const:0\ns1/Const:0\n
\n\n

So as the name suggests, the scope functions create a scope for the names of the ops you create inside. This has an effect on how you refer to tensors, on reuse, on how the graph shows in TensorBoard and so on.

\n", "system": "" }, { "instruction": "Tensorflow VarLenFeature vs FixedLenFeature", "input": "", "output": "

    You can probably load images because you saved them using the feature type tf.train.BytesList() and the whole image data is one big byte value inside a list.
    

\n

If I'm right you're using tf.decode_raw to get the data out of the image you load from TFRecord.

\n

    Regarding example use cases:\nI use VarLenFeature for saving datasets for an object detection task:\nThere's a variable number of bounding boxes per image (equal to the number of objects in the image), therefore I need another feature objects_number to track the number of objects (and bboxes).\nEach bounding box itself is a list of 4 float coordinates
    

\n

I'm using following code to load it:

\n
features = tf.parse_single_example(\n    serialized_example,\n    features={\n        # We know the length of both fields. If not the\n        # tf.VarLenFeature could be used\n        'height': tf.FixedLenFeature([], tf.int64),\n        'width': tf.FixedLenFeature([], tf.int64),\n        'depth': tf.FixedLenFeature([], tf.int64),\n        # Label part\n        'objects_number': tf.FixedLenFeature([], tf.int64),\n        'bboxes': tf.VarLenFeature(tf.float32),\n        'labels': tf.VarLenFeature(tf.int64),\n        # Dense data\n        'image_raw': tf.FixedLenFeature([],tf.string)\n\n    })\n\n# Get metadata\nobjects_number = tf.cast(features['objects_number'], tf.int32)\nheight = tf.cast(features['height'], tf.int32)\nwidth = tf.cast(features['width'], tf.int32)\ndepth = tf.cast(features['depth'], tf.int32)\n\n# Actual data\nimage_shape = tf.parallel_stack([height, width, depth])\nbboxes_shape = tf.parallel_stack([objects_number, 4])\n\n# BBOX data is actually dense convert it to dense tensor\nbboxes = tf.sparse_tensor_to_dense(features['bboxes'], default_value=0)\n\n# Since information about shape is lost reshape it\nbboxes = tf.reshape(bboxes, bboxes_shape)\nimage = tf.decode_raw(features['image_raw'], tf.uint8)\nimage = tf.reshape(image, image_shape)\n
\n

Notice that "image_raw" is fixed length Feature (has one element) and holds values of type "bytes", however a value of "bytes" type can itself have variable size (its a string of bytes, and can have many symbols within it).\nSo "image_raw" is a list with ONE element of type "bytes", which can be super big.

\n

To further elaborate on how it works:\nFeatures are lists of values, those values have specific "type".

\n

Datatypes for features are subset of data types for tensors, you have:

\n\n

You can check here tensors data types.

\n

    So you can store variable-length data without VarLenFeature at all (which is in fact what you are doing), but first you need to convert it into a bytes/string feature and then decode it.\nThis is the most common method.
    

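    A round-trip sketch of that method (using the TF 2.x tf.io names; the answer's code uses the older unprefixed equivalents): a variable-length array is serialized as one bytes value, parsed as a fixed-length (one element) string feature, and decoded back.

    ```python
    import numpy as np
    import tensorflow as tf

    # serialize a variable-length array as ONE bytes value
    arr = np.arange(5, dtype=np.uint8)
    feat = tf.train.Feature(bytes_list=tf.train.BytesList(value=[arr.tobytes()]))
    example = tf.train.Example(features=tf.train.Features(feature={'data': feat}))
    serialized = example.SerializeToString()

    # parse it as a FixedLenFeature (one string element), then decode the raw bytes
    parsed = tf.io.parse_single_example(
        serialized, {'data': tf.io.FixedLenFeature([], tf.string)})
    decoded = tf.io.decode_raw(parsed['data'], tf.uint8)
    ```
    
    
    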
\n", "system": "" }, { "instruction": "Tensorflow Slim: TypeError: Expected int32, got list containing Tensors of type '_Message' instead", "input": "", "output": "

I got the same problem when using the 1.0 released and I could make it work without having to roll back on a previous version.

\n\n

The problem is caused by change in the api. That discussion helped me to find the solution: Google group > \nRecent API Changes in TensorFlow

\n\n

You just have to update all the line with tf.concat

\n\n

for example

\n\n
net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])\n
\n\n

should be changed to

\n\n
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)\n
\n\n

Note:

\n\n

    I was able to use the models without problem, but I still got an error afterward when trying to load the pretrained weights.\nIt seems that the slim module got several changes since they made the checkpoint file: the graph created by the code and the one present in the checkpoint file are different.
    

\n\n

Note 2:

\n\n

I was able to use the pretrained weights for inception_resnet_v2 by adding biases_initializer=None to every conv2d layer.

\n", "system": "" }, { "instruction": "how to implement tensorflow's next_batch for own data", "input": "", "output": "

The link you posted says: \"we get a \"batch\" of one hundred random data points from our training set\". In my example I use a global function (not a method like in your example) so there will be a difference in syntax.

\n\n

In my function you'll need to pass the number of samples wanted and the data array.

\n\n

Here is the correct code, which ensures samples have correct labels:

\n\n
import numpy as np\n\ndef next_batch(num, data, labels):\n    '''\n    Return a total of `num` random samples and labels.\n    '''\n    idx = np.arange(0, len(data))\n    np.random.shuffle(idx)\n    idx = idx[:num]\n    data_shuffle = [data[i] for i in idx]\n    labels_shuffle = [labels[i] for i in idx]\n\n    return np.asarray(data_shuffle), np.asarray(labels_shuffle)\n\nXtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10)\nprint(Xtr)\nprint(Ytr)\n\nXtr, Ytr = next_batch(5, Xtr, Ytr)\nprint('\\n5 random samples')\nprint(Xtr)\nprint(Ytr)\n
\n\n

And a demo run:

\n\n
[0 1 2 3 4 5 6 7 8 9]\n[[ 0  1  2  3  4  5  6  7  8  9]\n [10 11 12 13 14 15 16 17 18 19]\n [20 21 22 23 24 25 26 27 28 29]\n [30 31 32 33 34 35 36 37 38 39]\n [40 41 42 43 44 45 46 47 48 49]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]\n [80 81 82 83 84 85 86 87 88 89]\n [90 91 92 93 94 95 96 97 98 99]]\n\n5 random samples\n[9 1 5 6 7]\n[[90 91 92 93 94 95 96 97 98 99]\n [10 11 12 13 14 15 16 17 18 19]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]]\n
\n", "system": "" }, { "instruction": "How to redirect TensorFlow logging to a file?", "input": "", "output": "
import logging\n\n# get TF logger\nlog = logging.getLogger('tensorflow')\nlog.setLevel(logging.DEBUG)\n\n# create formatter and add it to the handlers\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n# create file handler which logs even debug messages\nfh = logging.FileHandler('tensorflow.log')\nfh.setLevel(logging.DEBUG)\nfh.setFormatter(formatter)\nlog.addHandler(fh)\n
\n\n

My solution is inspired by this thread.

\n", "system": "" }, { "instruction": "What's the difference between tensorflow dynamic_rnn and rnn?", "input": "", "output": "

From RNNs in Tensorflow, a Practical Guide and Undocumented Features by Denny Britz, published on August 21, 2016.

\n\n
\n

tf.nn.rnn creates an unrolled graph for a fixed RNN length. That\n means, if you call tf.nn.rnn with inputs having 200 time steps you are\n creating a static graph with 200 RNN steps. First, graph creation is\n slow. Second, you\u2019re unable to pass in longer sequences (> 200) than\n you\u2019ve originally specified.

\n \n

tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically\n construct the graph when it is executed. That means graph creation is\n faster and you can feed batches of variable size.

\n
\n", "system": "" }, { "instruction": "tensorflow deep neural network for regression always predict same results in one batch", "input": "", "output": "

Short answer:

\n\n

Transpose your pred vector using tf.transpose(pred).

\n\n

Longer answer:

\n\n

The problem is that pred (the predictions) and y (the labels) are not of the same shape: one is a row vector and the other a column vector. Apparently when you apply an element-wise operation on them, you'll get a matrix, which is not what you want.

\n\n

The solution is to transpose the prediction vector using tf.transpose() to get a proper vector and thus a proper loss function. Actually, if you set the batch size to 1 in your example you'll see that it works even without the fix, because transposing a 1x1 vector is a no-op.
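The shape mismatch is plain broadcasting, so it can be sketched without TensorFlow (the shapes here are illustrative):

```python
import numpy as np

pred = np.array([[1.0, 2.0, 3.0]])       # row vector, shape (1, 3)
y = np.array([[1.0], [2.0], [3.0]])      # column vector, shape (3, 1)

diff = pred - y      # broadcasting produces a (3, 3) matrix, not a vector!
fixed = pred.T - y   # transposing pred gives the intended (3, 1) vector
print(diff.shape, fixed.shape)  # (3, 3) (3, 1)
```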

\n\n

I applied this fix to your example code and observed the following behaviour. Before the fix:

\n\n
Epoch: 0245 cost= 84.743440580\n[*]----------------------------\nlabel value: 23 estimated value: [ 27.47437096]\nlabel value: 50 estimated value: [ 24.71126747]\nlabel value: 22 estimated value: [ 23.87785912]\n
\n\n

And after the fix at the same point in time:

\n\n
Epoch: 0245 cost= 4.181439120\n[*]----------------------------\nlabel value: 23 estimated value: [ 21.64333534]\nlabel value: 50 estimated value: [ 48.76105118]\nlabel value: 22 estimated value: [ 24.27996063]\n
\n\n

You'll see that the cost is much lower and that it actually learned the value 50 properly. You'll have to do some fine-tuning on the learning rate and such to improve your results of course.

\n", "system": "" }, { "instruction": "Does TensorFlow have cross validation implemented?", "input": "", "output": "

As already discussed, tensorflow doesn't provide its own way to cross-validate a model. The recommended way is to use scikit-learn's KFold. It's a bit tedious, but doable. Here's a complete example of cross-validating an MNIST model with tensorflow and KFold:

\n\n
from sklearn.model_selection import KFold\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n# Parameters\nlearning_rate = 0.01\nbatch_size = 500\n\n# TF graph\nx = tf.placeholder(tf.float32, [None, 784])\ny = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\npred = tf.nn.softmax(tf.matmul(x, W) + b)\ncost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\ncorrect_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\ninit = tf.global_variables_initializer()\n\nmnist = input_data.read_data_sets(\"data/mnist-tf\", one_hot=True)\ntrain_x_all = mnist.train.images\ntrain_y_all = mnist.train.labels\ntest_x = mnist.test.images\ntest_y = mnist.test.labels\n\ndef run_train(session, train_x, train_y):\n  print(\"\\nStart training\")\n  session.run(init)\n  for epoch in range(10):\n    total_batch = int(train_x.shape[0] / batch_size)\n    for i in range(total_batch):\n      batch_x = train_x[i*batch_size:(i+1)*batch_size]\n      batch_y = train_y[i*batch_size:(i+1)*batch_size]\n      _, c = session.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})\n      if i % 50 == 0:\n        print(\"Epoch #%d step=%d cost=%f\" % (epoch, i, c))\n\ndef cross_validate(session, split_size=5):\n  results = []\n  kf = KFold(n_splits=split_size)\n  for train_idx, val_idx in kf.split(train_x_all, train_y_all):\n    train_x = train_x_all[train_idx]\n    train_y = train_y_all[train_idx]\n    val_x = train_x_all[val_idx]\n    val_y = train_y_all[val_idx]\n    run_train(session, train_x, train_y)\n    results.append(session.run(accuracy, feed_dict={x: val_x, y: val_y}))\n  return results\n\nwith tf.Session() as session:\n  result = cross_validate(session)\n  print(\"Cross-validation result: %s\" % result)\n  print(\"Test accuracy: %f\" % session.run(accuracy, feed_dict={x: test_x, y: test_y}))\n
\n", "system": "" }, { "instruction": "What does opt.apply_gradients() do in TensorFlow?", "input": "", "output": "

The update rule that the apply_gradients method actually applies depends on the specific optimizer. Take a look at the implementation of apply_gradients in the tf.train.Optimizer class here. It relies on the derived classes implementing the update rule in the methods _apply_dense and _apply_sparse. The update rule you are referring to is implemented by the GradientDescentOptimizer.
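Concretely, for GradientDescentOptimizer the update that apply_gradients performs is plain gradient descent; a one-line numeric sketch (the values here are illustrative):

```python
# Plain gradient-descent step: v <- v - learning_rate * gradient
learning_rate = 0.25
v, grad = 2.0, 4.0
v_updated = v - learning_rate * grad
print(v_updated)  # 1.0
```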

\n\n

Regarding your desired positive additive update: If what you are calling opt is an instantiation of GradientDescentOptimizer, then you could indeed achieve what you want to do by

\n\n
grads_and_vars = opt.compute_gradients(E, [v])\neta = opt._learning_rate\nmy_grads_and_vars = [(g-(1/eta)*p, v) for g, v in grads_and_vars]\nopt.apply_gradients(my_grads_and_vars)\n
\n\n

The more elegant way to do this is probably to write a new optimizer (inheriting from tf.train.Optimizer) that implements your desired update rule directly.

\n", "system": "" }, { "instruction": "ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'", "input": "", "output": "

You should import BatchNormalization in following way:

\n
from tensorflow.keras.layers import BatchNormalization\n
\n", "system": "" }, { "instruction": "AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'", "input": "", "output": "

Change from keras import models to from tensorflow.keras import models
\nThis solved the problem for me with tensorflow 2.5.0.

\n", "system": "" }, { "instruction": "Unexpected keyword argument 'ragged' in Keras", "input": "", "output": "

So I tried the Teachable Machine link you mentioned above.
\nAs it turns out, the model you exported is from tensorflow.keras and not directly from the keras API. These two are different. So while loading, it might be using tf.ragged tensors that are not compatible with the keras API.
\n
Solution to your issue:

\nDon't import keras directly, as your model is saved with TensorFlow's Keras high-level API. Change all your imports to tensorflow.keras.\n

Change:

\n\n
from keras.preprocessing.image import img_to_array\nfrom keras.models import load_model\n
\n\n

to this:

\n\n
from tensorflow.keras.preprocessing.image import img_to_array\nfrom tensorflow.keras.models import load_model\n
\n\n

It will solve your issue.

\n\n

EDIT:
\nAll of your imports should come either from Keras or from tensorflow.keras. Although they expose the same API, a few things differ, which creates these kinds of issues. Also, for the tensorflow backend, tf.keras is preferred, because Keras 2.3.0 is the last major release that will support backends other than tensorflow.

\n\n
\n

This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. However note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.\n This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0.

\n
\n", "system": "" }, { "instruction": "How to input a list of lists with different sizes in tf.data.Dataset", "input": "", "output": "

You can use tf.data.Dataset.from_generator() to convert any iterable Python object (like a list of lists) into a Dataset:

\n\n
t = [[4, 2], [3, 4, 5]]\n\ndataset = tf.data.Dataset.from_generator(lambda: t, tf.int32, output_shapes=[None])\n\niterator = dataset.make_one_shot_iterator()\nnext_element = iterator.get_next()\n\nwith tf.Session() as sess:\n  print(sess.run(next_element))  # ==> '[4, 2]'\n  print(sess.run(next_element))  # ==> '[3, 4, 5]'\n
\n", "system": "" }, { "instruction": "ImportError: No module named 'keras'", "input": "", "output": "

I have a solution; try this if you are using Anaconda Navigator.

\n\n

Go to your Anaconda environment, search for the keras package, and install it.

\n\n

\"install

\n\n

\"enter

\n\n

After installing, just type import keras in the shell; it works.

\n\n

\"enter

\n", "system": "" }, { "instruction": "Loading SavedModel is a lot slower than loading a tf.train.Saver checkpoint", "input": "", "output": "

I am by no means an expert in TensorFlow, but if I had to guess why this is happening, I would say that:

\n\n\n\n

Depending on the size of your graph, recreating everything that it contained might take some time.

\n\n

Concerning the second question, as @J H said, if there are no reasons for you to use one strategy over the other, and time is of the essence, then just go with the fastest one.

\n", "system": "" }, { "instruction": "Tensorflow: Cannot interpret feed_dict key as Tensor", "input": "", "output": "

This worked for me

\n\n
from keras import backend as K\n
\n\n

After predicting on my data, I inserted this piece of code\nand then loaded the model again.

\n\n
K.clear_session()\n
\n\n

I faced this problem on a production server,\nbut on my PC it ran fine.

\n\n

In summary:

\n\n
from keras import backend as K\n\n#Before prediction\nK.clear_session()\n\n#After prediction\nK.clear_session()\n
\n", "system": "" }, { "instruction": "What is the equivalent of np.std() in TensorFlow?", "input": "", "output": "

To get the mean and variance just use tf.nn.moments.

\n\n
mean, var = tf.nn.moments(x, axes=[1])\n
\n\n

For more on the tf.nn.moments parameters, see the docs.
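Since tf.nn.moments returns the variance, the np.std() equivalent is just the square root of that second value (tf.sqrt(var)); the relationship in numpy terms:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0, 4.0]])
mean = x.mean(axis=1)
var = x.var(axis=1)   # what tf.nn.moments returns alongside the mean
std = np.sqrt(var)    # the np.std() equivalent: square root of the variance
print(std)            # same values as x.std(axis=1)
```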

\n", "system": "" }, { "instruction": "Cannot convert a partially converted tensor in TensorFlow", "input": "", "output": "

You just need to feed it in as a single example, but in the batched shape. That means adding an extra dimension to the shape, e.g.:

\n\n
batch_size = 32 # set this to the actual size of your batch\ntf.truncated_normal((batch_size, 784), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\n
\n\n

This way it will \"fit\" into the placeholder.

\n\n

If you expect batch_size to change you can also use:

\n\n
tf.truncated_normal(tf.shape(input_tensor), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\n
\n\n

Where input_tensor could be a placeholder or just whatever tensor is going to have this noise added to it.
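The shape logic can be sketched in numpy (the sizes are illustrative, and np.random.normal stands in for the truncated normal here):

```python
import numpy as np

batch_size = 32
batch = np.zeros((batch_size, 784))                   # placeholder-shaped input
noise = np.random.normal(0.0, 1.0, size=batch.shape)  # analogue of tf.truncated_normal(tf.shape(x), ...)
noisy = batch + noise                                 # shapes line up, so the noise "fits"
print(noisy.shape)  # (32, 784)
```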

\n", "system": "" }, { "instruction": "How to profile TensorFlow networks?", "input": "", "output": "

If you want to find out how much time was spent on each operation in TF, you can do this in TensorBoard using runtime statistics. You will need to do something like this (check the full example in the above-mentioned link):

\n\n
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\nrun_metadata = tf.RunMetadata()\nsess.run(<values_you_want_to_execute>, options=run_options, run_metadata=run_metadata)\nyour_writer.add_run_metadata(run_metadata, 'step%d' % i)\n
\n\n
\n\n

Rather than just printing it, you can see it in TensorBoard:

\n\n
\n

Additionally, clicking on a node will display the exact total memory,\n compute time, and tensor output sizes.

\n
\n\n

TensorFlow also now has a debugger; here is a tutorial on how to use it.

\n\n

\"[Example

\n", "system": "" }, { "instruction": "Run Tensorflow unit tests", "input": "", "output": "

The easiest way to run the TensorFlow unit tests is using Bazel, assuming you have downloaded the source from Git:

\n\n
# All tests (for C++ changes).\n$ bazel test //tensorflow/...\n\n# All Python tests (for Python front-end changes).\n$ bazel test //tensorflow/python/...\n\n# All tests (with GPU support).\n$ bazel test -c opt --config=cuda //tensorflow/...\n$ bazel test -c opt --config=cuda //tensorflow/python/...\n
\n", "system": "" }, { "instruction": "Tensorflow GradientTape "Gradients does not exist for variables" intermittently", "input": "", "output": "

The solutions given by Nguy\u1ec5n and gkennos will suppress the error, because they replace all None gradients with zeros.\nHowever, a gradient being null at any point in time is a real issue.\nThe problem described above is almost certainly caused by unconnected variables (by default PyTorch throws a runtime error in this case).

\n

The most common case of unconnected layers can be exemplified as follows:

\n
def some_func(x):\n    x1 = x * some_variables\n    x2 = x1 + some_variables  # x2 is not used after this line\n    x3 = x1 / some_variables\n    return x3\n
\n

Now observe that x2 is not connected to the output, so no gradient will be propagated through it. Carefully debug your code for unconnected variables.
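A minimal way to observe an unconnected variable with tf.GradientTape (this sketch assumes TensorFlow 2.x):

```python
import tensorflow as tf

x = tf.Variable(2.0)
unused = tf.Variable(5.0)

with tf.GradientTape() as tape:
    y = x * 3.0  # 'unused' never participates in computing y

grads = tape.gradient(y, [x, unused])
print(grads)  # second entry is None: the variable is unconnected
```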

\n", "system": "" }, { "instruction": "Description of TF Lite's Toco converter args for quantization aware training", "input": "", "output": "

You should never need to manually set the quantization stats.

\n\n

Have you tried the post-training-quantization tutorials?

\n\n

https://www.tensorflow.org/lite/performance/post_training_integer_quant

\n\n

Basically they set the quantization options:

\n\n
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\nconverter.inference_input_type = tf.uint8\nconverter.inference_output_type = tf.uint8\n
\n\n

Then they pass a \"representative dataset\" to the converter, so that the converter can run the model a few batches to gather the necessary statistics:

\n\n
def representative_data_gen():\n  for input_value in mnist_ds.take(100):\n    yield [input_value]\n\nconverter.representative_dataset = representative_data_gen\n
\n\n

While there are options for quantization-aware training, it's always easier to do post-training quantization.

\n", "system": "" }, { "instruction": "Graph optimizations on a tensorflow serveable created using tf.Estimator", "input": "", "output": "

The way to go from a SavedModel to a servable after running tensorflow graph transforms is to use the SavedModel Builder API.

\n

First, you need to create a SavedModel Builder object and then rebuild the graph you have just transformed, using the SavedModel Builder API.

\n

Next, you need to add the assets, signatures, and other meta-data back into the model. Finally, you need to call the SavedModel Builder API's save() method, which will save the model as a servable.

\n

This servable can then be used with tensorflow serving.

\n", "system": "" }, { "instruction": "Tensorboard Error: No dashboards are active for current data set", "input": "", "output": "

Your issue may be related to the drive you are attempting to start tensorboard from and the drive your logdir is on. Tensorboard uses a colon to separate the optional run name and the path in the logdir flag, so your path is being interpreted as \\path\\to\\output\\folder with name C.

\n\n

This can be worked around either by starting tensorboard from the same drive as your log directory or by providing an explicit run name, e.g. --logdir=mylogs:C:\path\to\output\folder
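A rough sketch of why the drive letter gets misread (this illustrates the run-name:path split; it is not TensorBoard's actual parsing code):

```python
# The logdir flag accepts an optional "name:path" form, so a Windows path
# like C:\path\to\output\folder splits at the drive-letter colon.
flag = r"C:\path\to\output\folder"
name, _, path = flag.partition(":")
print(name, path)  # C \path\to\output\folder  -> the drive letter becomes a run name
```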

\n\n

See here for reference to the issue.

\n", "system": "" }, { "instruction": "Is there any way to get variable importance with Keras?", "input": "", "output": "

*Edited to include relevant code to implement permutation importance.

\n\n

I answered a similar question at Feature Importance Chart in neural network using Keras in Python. It implements what Teque5 mentioned above, namely shuffling a variable among your samples (permutation importance), using the ELI5 package.

\n\n
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor\nimport eli5\nfrom eli5.sklearn import PermutationImportance\n\ndef base_model():\n    model = Sequential()\n    ...\n    return model\n\nX = ...\ny = ...\n\nmy_model = KerasRegressor(build_fn=base_model, **sk_params)\nmy_model.fit(X, y)\n\nperm = PermutationImportance(my_model, random_state=1).fit(X, y)\neli5.show_weights(perm, feature_names=X.columns.tolist())\n
\n", "system": "" }, { "instruction": "ValueError: Tensor must be from the same graph as Tensor with Bidirectinal RNN in Tensorflow", "input": "", "output": "

TensorFlow stores all operations on an operational graph. This graph defines what functions output to where, and it links it all together so that it can follow the steps you have set up in the graph to produce your final output. If you try to input a Tensor or operation on one graph into a Tensor or operation on another graph it will fail. Everything must be on the same execution graph.

\n\n

Try removing with tf.Graph().as_default():

\n\n

TensorFlow provides you a default graph which is referred to if you do not specify a graph. You are probably using the default graph in one spot and a different graph in your training block.

\n\n

There does not seem to be a reason you are specifying a graph as default here, and most likely you are using separate graphs by accident. If you really want to specify a graph, then you probably want to pass it as a variable, not set it like this.

\n", "system": "" }, { "instruction": "ImportError: No module named 'tensorflow.python'", "input": "", "output": "

Uninstall tensorflow:

\n\n
pip uninstall tensorflow\n
\n\n

Then reinstall it:

\n\n
pip install tensorflow\n
\n", "system": "" }, { "instruction": "Unable to open Tensorboard in browser", "input": "", "output": "

Had the same problem this morning. Solved it with

\n\n
tensorboard --logdir=data/ --host localhost --port 8088\n
\n\n

Navigated the browser to http://localhost:8088

\n", "system": "" }, { "instruction": "Flatten batch in tensorflow", "input": "", "output": "

You can do it easily with tf.reshape() without knowing the batch size.

\n\n
x = tf.placeholder(tf.float32, shape=[None, 9,2])\nshape = x.get_shape().as_list()        # a list: [None, 9, 2]\ndim = numpy.prod(shape[1:])            # dim = prod(9,2) = 18\nx2 = tf.reshape(x, [-1, dim])           # -1 means \"all\"\n
\n\n

The -1 in the last line means "whatever remains", no matter what the batch size is at runtime. You can see this in the tf.reshape() docs.
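The same -1 inference works in numpy, which makes it easy to check:

```python
import numpy as np

x = np.zeros((5, 9, 2))            # pretend batch size 5 arrived at runtime
dim = int(np.prod(x.shape[1:]))    # 9 * 2 = 18
x2 = x.reshape(-1, dim)            # -1 lets the batch dimension be inferred
print(x2.shape)  # (5, 18)
```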

\n\n
\n\n

Update: shape = [None, 3, None]

\n\n

Thanks @kbrose. For cases where more than one dimension is undefined, we can use tf.shape() together with tf.reduce_prod() instead.

\n\n
x = tf.placeholder(tf.float32, shape=[None, 3, None])\ndim = tf.reduce_prod(tf.shape(x)[1:])\nx2 = tf.reshape(x, [-1, dim])\n
\n\n

tf.shape() returns a shape Tensor which can be evaluated at runtime. The difference between x.get_shape() and tf.shape(x) can be seen in the docs.

\n\n

I also tried tf.contrib.layers.flatten() elsewhere. It is the simplest option for the first case, but it can't handle the second.

\n", "system": "" }, { "instruction": "TensorFlow - numpy-like tensor indexing", "input": "", "output": "

You can actually do that now with tf.gather_nd. Let's say you have a matrix m like the following:

\n\n
| 1 2 3 4 |\n| 5 6 7 8 |\n
\n\n

And you want to build a matrix r of size 4x2, built from elements of m, like this:

\n\n
| 3 6 |\n| 2 7 |\n| 5 3 |\n| 1 1 |\n
\n\n

Each element of r corresponds to a row and column of m, and you can have matrices rows and cols with these indices (zero-based, since we are programming, not doing math!):

\n\n
       | 0 1 |         | 2 1 |\nrows = | 0 1 |  cols = | 1 2 |\n       | 1 0 |         | 0 2 |\n       | 0 0 |         | 0 0 |\n
\n\n

Which you can stack into a 3-dimensional tensor like this:

\n\n
| | 0 2 | | 1 1 | |\n| | 0 1 | | 1 2 | |\n| | 1 0 | | 0 2 | |\n| | 0 0 | | 0 0 | |\n
\n\n

This way, you can get from m to r through rows and cols as follows:

\n\n
import numpy as np\nimport tensorflow as tf\n\nm = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])\nrows = np.array([[0, 1], [0, 1], [1, 0], [0, 0]])\ncols = np.array([[2, 1], [1, 2], [0, 2], [0, 0]])\n\nx = tf.placeholder('float32', (None, None))\nidx1 = tf.placeholder('int32', (None, None))\nidx2 = tf.placeholder('int32', (None, None))\nresult = tf.gather_nd(x, tf.stack((idx1, idx2), -1))\n\nwith tf.Session() as sess:\n    r = sess.run(result, feed_dict={\n        x: m,\n        idx1: rows,\n        idx2: cols,\n    })\nprint(r)\n
\n\n

Output:

\n\n
[[ 3.  6.]\n [ 2.  7.]\n [ 5.  3.]\n [ 1.  1.]]\n
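The same lookup can be cross-checked with numpy fancy indexing, which is what tf.gather_nd reproduces here:

```python
import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
rows = np.array([[0, 1], [0, 1], [1, 0], [0, 0]])
cols = np.array([[2, 1], [1, 2], [0, 2], [0, 0]])

r = m[rows, cols]  # pairs each row index with its matching column index
print(r)           # same values as the tf.gather_nd output above
```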
\n", "system": "" }, { "instruction": "How to fix AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)", "input": "", "output": "

Your traceback doesn't show which version of the charset-normalizer package is installed.

\n

I got a similar error when training an xgboost model using Ray. I had charset-normalizer v3.0.1 installed. Upgrading it to v3.1.0 fixed the error.

\n

Try running

\n
pip install --force-reinstall charset-normalizer==3.1.0\n
\n

or simply

\n
pip install -U --force-reinstall charset-normalizer  \n
\n

Then rerun your code and see if that does the trick!

\n", "system": "" }, { "instruction": "Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found", "input": "", "output": "Step 1\n
 Move to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin\n
\nStep 2\n
Rename file cusolver64_11.dll  To  cusolver64_10.dll \n
\n

\"enter

\n
 cusolver64_10.dll \n
\n

\"enter

\n", "system": "" }, { "instruction": "Why is Tensorflow not recognizing my GPU after conda install?", "input": "", "output": "

August 2021 Conda install may be working now, as according to @ComputerScientist in the comments below, conda install tensorflow-gpu==2.4.1 will give cudatoolkit-10.1.243 and cudnn-7.6.5

\n

The following was written in Jan 2021 and is out of date

\n

Currently conda install tensorflow-gpu installs tensorflow v2.3.0 and does NOT install the conda cudnn or cudatoolkit packages. Installing them manually (e.g. with conda install cudatoolkit=10.1) does not seem to fix the problem either.

\n

A solution is to install an earlier version of tensorflow, which does install cudnn and cudatoolkit, then upgrade with pip

\n
conda install tensorflow-gpu=2.1\npip install tensorflow-gpu==2.3.1\n
\n

(2.4.0 uses cuda 11.0 and cudnn 8.0, however cudnn 8.0 is not in anaconda as of 16/12/2020)

\n

Edit: please also see @GZ0's answer, which links to a github discussion with a one-line solution

\n", "system": "" }, { "instruction": "How to install libcusolver.so.11", "input": "", "output": "

If you want a concrete solution, just find libcusolver.so.10 on your machine and create a link to libcusolver.so.11:

\n

The following command solved the issue for me:

\n
sudo ln -s /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcusolver.so.10 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcusolver.so.11\n
\n

Credit to: https://github.com/tensorflow/tensorflow/issues/43947

\n", "system": "" }, { "instruction": "Replacing placeholder for tensorflow v2", "input": "", "output": "

Make your code work with TF 2.0

\n

Below is a sample code which you can use with TF 2.0.\nIt relies on the compatibility API\nthat is accessible as tensorflow.compat.v1, and requires disabling v2 behaviors.\nI don't know if it behaves as you expect.\nIf not, please give us more detail about what you are trying to achieve.

\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\n@tf.function\ndef construct_graph(graph_dict, inputs, outputs):\n    queue = inputs[:]\n    make_dict = {}\n    for key, val in graph_dict.items():\n        if key in inputs:\n            make_dict[key] = tf.placeholder(tf.float32, name=key)\n        else:\n            make_dict[key] = None\n    # Breadth-First search of graph starting from inputs\n    while len(queue) != 0:\n        cur = graph_dict[queue[0]]\n        for outg in cur["outgoing"]:\n            if make_dict[outg[0]]: # If discovered node, do add/multiply operation\n                make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]]))\n            else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue\n                make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1])\n                for outgo in graph_dict[outg[0]]["outgoing"]:\n                    queue.append(outgo[0])\n        queue.pop(0)\n    # Returns one data graph for each output\n    return [make_dict[x] for x in outputs]\n\ndef main():\n    graph_def = {\n        "B": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "C": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "A": {\n            "incoming": [("B", 2.0), ("C", -1.0)],\n            "outgoing": [("D", 3.0)]\n        },\n        "D": {\n            "incoming": [("A", 2.0)],\n            "outgoing": []\n        }\n    }\n    outputs = construct_graph(graph_def, ["B", "C"], ["A"])\n    print(outputs)\n\nif __name__ == "__main__":\n    main()\n
\n
[<tf.Tensor 'PartitionedCall:0' shape=<unknown> dtype=float32>]\n
\n

Migrate your code to TF 2.0

\n

While the above snippet is valid, it is still tied to TF 1.0.\nTo migrate it to TF 2.0 you have to refactor a little bit your code.

\n

Instead of returning a list of tensors (which were callable via session.run in TF 1.0), I advise you to return a list of\ntf.keras.Model objects.

\n

Below is a working example:

\n
import tensorflow as tf\n\ndef construct_graph(graph_dict, inputs, outputs):\n    queue = inputs[:]\n    make_dict = {}\n    for key, val in graph_dict.items():\n        if key in inputs:\n            # Use keras.Input instead of placeholders\n            make_dict[key] = tf.keras.Input(name=key, shape=(), dtype=tf.dtypes.float32)\n        else:\n            make_dict[key] = None\n    # Breadth-First search of graph starting from inputs\n    while len(queue) != 0:\n        cur = graph_dict[queue[0]]\n        for outg in cur["outgoing"]:\n            if make_dict[outg[0]] is not None: # If discovered node, do add/multiply operation\n                make_dict[outg[0]] = tf.keras.layers.add([\n                    make_dict[outg[0]],\n                    tf.keras.layers.multiply(\n                        [[outg[1]], make_dict[queue[0]]],\n                    )],\n                )\n            else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue\n                make_dict[outg[0]] = tf.keras.layers.multiply(\n                    [make_dict[queue[0]], [outg[1]]]\n                )\n                for outgo in graph_dict[outg[0]]["outgoing"]:\n                    queue.append(outgo[0])\n        queue.pop(0)\n    # Returns one data graph for each output\n    model_inputs = [make_dict[key] for key in inputs]\n    model_outputs = [make_dict[key] for key in outputs]\n    return [tf.keras.Model(inputs=model_inputs, outputs=o) for o in model_outputs]\n\ndef main():\n    graph_def = {\n        "B": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "C": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "A": {\n            "incoming": [("B", 2.0), ("C", -1.0)],\n            "outgoing": [("D", 3.0)]\n        },\n        "D": {\n            "incoming": [("A", 2.0)],\n            "outgoing": []\n        }\n    }\n    outputs = construct_graph(graph_def, ["B", "C"], 
["A"])\n    print("Builded models:", outputs)\n    for o in outputs:\n        o.summary(120)\n        print("Output:", o((1.0, 1.0)))\n\nif __name__ == "__main__":\n    main()\n
\n

What to notice here?

\n\n

Here is the output of the code.

\n
Builded models: [<tensorflow.python.keras.engine.training.Model object at 0x7fa0b49f0f50>]\nModel: "model"\n________________________________________________________________________________________________________________________\nLayer (type)                           Output Shape               Param #       Connected to                            \n========================================================================================================================\nB (InputLayer)                         [(None,)]                  0                                                     \n________________________________________________________________________________________________________________________\nC (InputLayer)                         [(None,)]                  0                                                     \n________________________________________________________________________________________________________________________\ntf_op_layer_mul (TensorFlowOpLayer)    [(None,)]                  0             B[0][0]                                 \n________________________________________________________________________________________________________________________\ntf_op_layer_mul_1 (TensorFlowOpLayer)  [(None,)]                  0             C[0][0]                                 \n________________________________________________________________________________________________________________________\nadd (Add)                              (None,)                    0             tf_op_layer_mul[0][0]                   \n                                                                                tf_op_layer_mul_1[0][0]                 \n========================================================================================================================\nTotal params: 0\nTrainable params: 0\nNon-trainable params: 
0\n________________________________________________________________________________________________________________________\nOutput: tf.Tensor([2.], shape=(1,), dtype=float32)\n
\n", "system": "" }, { "instruction": "Force Anaconda to install tensorflow 1.14", "input": "", "output": "

You can force Anaconda to install a specific version of any package by appending an = operator with the desired version to the package name.

\n\n

So, if you want to install tensorflow 1.14, you can run the following command:

\n\n
conda install -c conda-forge tensorflow=1.14\n
\n\n

You can replace 1.14 with any other version. To see the available versions of tensorflow on Anaconda, you can run:

\n\n
conda search tensorflow\n
\n", "system": "" }, { "instruction": "ValueError: Shape mismatch: The shape of labels (received (15,)) should equal the shape of logits except for the last dimension (received (5, 3))", "input": "", "output": "

The difference between sparse_categorical_crossentropy and categorical_crossentropy is whether your targets are one-hot encoded.

\n\n

The shape of the label batch is (5, 3), which means the labels have been one-hot encoded. So you should use the categorical_crossentropy loss function.

\n\n
model.compile(optimizer='adam',\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\n
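To see that the two losses agree and differ only in label format, here is a plain-NumPy sketch (not Keras code) computing both on the same made-up batch:

```python
import numpy as np

# Illustrative batch: 5 samples, 3 classes (values are made up).
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

int_labels = np.array([0, 2, 1, 1, 0])   # format for sparse_categorical_crossentropy
one_hot = np.eye(3)[int_labels]          # format for categorical_crossentropy

# Both losses average -log p(true class); only the label encoding differs.
sparse_loss = -np.log(probs[np.arange(5), int_labels]).mean()
categorical_loss = -(one_hot * np.log(probs)).sum(axis=1).mean()
assert np.isclose(sparse_loss, categorical_loss)
```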
\n", "system": "" }, { "instruction": "Why do some object detection neural networks return all zeros in OpenCV 4.1.0?", "input": "", "output": "

Some models expect normalized values for channel intensity. Normally, an image is represented with uint8 pixels (values ranging from 0 to 255), while such a model expects float32 values in the range -1 to 1. Without this conversion, your image would be interpreted as a blank picture (mostly all white pixels).

\n

Here's a python function that could be used to normalize the image:

\n
def processFrame(image):\n    img = cv2.resize(image, (input_width, input_height)) # input sizes of detector \n    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    # Normalize pixel values if using a floating model\n    img_rgb = (np.float32(img_rgb) - 127.5) / 127.5\n
\n", "system": "" }, { "instruction": "ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer`", "input": "", "output": "

I have found a workaround for the problem.\nFor anyone who encounters the same issue, you can use the Lambda layer to wrap your tensorflow operations. This is what I did:

\n\n
from tensorflow.python.keras.layers import Lambda;\n\ndef norm(fc2):\n\n    fc2_norm = K.l2_normalize(fc2, axis = 3);\n    illum_est = tf.reduce_sum(fc2_norm, axis = (1, 2));\n    illum_est = K.l2_normalize(illum_est);\n\n    return illum_est;\n\nillum_est = Lambda(norm)(fc2);\n
\n", "system": "" }, { "instruction": "How to add report_tensor_allocations_upon_oom to RunOptions in Keras", "input": "", "output": "

TF1 solution:

\n

It's not as hard as it seems: what you need to know is that, according to the documentation, the **kwargs parameter passed to model.compile is forwarded to session.run

\n

So you can do something like:

\n
import tensorflow as tf\nrun_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True)\n\nmodel.compile(loss = "...", optimizer = "...", metrics = "..", options = run_opts)\n
\n

And it should be passed directly each time session.run is called.

\n

TF2:

\n

The solution above works only for tf1. For tf2, unfortunately, it appears there is no easy solution yet.

\n", "system": "" }, { "instruction": "Darknet YOLO image size", "input": "", "output": "

You don't have to resize it, because Darknet will do it for you!

\n\n

It means you really don't need to resize, and you can use different image sizes during your training. What you posted above is just the network configuration; there should be a full network definition as well. The height and the width tell you the network resolution. Darknet also keeps the aspect ratio; check e.g this.

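To see what "keeps aspect ratio" means in practice, here is a sketch (the function name is mine, not Darknet's) of how letterbox dimensions are computed before the loader pads the remainder:

```python
def letterbox_dims(img_w, img_h, net_w, net_h):
    # Scale so the binding side exactly fills the network input while
    # keeping aspect ratio; the leftover area is padded by the loader.
    if net_w * img_h < net_h * img_w:      # width is the binding constraint
        return net_w, (img_h * net_w) // img_w
    return (img_w * net_h) // img_h, net_h

# A 1280x720 photo fed to a 416x416 network is scaled to 416x234,
# not stretched to 416x416.
assert letterbox_dims(1280, 720, 416, 416) == (416, 234)
```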
\n", "system": "" }, { "instruction": "Keras early stopping callback error, val_loss metric not available", "input": "", "output": "

If the error only occurs when you use smaller datasets, you're very likely using datasets small enough to not have a single sample in the validation set.

\n\n

Thus it cannot calculate a validation loss.

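As a rough illustration (deliberately not Keras' exact splitting code), integer truncation on a tiny dataset can easily leave the validation set empty, so no val_loss is ever produced for the callback to monitor:

```python
# Generic sketch: a framework that truncates num_samples * split to an
# integer ends up with zero validation samples for small datasets.
def naive_val_size(num_samples, validation_split):
    return int(num_samples * validation_split)  # int() truncates toward zero

assert naive_val_size(9, 0.1) == 0     # too few samples: empty validation set
assert naive_val_size(100, 0.1) == 10  # enough samples: val_loss is available
```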
\n", "system": "" }, { "instruction": "Feeding .npy (numpy files) into tensorflow data pipeline", "input": "", "output": "

It is actually possible to read NPY files directly with TensorFlow instead of TFRecords. The key pieces are tf.data.FixedLengthRecordDataset and tf.io.decode_raw, along with a look at the documentation of the NPY format. For simplicity, let's suppose that a float32 NPY file containing an array with shape (N, K) is given, and you know the number of features K beforehand, as well as the fact that it is a float32 array. An NPY file is just a binary file with a small header followed by the raw array data (object arrays are different, but we're considering numbers now). In short, you can find the size of this header with a function like this:

\n
def npy_header_offset(npy_path):\n    with open(str(npy_path), 'rb') as f:\n        if f.read(6) != b'\\x93NUMPY':\n            raise ValueError('Invalid NPY file.')\n        version_major, version_minor = f.read(2)\n        if version_major == 1:\n            header_len_size = 2\n        elif version_major == 2:\n            header_len_size = 4\n        else:\n            raise ValueError('Unknown NPY file version {}.{}.'.format(version_major, version_minor))\n        header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size)))\n        header = f.read(header_len)\n        if not header.endswith(b'\\n'):\n            raise ValueError('Invalid NPY file.')\n        return f.tell()\n
\n
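As a quick sanity check of the helper, everything after the computed offset should be exactly N * K * itemsize bytes of raw data (a simplified copy of the function is repeated here so the example is self-contained):

```python
import os
import tempfile
import numpy as np

def npy_header_offset(npy_path):
    # Simplified copy of the helper above (assumes a valid version 1 or 2 file).
    with open(str(npy_path), 'rb') as f:
        if f.read(6) != b'\x93NUMPY':
            raise ValueError('Invalid NPY file.')
        version_major, version_minor = f.read(2)
        header_len_size = 2 if version_major == 1 else 4
        header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size)))
        f.read(header_len)
        return f.tell()

arr = np.zeros((10, 4), dtype=np.float32)
path = os.path.join(tempfile.mkdtemp(), 'example.npy')
np.save(path, arr)
offset = npy_header_offset(path)
# Everything after the header is exactly N * K * itemsize bytes of raw data.
assert os.path.getsize(path) - offset == arr.size * arr.itemsize
```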

With this you can create a dataset like this:

\n
import tensorflow as tf\n\nnpy_file = 'my_file.npy'\nnum_features = ...\ndtype = tf.float32\nheader_offset = npy_header_offset(npy_file)\ndataset = tf.data.FixedLengthRecordDataset([npy_file], num_features * dtype.size, header_bytes=header_offset)\n
\n

Each element of this dataset contains a long string of bytes representing a single example. You can now decode it to obtain an actual array:

\n
dataset = dataset.map(lambda s: tf.io.decode_raw(s, dtype))\n
\n

The elements will have indeterminate shape, though, because TensorFlow does not keep track of the length of the strings. You can just enforce the shape since you know the number of features:

\n
dataset = dataset.map(lambda s: tf.reshape(tf.io.decode_raw(s, dtype), (num_features,)))\n
\n

Similarly, you can choose to perform this step after batching, or combine it in whatever way you feel like.

\n

The limitation is that you have to know the number of features in advance. It is possible to extract it from the NumPy header, though that is a bit of a pain and in any case hardly doable from within TensorFlow, so the file names would need to be known in advance. Another limitation is that, as it stands, the solution requires you to use either only one file per dataset or files that have the same header size, although if you know that all the arrays have the same size that should actually be the case.

\n

Admittedly, if one considers this kind of approach it may just be better to have a pure binary file without headers, and either hard code the number of features or read them from a different source...

\n", "system": "" }, { "instruction": "Tensor is not an element of this graph", "input": "", "output": "

Try first:

\n\n
import tensorflow as tf\ngraph = tf.get_default_graph()\n
\n\n

Then, when you need to use predict:

\n\n
with graph.as_default():\n     y = model.predict(X)\n
\n", "system": "" }, { "instruction": "difference between Tensorflow's Graph and GraphDef", "input": "", "output": "

Graph, or Computational Graph, is the core concept of tensorflow for representing computation. When you use tensorflow, you first create your own Computation Graph and pass it to tensorflow. How is that done? As you may know, tensorflow supports many front-end programming languages, like Python, C++, Java and Go, while the core language is C++; how do the other languages transform the Graph to C++? They use a tool called protobuf, which can generate language-specific stubs, and that's where GraphDef comes from: it's a serialized version of Graph.

\n\n
\n

which one should I have to run a graph loaded from protobuf file (.pb)

\n
\n\n

You should read your .pb file into a GraphDef, bind the GraphDef to a (default) Graph, then use a session to run the Graph for computation, like in the following code:

\n\n
import tensorflow as tf\nfrom tensorflow.python.platform import gfile\nwith tf.Session() as sess:\n    model_filename ='PATH_TO_PB.pb'\n    with gfile.FastGFile(model_filename, 'rb') as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n        g_in = tf.import_graph_def(graph_def)\nLOGDIR='/logs/tests/1/'\ntrain_writer = tf.summary.FileWriter(LOGDIR)\ntrain_writer.add_graph(sess.graph)\n
\n", "system": "" }, { "instruction": "how to implement early stopping in tensorflow", "input": "", "output": "

Here is my implementation of early stopping; you can adapt it:

\n\n

Early stopping can be applied at certain stages of the training process, such as at the end of each epoch. Specifically, in my case, I monitor the test (validation) loss at each epoch, and after the test loss has not improved for 20 epochs (self.require_improvement = 20), the training is interrupted.

\n\n

You can set the max epochs to 10000 or 20000 or whatever you want (self.max_epochs = 10000).

\n\n
  self.require_improvement= 20\n  self.max_epochs = 10000\n
\n\n

Here is my training function where I use the early stopping:

\n\n

def train(self):

\n\n
# training data\n    train_input = self.Normalize(self.x_train)\n    train_output = self.y_train.copy()            \n#===============\n    save_sess=self.sess # this used to compare the result of previous sess with actual one\n# ===============\n  #costs history :\n    costs = []\n    costs_inter=[]\n# =================\n  #for early stopping :\n    best_cost=1000000 \n    stop = False\n    last_improvement=0\n# ================\n    n_samples = train_input.shape[0] # size of the training set\n# ===============\n   #train the mini_batches model using the early stopping criteria\n    epoch = 0\n    while epoch < self.max_epochs and stop == False:\n        #train the model on the traning set by mini batches\n        #suffle then split the training set to mini-batches of size self.batch_size\n        seq =list(range(n_samples))\n        random.shuffle(seq)\n        mini_batches = [\n            seq[k:k+self.batch_size]\n            for k in range(0,n_samples, self.batch_size)\n        ]\n\n        avg_cost = 0. 
# The average cost of mini_batches\n        step= 0\n\n        for sample in mini_batches:\n\n            batch_x = x_train.iloc[sample, :]\n            batch_y =train_output.iloc[sample, :]\n            batch_y = np.array(batch_y).flatten()\n\n            feed_dict={self.X: batch_x,self.Y:batch_y, self.is_train:True}\n\n            _, cost,acc=self.sess.run([self.train_step, self.loss_, self.accuracy_],feed_dict=feed_dict)\n            avg_cost += cost *len(sample)/n_samples \n            print('epoch[{}] step [{}] train -- loss : {}, accuracy : {}'.format(epoch,step, avg_cost, acc))\n            step += 100\n\n        #cost history since the last best cost\n        costs_inter.append(avg_cost)\n\n        #early stopping based on the validation set/ max_steps_without_decrease of the loss value : require_improvement\n        if avg_cost < best_cost:\n            save_sess= self.sess # save session\n            best_cost = avg_cost\n            costs +=costs_inter # costs history of the validatio set\n            last_improvement = 0\n            costs_inter= []\n        else:\n            last_improvement +=1\n        if last_improvement > self.require_improvement:\n            print(\"No improvement found during the ( self.require_improvement) last iterations, stopping optimization.\")\n            # Break out from the loop.\n            stop = True\n            self.sess=save_sess # restore session with the best cost\n\n        ## Run validation after every epoch : \n        print('---------------------------------------------------------')\n        self.y_validation = np.array(self.y_validation).flatten()\n        loss_valid, acc_valid = self.sess.run([self.loss_,self.accuracy_], \n                                              feed_dict={self.X: self.x_validation, self.Y: self.y_validation,self.is_train: True})\n        print(\"Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}\".format(epoch + 1, loss_valid, acc_valid))\n        
print('---------------------------------------------------------')\n\n        epoch +=1\n
\n\n

We can summarize the important code here:

\n\n
def train(self):\n  ...\n      #costs history :\n        costs = []\n        costs_inter=[]\n      #for early stopping :\n        best_cost=1000000 \n        stop = False\n        last_improvement=0\n       #train the mini_batches model using the early stopping criteria\n        epoch = 0\n        while epoch < self.max_epochs and stop == False:\n            ...\n            for sample in mini_batches:\n            ...                   \n            #cost history since the last best cost\n            costs_inter.append(avg_cost)\n\n            #early stopping based on the validation set/ max_steps_without_decrease of the loss value : require_improvement\n            if avg_cost < best_cost:\n                save_sess= self.sess # save session\n                best_cost = avg_cost\n                costs +=costs_inter # costs history of the validatio set\n                last_improvement = 0\n                costs_inter= []\n            else:\n                last_improvement +=1\n            if last_improvement > self.require_improvement:\n                print(\"No improvement found during the ( self.require_improvement) last iterations, stopping optimization.\")\n                # Break out from the loop.\n                stop = True\n                self.sess=save_sess # restore session with the best cost\n            ...\n            epoch +=1\n
\n\n

Hope it will help someone :).

\n", "system": "" }, { "instruction": "How to properly use tf.metrics.accuracy?", "input": "", "output": "

TL;DR

\n\n

The accuracy function tf.metrics.accuracy calculates how often predictions match labels based on two local variables it creates: total and count, which are used to compute the frequency with which logits match labels.

\n\n
acc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1), \n                                  predictions=tf.argmax(logits,1))\n\nprint(sess.run([acc, acc_op]))\nprint(sess.run([acc]))\n# Output\n#[0.0, 0.66666669]\n#[0.66666669]\n
\n\n\n\n

To understand why the acc returns 0.0, go through the details below.

\n\n
\n\n

Details using a simple example:

\n\n
logits = tf.placeholder(tf.int64, [2,3])\nlabels = tf.Variable([[0, 1, 0], [1, 0, 1]])\n\nacc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1),   \n                                  predictions=tf.argmax(logits,1))\n
\n\n

Initialize the variables:

\n\n

Since metrics.accuracy creates two local variables total and count, we need to call local_variables_initializer() to initialize them.

\n\n
sess = tf.Session()\n\nsess.run(tf.local_variables_initializer())\nsess.run(tf.global_variables_initializer())\n\nstream_vars = [i for i in tf.local_variables()]\nprint(stream_vars)\n\n#[<tf.Variable 'accuracy/total:0' shape=() dtype=float32_ref>,\n# <tf.Variable 'accuracy/count:0' shape=() dtype=float32_ref>]\n
\n\n

Understanding update ops and accuracy calculation:

\n\n
print('acc:',sess.run(acc, {logits:[[0,1,0],[1,0,1]]}))\n#acc: 0.0\n\nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [0.0, 0.0]\n
\n\n

The above returns 0.0 for accuracy because total and count are zeros, in spite of giving matching inputs.

\n\n
print('ops:', sess.run(acc_op, {logits:[[0,1,0],[1,0,1]]})) \n#ops: 1.0\n\nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [2.0, 2.0]\n
\n\n

With the new inputs, the accuracy is calculated when the update op is called. Note: since all the logits and labels match, we get an accuracy of 1.0, and the local variables total and count actually give the total correctly predicted and the total comparisons made.

\n\n

Now we call accuracy with the new inputs (not the update ops):

\n\n
print('acc:', sess.run(acc,{logits:[[1,0,0],[0,1,0]]}))\n#acc: 1.0\n
\n\n

The accuracy call doesn't update the metrics with the new inputs; it just returns the value using the two local variables. Note: the logits and labels don't match in this case. Now calling the update op again:

\n\n
print('op:',sess.run(acc_op,{logits:[[0,1,0],[0,1,0]]}))\n#op: 0.75 \nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [3.0, 4.0]\n
\n\n

The metrics are updated with the new inputs.

\n\n
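The total/count bookkeeping described above can be mimicked in plain Python (a sketch of the semantics, not TensorFlow's actual implementation):

```python
class StreamingAccuracy:
    """Plain-Python sketch of the total/count mechanics of tf.metrics.accuracy."""
    def __init__(self):
        self.total = 0.0   # correct predictions so far
        self.count = 0.0   # comparisons made so far

    def update(self, labels, predictions):   # analogous to acc_op
        self.total += sum(l == p for l, p in zip(labels, predictions))
        self.count += len(labels)
        return self.result()

    def result(self):                        # analogous to acc
        return self.total / self.count if self.count else 0.0

m = StreamingAccuracy()
assert m.result() == 0.0                  # like acc before any update op: 0.0
assert m.update([1, 0], [1, 0]) == 1.0    # first batch: 2 correct out of 2
assert m.update([1, 1], [1, 0]) == 0.75   # running totals: 3 correct out of 4
```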
\n\n

More information on how to use the metrics during training, and how to reset them during validation, can be found here.

\n", "system": "" }, { "instruction": "What is the advantage of using an InputLayer (or an Input) in a Keras model with Tensorflow tensors?", "input": "", "output": "

It would seem that InputLayer has some uses:

\n\n", "system": "" }, { "instruction": "tensorboard: command not found", "input": "", "output": "

You could call tensorboard as a python module like this:

\n\n
python3 -m tensorboard.main --logdir=~/my/training/dir\n
\n\n

or add this to your .profile

\n\n

alias tensorboard='python3 -m tensorboard.main'

\n", "system": "" }, { "instruction": "Tensorflow: How does tf.get_variable work?", "input": "", "output": "

tf.get_variable(name) creates a new variable called name in the tensorflow graph (or appends a suffix if name already exists in the current scope).

\n

In your example, you're creating a python variable called var1.

\n

The name of that variable in the tensorflow graph is not var1, but Variable:0.

\n

Every node you define has its own name, which you can specify or let tensorflow assign a default (and always different) one. You can see the name by accessing the name property of the python variable (i.e. print(var1.name)).

\n

On your second line, you're defining a Python variable var2 whose name in the tensorflow graph is var1.

\n

The script

\n
import tensorflow as tf\n\nvar1 = tf.Variable(3.,dtype=tf.float64)\nprint(var1.name)\nvar2 = tf.get_variable("var1",[],dtype=tf.float64)\nprint(var2.name)\n
\n

In fact prints:

\n
Variable:0\nvar1:0\n
\n

If you, instead, want to define a variable (node) called var1 in the tensorflow graph and then get a reference to that node, you cannot simply use tf.get_variable("var1"), because it would create a new, different variable called var1_1.

\n

This script

\n
var1 = tf.Variable(3.,dtype=tf.float64, name="var1")\nprint(var1.name)\nvar2 = tf.get_variable("var1",[],dtype=tf.float64)\nprint(var2.name)\n
\n

prints:

\n
var1:0\nvar1_1:0\n
\n

If you want to create a reference to the node var1, you first:

\n
    \n
  1. Have to replace tf.Variable with tf.get_variable. The variables created with tf.Variable can't be shared, while the latter can.

    \n
  2. Know what the scope of var1 is and allow the reuse of that scope when declaring the reference.

    \n
\n

Looking at the code is the better way for understanding

\n
import tensorflow as tf\n\n#var1 = tf.Variable(3.,dtype=tf.float64, name="var1")\nvar1 = tf.get_variable(initializer=tf.constant_initializer(3.), dtype=tf.float64, name="var1", shape=())\ncurrent_scope = tf.contrib.framework.get_name_scope()\nprint(var1.name)\nwith tf.variable_scope(current_scope, reuse=True):\n    var2 = tf.get_variable("var1",[],dtype=tf.float64)\n    print(var2.name)\n
\n

outputs:

\n
var1:0\nvar1:0\n
\n", "system": "" }, { "instruction": "How does one train multiple models in a single script in TensorFlow when there are GPUs present?", "input": "", "output": "

I think that running all models in one single script can be bad practice in the long term (see my suggestion below for a better alternative). However, if you would like to do it, here is a solution: You can encapsulate your TF session into a process with the multiprocessing module, this will make sure TF releases the session memory once the process is done. Here is a code snippet:

\n\n
from multiprocessing import Pool\nimport contextlib\ndef my_model((param1, param2, param3)): # Note the extra (), required by the pool syntax\n    < your code >\n\nnum_pool_workers=1 # can be bigger than 1, to enable parallel execution \nwith contextlib.closing(Pool(num_pool_workers)) as po: # This ensures that the processes get closed once they are done\n     pool_results = po.map_async(my_model,\n                                    ((param1, param2, param3)\n                                     for param1, param2, param3 in params_list))\n     results_list = pool_results.get()\n
\n\n

Note from OP: The random number generator seed does not reset automatically with the multi-processing library if you choose to use it. Details here: Using python multiprocessing with different random seed for each process

\n\n

About TF resource allocation: Usually TF allocates much more resources than it needs. Many times you can restrict each process to use a fraction of the total GPU memory, and discover through trial and error the fraction your script requires.

\n\n

You can do it with the following snippet

\n\n
gpu_memory_fraction = 0.3 # Choose this number through trial and error\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction,)\nsession_config = tf.ConfigProto(gpu_options=gpu_options)\nsess = tf.Session(config=session_config, graph=graph)\n
\n\n

Note that sometimes TF increases the memory usage in order to accelerate the execution. Therefore, reducing the memory usage might make your model run slower.

\n\n

Answers to the new questions in your edit/comments:

\n\n
    \n
  1. Yes, Tensorflow is re-allocated every time a new process is created, and cleared once a process ends.

  2. The for-loop in your edit should also do the job. I suggest using Pool instead, because it will enable you to run several models concurrently on a single GPU. See my notes about setting gpu_memory_fraction and \"choosing the maximal number of processes\". Also note that: (1) The Pool map runs the loop for you, so you don't need an outer for-loop once you use it. (2) In your example, you should have something like mdl=get_model(args) before calling train().

  3. Weird tuple parentheses: Pool only accepts a single argument, therefore we use a tuple to pass multiple arguments. See multiprocessing.pool.map and function with two arguments for more details. As suggested in one answer, you can make it more readable with

    \n\n
    def train_mdl(params):\n    (x,y)=params\n    < your code >\n
  4. As @Seven suggested, you can use the CUDA_VISIBLE_DEVICES environment variable to choose which GPU to use for your process. You can do it from within your python script at the beginning of the process function (train_mdl):

    \n\n
    import os # the import can be on the top of the python script\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"{}\".format(gpu_id)\n
\n\n

A better practice for executing your experiments would be to isolate your training/evaluation code from the hyper parameters/ model search code.\nE.g. have a script named train.py, which accepts a specific combination of hyper parameters and references to your data as arguments, and executes training for a single model.

\n\n

Then, to iterate through all the possible combinations of parameters, you can use a simple task (jobs) queue and submit all the possible combinations of hyper-parameters as separate jobs. The task queue will feed your jobs one at a time to your machine. Usually, you can also set the queue to execute a number of processes concurrently (see details below).

\n\n

Specifically, I use task spooler, which is super easy to install and handy (it doesn't require admin privileges; details below).

\n\n

Basic usage is (see notes below about task spooler usage):

\n\n
ts <your-command>\n
\n\n

In practice, I have a separate python script that manages my experiments, sets all the arguments per specific experiment, and sends the jobs to the ts queue.

\n\n

Here are some relevant snippets of python code from my experiments manager:

\n\n

run_bash executes a bash command

\n\n
def run_bash(cmd):\n    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')\n    out = p.stdout.read().strip()\n    return out  # This is the stdout from the shell command\n
\n\n

The next snippet sets the number of concurrent processes to be run (see note below about choosing the maximal number of processes):

\n\n
max_job_num_per_gpu = 2\nrun_bash('ts -S %d'%max_job_num_per_gpu)\n
\n\n

The next snippet iterates through a list of all combinations of hyper params / model params. Each element of the list is a dictionary, where the keys are the command line arguments for the train.py script

\n\n
for combination_dict in combinations_list:\n\n    job_cmd = 'python train.py ' + '  '.join(\n            ['--{}={}'.format(flag, value) for flag, value in combination_dict.iteritems()])\n\n    submit_cmd = \"ts bash -c '%s'\" % job_cmd\n    run_bash(submit_cmd)\n
\n\n

A note about choosing the maximal number of processes:

\n\n

If you are short on GPUs, you can use the gpu_memory_fraction you found to set the number of processes as max_job_num_per_gpu=int(1/gpu_memory_fraction)

\n\n

Notes about task spooler (ts):

\n\n
    \n
  1. You could set the number of concurrent processes to run (\"slots\") with:

    \n\n

    ts -S <number-of-slots>

  2. Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path and you're done.

  3. You can set up multiple queues (I use it for multiple GPUs), with

    \n\n

    TS_SOCKET=<path_to_queue_name> ts <your-command>

    \n\n

    e.g.

    \n\n

    TS_SOCKET=/tmp/socket-ts.gpu_queue_1 ts <your-command>

    \n\n

    TS_SOCKET=/tmp/socket-ts.gpu_queue_2 ts <your-command>

  4. See here for further usage examples.
\n\n

A note about automatically setting the path names and file names:\nOnce you separate your main code from the experiment manager, you will need an efficient way to generate file names and directory names, given the hyper-params. I usually keep my important hyper params in a dictionary and use the following function to generate a single chained string from the dictionary key-value pairs.\nHere are the functions I use for doing it:

\n\n
def build_string_from_dict(d, sep='%'):\n    \"\"\"\n     Builds a string from a dictionary.\n     Mainly used for formatting hyper-params to file names.\n     Key-value pairs are sorted by the key name.\n\n    Args:\n        d: dictionary\n\n    Returns: string\n    :param d: input dictionary\n    :param sep: key-value separator\n\n    \"\"\"\n\n    return sep.join(['{}={}'.format(k, _value2str(d[k])) for k in sorted(d.keys())])\n\n\ndef _value2str(val):\n    if isinstance(val, float): \n        # %g means: \"Floating point format.\n        # Uses lowercase exponential format if exponent is less than -4 or not less than precision,\n        # decimal format otherwise.\"\n        val = '%g' % val\n    else:\n        val = '{}'.format(val)\n    val = re.sub('\\.', '_', val)\n    return val\n
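For example (reproducing the helpers above in compact form), a hyper-parameter dictionary turns into a deterministic, file-system-safe name:

```python
import re

# Compact copies of the helpers above, so this usage example is runnable.
def _value2str(val):
    # %g keeps floats short; dots become underscores for file-name safety
    val = '%g' % val if isinstance(val, float) else '{}'.format(val)
    return re.sub(r'\.', '_', val)

def build_string_from_dict(d, sep='%'):
    # key-value pairs are sorted by key so the name is deterministic
    return sep.join('{}={}'.format(k, _value2str(d[k])) for k in sorted(d))

params = {'lr': 0.001, 'batch_size': 32, 'model': 'resnet'}
name = build_string_from_dict(params)
assert name == 'batch_size=32%lr=0_001%model=resnet'
```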
\n", "system": "" }, { "instruction": "Converting from Pandas dataframe to TensorFlow tensor object", "input": "", "output": "

Here is one solution I found that works on Google Colab:

\n
import pandas as pd\nimport tensorflow as tf\n#Read the file to a pandas object\ndata=pd.read_csv('filedir')\n#convert the pandas object to a tensor\ndata=tf.convert_to_tensor(data)\ntype(data)\n
\n

This will print something like:

\n
tensorflow.python.framework.ops.Tensor\n
\n", "system": "" }, { "instruction": "Hyperparameter optimization for Deep Learning Structures using Bayesian Optimization", "input": "", "output": "
\n

Although I am still not fully understanding the optimization\n algorithm, I feel like it will help me greatly.

\n
\n\n

First up, let me briefly explain this part.\nBayesian Optimization methods aim to deal with exploration-exploitation trade off in the multi-armed bandit problem. In this problem, there is an unknown function, which we can evaluate in any point, but each evaluation costs (direct penalty or opportunity cost), and the goal is to find its maximum using as few trials as possible. Basically, the trade off is this: you know the function in a finite set of points (of which some are good and some are bad), so you can try an area around the current local maximum, hoping to improve it (exploitation), or you can try a completely new area of space, that can potentially be much better or much worse (exploration), or somewhere in between.

\n\n

Bayesian Optimization methods (e.g. PI, EI, UCB), build a model of the target function using a Gaussian Process (GP) and at each step choose the most \"promising\" point based on their GP model (note that \"promising\" can be defined differently by different particular methods).

\n\n

Here's an example:

\n\n

\"sin(x)*x\"

\n\n

The true function is f(x) = x * sin(x) (black curve) on [-10, 10] interval. Red dots represent each trial, red curve is the GP mean, blue curve is the mean plus or minus one standard deviation. \nAs you can see, the GP model doesn't match the true function everywhere, but the optimizer fairly quickly identified the \"hot\" area around -8 and started to exploit it.

\n\n
\n

How do I set up the Bayesian Optimization with regards to a deep\n network?

\n
\n\n

In this case, the space is defined by (possibly transformed) hyperparameters, usually a multidimensional unit hypercube.

\n\n

For example, suppose you have three hyperparameters: a learning rate \u03b1 in [0.001, 0.01], the regularizer \u03bb in [0.1, 1] (both continuous) and the hidden layer size N in [50..100] (integer). The space for optimization is a 3-dimensional cube [0, 1]*[0, 1]*[0, 1]. Each point (p0, p1, p2) in this cube corresponds to a trinity (\u03b1, \u03bb, N) by the following transformation:

\n\n
p0 -> \u03b1 = 10**(p0-3)\np1 -> \u03bb = 10**(p1-1)\np2 -> N = int(p2*50 + 50)\n
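That transformation is easy to make concrete (a sketch; the function name is mine):

```python
def cube_to_params(p0, p1, p2):
    # Maps a point of the unit cube [0,1]^3 to the hyperparameter space above.
    alpha = 10 ** (p0 - 3)   # learning rate in [0.001, 0.01], log scale
    lam = 10 ** (p1 - 1)     # regularizer in [0.1, 1], log scale
    n = int(p2 * 50 + 50)    # hidden layer size in [50, 100]
    return alpha, lam, n

# The cube's corners map to the corners of the hyperparameter ranges.
assert cube_to_params(0, 0, 0) == (0.001, 0.1, 50)
assert cube_to_params(1, 1, 1) == (0.01, 1, 100)
```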
\n\n
\n

What is the function I am trying to optimize? Is it the cost of the\n validation set after N epochs?

\n
\n\n

Correct, the target function is neural network validation accuracy. Clearly, each evaluation is expensive, because it requires at least several epochs for training.

\n\n

Also note that the target function is stochastic, i.e. two evaluations on the same point may slightly differ, but it's not a blocker for Bayesian Optimization, though it obviously increases the uncertainty.

\n\n
\n

Is spearmint a good starting point for this task? Any other\n suggestions for this task?

\n
\n\n

spearmint is a good library, you can definitely work with that. I can also recommend hyperopt.

\n\n

In my own research, I ended up writing my own tiny library, basically for two reasons: I wanted to code the exact Bayesian method to use (in particular, I found that a portfolio strategy of UCB and PI converged faster than anything else in my case); plus there is another technique that can save up to 50% of training time, called learning curve prediction (the idea is to skip a full learning cycle when the optimizer is confident the model doesn't learn as fast as in other areas). I'm not aware of any library that implements this, so I coded it myself, and in the end it paid off. If you're interested, the code is on GitHub.

\n", "system": "" }, { "instruction": "Tensorflow Confusion Matrix in TensorBoard", "input": "", "output": "

Here is something I have put together that works reasonably well. I still need to adjust a few things, like the tick placement.

\n\n

\"Confusion

\n\n

Here is the function that will pretty much do everything for you.

\n\n
from textwrap import wrap\nimport re\nimport itertools\nimport tfplot\nimport matplotlib\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\n\n\ndef plot_confusion_matrix(correct_labels, predict_labels, labels, title='Confusion matrix', tensor_name='MyFigure/image', normalize=False):\n    '''\n    Parameters:\n        correct_labels                  : These are your true classification categories.\n        predict_labels                  : These are your predicted classification categories.\n        labels                          : This is a list of labels which will be used to display the axis labels.\n        title='Confusion matrix'        : Title for your matrix.\n        tensor_name = 'MyFigure/image'  : Name for the output summary tensor.\n\n    Returns:\n        summary: TensorFlow summary\n\n    Other items to note:\n        - Depending on the number of categories and the data, you may have to modify the figsize, font sizes etc.\n        - Currently, some of the ticks don't line up due to rotations.\n    '''\n    cm = confusion_matrix(correct_labels, predict_labels, labels=labels)\n    if normalize:\n        # Scale row-normalized fractions to integers in [0, 10] so they can be rendered with the 'd' format below.\n        cm = cm.astype('float') * 10 / cm.sum(axis=1)[:, np.newaxis]\n        cm = np.nan_to_num(cm, copy=True)\n        cm = cm.astype('int')\n\n    np.set_printoptions(precision=2)\n\n    fig = matplotlib.figure.Figure(figsize=(7, 7), dpi=320, facecolor='w', edgecolor='k')\n    ax = fig.add_subplot(1, 1, 1)\n    im = ax.imshow(cm, cmap='Oranges')\n\n    # Split CamelCase label names and wrap long ones so they fit on the axes.\n    classes = [re.sub(r'([a-z](?=[A-Z])|[A-Z](?=[A-Z][a-z]))', r'\\1 ', x) for x in labels]\n    classes = ['\\n'.join(wrap(l, 40)) for l in classes]\n\n    tick_marks = np.arange(len(classes))\n\n    ax.set_xlabel('Predicted', fontsize=7)\n    ax.set_xticks(tick_marks)\n    c = ax.set_xticklabels(classes, fontsize=4, rotation=-90, ha='center')\n    ax.xaxis.set_label_position('bottom')\n    ax.xaxis.tick_bottom()\n\n    ax.set_ylabel('True Label', fontsize=7)\n    ax.set_yticks(tick_marks)\n    ax.set_yticklabels(classes, fontsize=4, va='center')\n    ax.yaxis.set_label_position('left')\n    ax.yaxis.tick_left()\n\n    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n        ax.text(j, i, format(cm[i, j], 'd') if cm[i, j] != 0 else '.', horizontalalignment=\"center\", fontsize=6, verticalalignment='center', color=\"black\")\n    fig.set_tight_layout(True)\n    summary = tfplot.figure.to_summary(fig, tag=tensor_name)\n    return summary\n
\n\n

And here is the rest of the code that you will need to call this functions.

\n\n
''' confusion matrix summaries '''\nimg_d_summary_dir = os.path.join(checkpoint_dir, \"summaries\", \"img\")\nimg_d_summary_writer = tf.summary.FileWriter(img_d_summary_dir, sess.graph)\nimg_d_summary = plot_confusion_matrix(correct_labels, predict_labels, labels, tensor_name='dev/cm')\nimg_d_summary_writer.add_summary(img_d_summary, current_step)\n
\n\n

Confuse away!!!

\n", "system": "" }, { "instruction": "Tensorflow mean squared error loss function", "input": "", "output": "

I would say that the third equation is different, while the 1st and 2nd are formally the same but behave differently due to numerical concerns.

\n\n

I think that the 3rd equation (using l2_loss) is just returning 1/2 of the squared Euclidean norm, that is, the sum of the element-wise square of the input, which is x=prediction-Y. You are not dividing by the number of samples anywhere. Thus, if you have a very large number of samples, the computation may overflow (returning Inf).

\n\n
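To make the scaling difference concrete, here is a plain-Python sketch (no TensorFlow required, toy residual values) of what the two kinds of formulation compute:

```python
residuals = [1.0, -2.0, 3.0]  # stands in for x = prediction - Y

n = len(residuals)
sum_sq = sum(r * r for r in residuals)  # sum of element-wise squares = 14.0

mse = sum_sq / n   # mean of squares: what the reduce_mean-style equations compute
l2 = sum_sq / 2.0  # what tf.nn.l2_loss returns: half the sum, no division by n

# l2 grows linearly with the number of samples, while mse does not:
# l2 == mse * n / 2
```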

The other two are formally the same, computing the mean of the element-wise squared x tensor. However, while the documentation does not specify it explicitly, it is very likely that reduce_mean uses an algorithm adapted to avoid overflowing with a very large number of samples. In other words, it likely does not try to sum everything first and then divide by N, but uses some kind of rolling mean that can adapt to an arbitrary number of samples without necessarily causing an overflow.

\n", "system": "" }, { "instruction": "Dot product of two vectors in tensorflow", "input": "", "output": "

One of the easiest ways to calculate the dot product between two tensors (a vector is a 1-D tensor) is by using tf.tensordot.

\n\n
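For 1-D tensors, tf.tensordot(a, b, 1) computes the scalar sum of element-wise products. A plain-Python sketch of the same computation (no TensorFlow needed):

```python
def dot(a, b):
    """Dot product of two equal-length 1-D sequences: sum of a[i] * b[i]."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

dot([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])  # 130
```

The TensorFlow snippet below produces the same value (130.0) for these inputs.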
a = tf.placeholder(tf.float32, shape=(5))\nb = tf.placeholder(tf.float32, shape=(5))\n\ndot_a_b = tf.tensordot(a, b, 1)\n\nwith tf.Session() as sess:\n    print(dot_a_b.eval(feed_dict={a: [1, 2, 3, 4, 5], b: [6, 7, 8, 9, 10]}))\n# results: 130.0\n
\n", "system": "" }, { "instruction": "Show training and validation accuracy in TensorFlow using same graph", "input": "", "output": "

You can reuse the accuracy node, but you need to use two different SummaryWriters: one for the training runs and one for the test data. You also have to assign the scalar summary for accuracy to a variable.

\n\n
accuracy_summary = tf.scalar_summary(\"Training Accuracy\", accuracy)\ntf.scalar_summary(\"SomethingElse\", foo)\nsummary_op = tf.merge_all_summaries()\nsummaries_dir = '/me/mydir/'\ntrain_writer = tf.train.SummaryWriter(summaries_dir + '/train', sess.graph)\ntest_writer = tf.train.SummaryWriter(summaries_dir + '/test')\n
\n\n

Then in your training loop you run the normal training step and record your summaries with the train_writer. In addition, you run the graph on the test set every 100th iteration and record only the accuracy summary with the test_writer.

\n\n
# Record train set summaries, and train\nsummary, _ = sess.run([summary_op, train_step], feed_dict=...)\ntrain_writer.add_summary(summary, n)\nif n % 100 == 0:  # Record summaries and test-set accuracy\n  summary, acc = sess.run([accuracy_summary, accuracy], feed_dict=...)\n  test_writer.add_summary(summary, n)\n  print('Accuracy at step %s: %s' % (n, acc))\n
\n\n

You can then point TensorBoard to the parent directory (summaries_dir) and it will load both data sets.

\n\n

This can be also found in the TensorFlow HowTo's https://www.tensorflow.org/versions/r0.11/how_tos/summaries_and_tensorboard/index.html

\n", "system": "" }, { "instruction": "Cannot import keras after installation", "input": "", "output": "

Diagnose

\n\n

If you have pip installed (you should have it if you use Python 3.5), list the installed Python packages, like this:

\n\n
$ pip list | grep -i keras\nKeras (1.1.0)\n
\n\n

If you don\u2019t see Keras, it means that the previous installation failed or is incomplete (this library has these dependencies: numpy (1.11.2), PyYAML (3.12), scipy (0.18.1), six (1.10.0), and Theano (0.8.2)).

\n\n

Consult the pip.log to see what\u2019s wrong.

\n\n

You can also display your Python path like this:

\n\n
$ python3 -c 'import sys, pprint; pprint.pprint(sys.path)'\n['',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python35.zip',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/plat-darwin',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/lib-dynload',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages']\n
\n\n

Make sure the Keras library appears in the /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages path (the path is different on Ubuntu).

\n\n

If not, try to uninstall it and retry the installation:

\n\n
$ pip uninstall Keras\n
\n\n

Use a virtualenv

\n\n

It\u2019s a bad idea to use and pollute your system-wide Python. I recommend using a virtualenv (see this guide).

\n\n

A good practice is to create a virtualenv directory (in your home, for instance) and store your virtualenvs in it:

\n\n
cd virtualenv/\nvirtualenv -p python3.5 py-keras\nsource py-keras/bin/activate\npip install -q -U pip setuptools wheel\n
\n\n

Then install Keras:

\n\n
pip install keras\n
\n\n

You get:

\n\n
$ pip list\nKeras (1.1.0)\nnumpy (1.11.2)\npip (8.1.2)\nPyYAML (3.12)\nscipy (0.18.1)\nsetuptools (28.3.0)\nsix (1.10.0)\nTheano (0.8.2)\nwheel (0.30.0a0)\n
\n\n

But you also need to install extra libraries, like TensorFlow:

\n\n
$ python -c \"import keras\"\nUsing TensorFlow backend.\nTraceback (most recent call last):\n  ...\nImportError: No module named 'tensorflow'\n
\n\n

The TensorFlow installation guide is here: https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#pip-installation

\n", "system": "" }, { "instruction": "How to perform k-fold cross validation with tensorflow?", "input": "", "output": "

I know this question is old but in case someone is looking to do something similar, expanding on ahmedhosny's answer:

\n\n

The new tensorflow datasets API has the ability to create dataset objects using python generators, so along with scikit-learn's KFold one option can be to create a dataset from the KFold.split() generator:

\n\n
import numpy as np\n\nfrom sklearn.model_selection import LeaveOneOut,KFold\n\nimport tensorflow as tf\nimport tensorflow.contrib.eager as tfe\ntf.enable_eager_execution()\n\nfrom sklearn.datasets import load_iris\ndata = load_iris()\nX=data['data']\ny=data['target']\n\ndef make_dataset(X_data,y_data,n_splits):\n\n    def gen():\n        for train_index, test_index in KFold(n_splits).split(X_data):\n            X_train, X_test = X_data[train_index], X_data[test_index]\n            y_train, y_test = y_data[train_index], y_data[test_index]\n            yield X_train,y_train,X_test,y_test\n\n    return tf.data.Dataset.from_generator(gen, (tf.float64,tf.float64,tf.float64,tf.float64))\n\ndataset=make_dataset(X,y,10)\n
\n\n

Then one can iterate through the dataset, either in graph-based tensorflow or using eager execution. Using eager execution:

\n\n
for X_train,y_train,X_test,y_test in tfe.Iterator(dataset):\n    ....\n
\n", "system": "" }, { "instruction": "Tensorflow and Multiprocessing: Passing Sessions", "input": "", "output": "

You can't use Python multiprocessing to pass a TensorFlow Session into a multiprocessing.Pool in the straightforward way, because the Session object can't be pickled (it's fundamentally not serializable, since it may manage GPU memory and similar state).

\n\n

I'd suggest parallelizing the code using actors, which are essentially the parallel-computing analog of \"objects\" and are used to manage state in the distributed setting.

\n\n

Ray is a good framework for doing this. You can define a Python class which manages the TensorFlow Session and exposes a method for running your simulation.

\n\n
import ray\nimport tensorflow as tf\n\nray.init()\n\n@ray.remote\nclass Simulator(object):\n    def __init__(self):\n        self.sess = tf.Session()\n        self.simple_model = tf.constant([1.0])\n\n    def simulate(self):\n        return self.sess.run(self.simple_model)\n\n# Create two actors.\nsimulators = [Simulator.remote() for _ in range(2)]\n\n# Run two simulations in parallel.\nresults = ray.get([s.simulate.remote() for s in simulators])\n
\n\n

Here are a few more examples of parallelizing TensorFlow with Ray.

\n\n

See the Ray documentation. Note that I'm one of the Ray developers.

\n", "system": "" }, { "instruction": "Is there a way of determining how much GPU memory is in use by TensorFlow?", "input": "", "output": "

(1) There is some limited support with Timeline for logging memory allocations. Here is an example for its usage:

\n\n
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n    run_metadata = tf.RunMetadata()\n    summary, _ = sess.run([merged, train_step],\n                          feed_dict=feed_dict(True),\n                          options=run_options,\n                          run_metadata=run_metadata)\n    train_writer.add_run_metadata(run_metadata, 'step%03d' % i)\n    train_writer.add_summary(summary, i)\n    print('Adding run metadata for', i)\n    tl = timeline.Timeline(run_metadata.step_stats)\n    print(tl.generate_chrome_trace_format(show_memory=True))\n    trace_file = tf.gfile.Open(name='timeline', mode='w')\n    trace_file.write(tl.generate_chrome_trace_format(show_memory=True))\n
\n\n

You can give this code a try with the MNIST example (mnist with summaries)

\n\n

This will generate a tracing file named timeline, which you can open with chrome://tracing. Note that this only gives approximate GPU memory usage statistics. It basically simulates a GPU execution, but doesn't have access to the full graph metadata. It also can't know how many variables have been assigned to the GPU.

\n\n

(2) For a very coarse measure of GPU memory usage, nvidia-smi will show the total device memory usage at the time you run the command.

\n\n

nvprof can show the on-chip shared memory usage and register usage at the CUDA kernel level, but doesn't show the global/device memory usage.

\n\n

Here is an example command: nvprof --print-gpu-trace matrixMul

\n\n

And more details here:\nhttp://docs.nvidia.com/cuda/profiler-users-guide/#abstract

\n", "system": "" }, { "instruction": "Google Colab error: Import "tensorflow.keras.models" could not be resolved(reportMissingImports)", "input": "", "output": "

This worked for me:

\n
from tensorflow import keras\nfrom keras.layers import Dense\nfrom keras.models import Sequential, load_model\n
\n", "system": "" }, { "instruction": "What is the difference between MaxPool and MaxPooling layers in Keras?", "input": "", "output": "

They are basically the same thing (i.e. aliases of each other). For future readers who might want to know how this could be determined: go to the documentation page of the layer (you can use the list here) and click on "View aliases". This is then accompanied by a blue plus sign (+).

\n

For example, if you go to MaxPool2D documentation and do this, you will find MaxPooling2D in the list of aliases of this layer as follow:

\n

\"MaxPool

\n", "system": "" }, { "instruction": "what does class_mode parameter in Keras image_gen.flow_from_directory() signify?", "input": "", "output": "

class_mode: One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned:\n\n- "categorical" will be 2D one-hot encoded labels,\n- "binary" will be 1D binary labels,\n- "sparse" will be 1D integer labels,\n- "input" will be images identical to the input images (mainly used to work with autoencoders),\n- if None, no labels are returned (the generator will only yield batches of image data, which is useful with model.predict_generator()).\n\nPlease note that in the case of class_mode=None, the data still needs to reside in a subdirectory of directory for it to work correctly.

\n", "system": "" }, { "instruction": "Issue with add method in tensorflow : AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'", "input": "", "output": "

For me, the fix was importing

\n\n
from tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Conv2D, Flatten, Dense\n
\n\n

instead of

\n\n
from keras import Sequential\nfrom keras.layers import Conv2D, Flatten, Dense\n
\n\n

There seem to be some weird compatibility issues between keras and tensorflow.keras.

\n", "system": "" }, { "instruction": "How to downgrade tensorflow version in colab?", "input": "", "output": "

You can downgrade Tensorflow to a previous version without GPU support on Google Colab. I ran:

\n
!pip install tensorflow==1.14.0\nimport tensorflow as tf\nprint(tf.__version__)\n
\n

which initially returned

\n
2.0.0-dev20190130\n
\n

but when I returned to it after a few hours, I got the version I requested:

\n
1.14.0\n
\n

Trying to downgrade to a version with GPU support:

\n
!pip install tensorflow-gpu==1.14.0\n
\n

requires restarting the runtime and fails: importing TensorFlow (import tensorflow as tf) then raises:

\n
\nImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory\n
\n

Update

\n

When the import fails you can always downgrade CUDA to version 9.0 using following commands

\n
!wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb\n!dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb\n!apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub\n!apt-get update\n!apt-get install cuda=9.0.176-1\n
\n

You can check the version of CUDA by running:

\n
!nvcc --version\n
\n

Second update

\n

This code now seems to fail, see the follow-up question at How to downgrade to tensorflow-gpu version 1.12 in google colab

\n", "system": "" }, { "instruction": "Python: Neural Network - TypeError: 'History' object is not subscriptable", "input": "", "output": "

A call to model.fit() returns a History object that has a member history, which is of type dict.

\n\n

So you can replace:

\n\n
model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= \n[early_stopping_monitor], verbose=False)\n
\n\n

with

\n\n
history2 = model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= \n[early_stopping_monitor], verbose=False)\n
\n\n

Similarly for other models.

\n\n

and then you can use:

\n\n
plt.plot(history1.history['val_loss'], 'r', history2.history['val_loss'], 'b', \nhistory3.history['val_loss'], 'g')\n
\n", "system": "" }, { "instruction": "ModuleNotFoundError: No module named 'tensorflow.examples'", "input": "", "output": "

I think you should use the following on TensorFlow 2:

\n\n
import tensorflow_datasets\nmnist = tensorflow_datasets.load('mnist')\n
\n", "system": "" }, { "instruction": "Is there a better way to guess possible unknown variables without brute force than I am doing? Machine learning?", "input": "", "output": "

I hate to let you down but I really don't think a neural net will help at all for this problem, and IMO the best answer to your question is the advice \"don't waste your time trying neural nets\".

\n\n

An easy rule of thumb for deciding whether or not neural networks are applicable is to think, \"can an average adult human solve this problem reasonably well in a few seconds?\" For problems like \"what's in this image\", \"respond to this question\", or \"transcribe this audio clip\", the answer is yes. But for your problem, the answer is a most definite no.

\n\n

Neural networks have limitations, and one is that they don't deal well with highly logical problems. This is because the answers are generally not \"smooth\". If you take an image and slightly change a handful of pixels, the content of the image is still the same. If you take an audio clip and insert a few milliseconds of noise, a neural net will probably still be able to figure out what's said. But in your problem, change a single day's \"total basket value\" by only 1 unit, and your answer(s) will drastically change.

\n\n

It seems that the only way to solve your problem is with a \"classical\" algorithmic approach. As currently stated, there might not be any algorithm better than brute force, and it might not be possible to rule out much. For example, what if every day has the property that all fruits are priced the same? The count of each fruit can vary, as long as the total number of fruits is fixed, so the number of possibilities is still exponential in the number of fruits. If your goal is to \"produce a list of possibilities\", then no algorithm can be better than exponential time since this list can be exponentially large in some cases.

\n\n

It's interesting that part of your problem can be reduced to an integer linear program (ILP). Consider a single day, where you are given the basket total B and each fruit's cost c_i, for i=1 through i=n (if n is the total number of distinct fruits). Let's say the prices are large, so it's not obvious that you can \"fill up\" the basket with unit cost fruits. It can be hard in this situation to even find a single solution. Formulated as an ILP, this is equivalent to finding integer values of x_i such that:

\n\n
sum_i (x_i*c_i) = x_1*c_1 + x_2*c_2 + ... + x_n*c_n = B\n
\n\n

and x_i >= 0 for all 1 <= i <= n (can't have negative fruits), and sum_i x_i <= 100 (can have at most 100 fruits).

\n\n
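As a toy illustration of this constraint system (brute-force enumeration over a tiny hypothetical instance - the prices, total, and per-fruit bound below are made up; a real ILP solver is what you'd use in practice):

```python
from itertools import product

# Hypothetical instance: three fruit prices c_i and a basket total B.
costs = [3, 5, 7]
B = 22
max_per_fruit = 4  # a per-fruit bound, analogous to the x_i constraints above

# Enumerate all non-negative integer vectors (x_0, x_1, x_2) hitting the total.
solutions = [combo
             for combo in product(range(max_per_fruit + 1), repeat=len(costs))
             if sum(x * c for x, c in zip(combo, costs)) == B]
# (4, 2, 0), (0, 3, 1) and (1, 1, 2) all satisfy 3*x0 + 5*x1 + 7*x2 == 22
```

Even on this tiny instance there are several solutions, which illustrates why the list of possibilities can blow up as the number of fruits grows.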

The good news is that decent ILP solvers exist -- you can just hand over the above formulas and the solver will do its best to find a single solution. You can even add an \"objective function\" that the solver will maximize or minimize -- minimizing sum_i x_i has the effect of minimizing the total number of fruits in the basket. The bad news is that ILP is NP-complete, so there is almost no hope of finding an efficient solution for a large number of fruits (which equals the number of variables x_i).

\n\n

I think the best approach forward is to try the ILP approach, but also introduce some more constraints on the scenario. For example, what if all fruits had a different prime number cost? This has the nice property that if you find one solution, you can enumerate a bunch of other related solutions. If an apple costs m and an orange costs n, where m and n are relatively prime, then you can \"trade\" n*x apples for m*x oranges without changing the basket total, for any integer x>0 (so long as you have enough apples and oranges to begin with). If you choose all fruits to have different prime number costs, then all of the costs will be pairwise relatively prime. I think this approach will result in relatively few solutions for a given day.

\n\n

You might also consider other constraints, such as \"there can't be more than 5 fruits of a single kind in the basket\" (add the constraint x_i <= 5), or \"there can be at most 5 distinct kinds of fruits in the basket\" (but this is harder to encode as an ILP constraint). Adding these kinds of constraints will make it easier for the ILP solver to find a solution.

\n\n

Of course the above discussion is focused on a single day, and you have multiple days' worth of data. If the hardest part of the problem is finding any solution for any day at all (which happens if your prices are large), then using an ILP solver will give you a large boost. If solutions are easy to find (which happens if you have a very-low-cost fruit that can \"fill up\" your basket), and the hardest part of the problem is finding solutions that are \"consistent\" across multiple days, then the ILP approach might not be the best fit, and in general this problem seems much more difficult to reason about.

\n\n

Edit: and as mentioned in the comments, for some interpretations of the \"10% change\" constraint, you can even encode the entire multi-day problem as an ILP.

\n", "system": "" }, { "instruction": "Get the bounding box coordinates in the TensorFlow object detection API tutorial", "input": "", "output": "
\n

I tried printing output_dict['detection_boxes'] but I am not sure what\n the numbers mean

\n
\n\n

You can check out the code for yourself. visualize_boxes_and_labels_on_image_array is defined here.

\n\n
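For orientation: a normalized box [ymin, xmin, ymax, xmax] maps to pixel coordinates as sketched below (plain Python; to_pixels is a hypothetical helper, not part of the API):

```python
def to_pixels(box, im_width, im_height):
    """Convert one normalized [ymin, xmin, ymax, xmax] box to pixel coords."""
    ymin, xmin, ymax, xmax = box
    return (xmin * im_width, xmax * im_width,    # left, right
            ymin * im_height, ymax * im_height)  # top, bottom

to_pixels([0.5, 0.25, 1.0, 0.75], 400, 200)  # (100.0, 300.0, 100.0, 200.0)
```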

Note that you are passing use_normalized_coordinates=True. If you trace the function calls, you will see your numbers [ 0.56213236, 0.2780568 , 0.91445708, 0.69120586] etc. are the values [ymin, xmin, ymax, xmax] where the image coordinates:

\n\n
(left, right, top, bottom) = (xmin * im_width, xmax * im_width, \n                              ymin * im_height, ymax * im_height)\n
\n\n

are computed by the function:

\n\n
def draw_bounding_box_on_image(image,\n                           ymin,\n                           xmin,\n                           ymax,\n                           xmax,\n                           color='red',\n                           thickness=4,\n                           display_str_list=(),\n                           use_normalized_coordinates=True):\n  \"\"\"Adds a bounding box to an image.\n  Bounding box coordinates can be specified in either absolute (pixel) or\n  normalized coordinates by setting the use_normalized_coordinates argument.\n  Each string in display_str_list is displayed on a separate line above the\n  bounding box in black text on a rectangle filled with the input 'color'.\n  If the top of the bounding box extends to the edge of the image, the strings\n  are displayed below the bounding box.\n  Args:\n    image: a PIL.Image object.\n    ymin: ymin of bounding box.\n    xmin: xmin of bounding box.\n    ymax: ymax of bounding box.\n    xmax: xmax of bounding box.\n    color: color to draw bounding box. Default is red.\n    thickness: line thickness. Default value is 4.\n    display_str_list: list of strings to display in box\n                      (each to be shown on its own line).\n    use_normalized_coordinates: If True (default), treat coordinates\n      ymin, xmin, ymax, xmax as relative to the image.  Otherwise treat\n      coordinates as absolute.\n  \"\"\"\n  draw = ImageDraw.Draw(image)\n  im_width, im_height = image.size\n  if use_normalized_coordinates:\n    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n                                  ymin * im_height, ymax * im_height)\n
\n", "system": "" }, { "instruction": "ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?", "input": "", "output": "

Does this happen when you run the model for the first time (upon opening a new python console)?

\n\n

If not, you need to clear you computational graph. You can do that by putting this line at the beginning of your script.

\n\n
tf.reset_default_graph()\n
\n", "system": "" }, { "instruction": "Building a mutlivariate, multi-task LSTM with Keras", "input": "", "output": "

So:

\n\n
\n

Firstly, how would I slice up my data for the batches? Since I have\n three full years, does it make sense to simply push through three\n batches, each time of size one year? Or does it make more sense to\n make smaller batches (say 30 days) and also to using sliding windows?\n I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30\n days each, each time sliding with 5 days? Or is this not really the\n way LSTMs should be used? (Note that there is quite a bit of\n seasonality in the data, to I need to catch that kind of long-term\n trend as well).

\n
\n\n

Honestly - modeling such data is really hard. First of all - I wouldn't advise you to use LSTMs, as they are designed to capture a slightly different kind of data (e.g. NLP or speech, where it's really important to model long-term dependencies - not seasonality), and they need a lot of data to train. I would rather advise you to use either GRU or SimpleRNN, which are way easier to train and should be better for your task.

\n\n

When it comes to batching - I would definitely advise you to use a fixed-window technique, as it ends up producing way more data points than feeding a whole year or a whole month at once. Set the number of days as a meta-parameter, to be optimized by trying different values in training and choosing the most suitable one.

\n\n
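Such fixed-window slicing might be sketched like this (plain Python; the window and stride values are arbitrary examples of the meta-parameters to tune):

```python
def sliding_windows(series, window, stride):
    """Slice a time series into overlapping fixed-size windows.

    Each slice becomes one training sample of length `window`;
    consecutive windows start `stride` steps apart.
    """
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]

days = list(range(20))  # stand-in for 20 days of data
batches = sliding_windows(days, window=8, stride=4)
len(batches)  # 4 windows: [0..7], [4..11], [8..15], [12..19]
```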

When it comes to seasonality - of course, this is a case but:

\n\n\n\n

What I advise you to do instead is:

\n\n\n\n
\n

Secondly, does it make sense to use return_sequences=True here? In\n other words, I keep my Y data as is (50, 1096, 3) so that (as far as\n I've understood it) there is a prediction at every time step for which\n a loss can be calculated against the target data? Or would I be better\n off with return_sequences=False, so that only the final value of each\n batch is used to evaluate the loss (i.e. if using yearly batches, then\n in 2016 for product 1, we evaluate against the Dec 2016 value of\n (1,1,1)).

\n
\n\n

Using return_sequences=True might be useful but only in following cases:

\n\n
    \n
  1. When a given LSTM (or another recurrent layer) will be followed by yet another recurrent layer.
  2. In a scenario when you feed a shifted original series as the output, whereby you simultaneously learn the model over different time windows, etc.
\n\n

The way described in the second point might be an interesting approach, but keep in mind that it might be a little bit hard to implement, as you will need to rewrite your model in order to obtain production results. What also might be harder is that you'll need to test your model against many types of time instabilities - and such an approach might make this totally unfeasible.

\n\n
\n

Thirdly how should I deal with the 50 different products? They are\n different, but still strongly correlated and we've seen with other\n approaches (for example an MLP with simple time-windows) that the\n results are better when all products are considered in the same model.\n Some ideas that are currently on the table are:

\n \n \n
\n\n

I would definitely go for the first choice, but before providing a detailed explanation I will discuss the disadvantages of the 2nd and 3rd ones:

\n\n\n\n

Before getting to my choice - let's discuss yet another issue - redundancies in your dataset. I guess that you have 3 kinds of features:

\n\n\n\n

Now you have a table of size (timesteps, m * n, products). I would transform it into a table of shape (timesteps, products * m + n), as the general features are the same for all products. This will save you a lot of memory and also make it feasible to feed to a recurrent network (keep in mind that recurrent layers in keras have only one feature dimension - whereas you had two - product and feature ones).

\n\n

So why is the first approach the best in my opinion? Because it takes advantage of many interesting dependencies in the data. Of course - this might harm the training process - but there is an easy trick to overcome this: dimensionality reduction. You could e.g. train PCA on your 150-dimensional vector and reduce its size to a much smaller one - thanks to which you have your dependencies modeled by PCA and your output has a much more feasible size.

\n\n
\n

Fourthly, how do I deal with validation data? Normally I would just\n keep out a randomly selected sample to validate against, but here we\n need to keep the time ordering in place. So I guess the best is to\n just keep a few months aside?

\n
\n\n

This is a really important question. From my experience - you need to test your solution against many types of instabilities in order to be sure that it works fine. So a few rules which you should keep in mind:

\n\n\n\n

The last point might be a little bit vague - so to provide you some examples:

\n\n\n\n

Of course - you could try yet other hold-outs.

\n\n
\n

Fifthly, and this is the part that is probably the most unclear to me\n - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three\n batches (each time up to Nov) with the goal of training the model to\n predict the next value (Dec 2014, Dec 2015, Dec 2016). If I want to\n use these results in 2017, how does this actually work? If I\n understood it correctly, the only thing I can do in this instance is\n to then feed the model all the data points for Jan to Nov 2017 and it\n will give me back a prediction for Dec 2017. Is that correct? However,\n if I were to use return_sequences=True, then trained on all data up to\n Dec 2016, would I then be able to get a prediction for Jan 2017 just\n by giving the model the features observed at Jan 2017? Or do I need to\n also give it the 12 months before Jan 2017? What about Feb 2017, do I\n in addition need to give the value for 2017, plus a further 11 months\n before that? (If it sounds like I'm confused, it's because I am!)

\n
\n\n

This depends on how you've built your model:

\n\n\n\n

Here, more info on what kind of model you've chosen is needed.

\n", "system": "" }, { "instruction": "What is the difference between [], [None], None and () for the shape of a placeholder?", "input": "", "output": "

TensorFlow uses arrays rather than tuples. It converts tuples to arrays. Therefore [] and () are equivalent.

\n\n

Now, consider this code example:

\n\n
x = tf.placeholder(dtype=tf.int32, shape=[], name=\"foo1\")\ny = tf.placeholder(dtype=tf.int32, shape=[None], name=\"foo2\")\nz = tf.placeholder(dtype=tf.int32, shape=None, name=\"foo3\")\n\nval1 = np.array((1, 2, 3))\nval2 = 45\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n\n    #print(sess.run(x, feed_dict = {x: val1}))  # Fails\n    print(sess.run(y, feed_dict = {y: val1}))\n    print(sess.run(z, feed_dict = {z: val1}))\n\n    print(sess.run(x, feed_dict = {x: val2}))\n    #print(sess.run(y, feed_dict = {y: val2}))  # Fails\n    print(sess.run(z, feed_dict = {z: val2}))\n
\n\n

As can be seen, placeholder with [] shape takes a single scalar value directly. Placeholder with [None] shape takes a 1-dimensional array and placeholder with None shape can take in any value while computation takes place.
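The three cases can be sketched as a tiny shape-compatibility check in plain Python (an illustration of the rules above, not TensorFlow's actual logic; the helper name is mine):

```python
def is_compatible(spec, value_shape):
    """Toy check mirroring the three placeholder cases.

    spec:        [] (scalar), [None] (any 1-D array), or None (anything).
    value_shape: () for a scalar, (n,) for a 1-D array, and so on.
    """
    if spec is None:                   # shape=None accepts any value
        return True
    if len(spec) != len(value_shape):  # otherwise the rank must match
        return False
    # None in a dimension means "any size"; a number must match exactly
    return all(s is None or s == v for s, v in zip(spec, value_shape))

print(is_compatible([], ()))        # scalar into shape=[]    -> True
print(is_compatible([], (3,)))      # 1-D array into shape=[] -> False
print(is_compatible([None], (3,)))  # 1-D array into [None]   -> True
print(is_compatible(None, (2, 2)))  # anything into None      -> True
```

This is exactly the pattern the session runs above exercise: the two commented-out `sess.run` calls fail because the rank check fails.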

\n", "system": "" }, { "instruction": "Keras input explanation: input_shape, units, batch_size, dim, etc", "input": "", "output": "

Units:

\n\n
\n

The amount of \"neurons\", or \"cells\", or whatever the layer has inside it.

\n
\n\n

It's a property of each layer, and yes, it's related to the output shape (as we will see later). In your picture, except for the input layer, which is conceptually different from other layers, you have:

\n\n\n\n

Shapes

\n\n

Shapes are consequences of the model's configuration. Shapes are tuples representing how many elements an array or tensor has in each dimension.

\n\n

Ex: a shape (30,4,10) means an array or tensor with 3 dimensions, containing 30 elements in the first dimension, 4 in the second and 10 in the third, totaling 30*4*10 = 1200 elements or numbers.
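As a quick sanity check, the element count is just the product of the dimensions:

```python
from math import prod

shape = (30, 4, 10)
print(len(shape))   # number of dimensions: 3
print(prod(shape))  # total number of elements: 1200
```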

\n\n

The input shape

\n\n

What flows between layers are tensors. Tensors can be seen as matrices, with shapes.

\n\n

In Keras, the input layer itself is not a layer, but a tensor. It's the starting tensor you send to the first hidden layer. This tensor must have the same shape as your training data.

\n\n

Example: if you have 30 images of 50x50 pixels in RGB (3 channels), the shape of your input data is (30,50,50,3). Then your input layer tensor must have this shape (see details in the \"shapes in keras\" section).

\n\n

Each type of layer requires the input with a certain number of dimensions:

\n\n\n\n

Now, the input shape is the only one you must define, because your model cannot know it. Only you know that, based on your training data.

\n\n

All the other shapes are calculated automatically based on the units and particularities of each layer.

\n\n

Relation between shapes and units - The output shape

\n\n

Given the input shape, all other shapes are results of layers calculations.

\n\n

The \"units\" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer).

\n\n

Each type of layer works in a particular way. Dense layers have output shape based on \"units\", convolutional layers have output shape based on \"filters\". But it's always based on some layer property. (See the documentation for what each layer outputs)

\n\n

Let's show what happens with \"Dense\" layers, which is the type shown in your graph.

\n\n

A dense layer has an output shape of (batch_size,units). So, yes, units, the property of the layer, also defines the output shape.

\n\n\n\n

Weights

\n\n

Weights will be entirely automatically calculated based on the input and the output shapes. Again, each type of layer works in a certain way. But the weights will be a matrix capable of transforming the input shape into the output shape by some mathematical operation.

\n\n

In a dense layer, weights multiply all inputs. It's a matrix with one column per input and one row per unit, but this is often not important for basic work.

\n\n

In the image, if each arrow had a multiplication number on it, all numbers together would form the weight matrix.

\n\n

Shapes in Keras

\n\n

Earlier, I gave an example of 30 images, 50x50 pixels and 3 channels, having an input shape of (30,50,50,3).

\n\n

Since the input shape is the only one you need to define, Keras will demand it in the first layer.

\n\n

But in this definition, Keras ignores the first dimension, which is the batch size. Your model should be able to deal with any batch size, so you define only the other dimensions:

\n\n
input_shape = (50,50,3)\n    #regardless of how many images I have, each image has this shape        \n
\n\n

Optionally, or when it's required by certain kinds of models, you can pass the shape containing the batch size via batch_input_shape=(30,50,50,3) or batch_shape=(30,50,50,3). This limits your training possibilities to this unique batch size, so it should be used only when really required.

\n\n

Either way you choose, tensors in the model will have the batch dimension.

\n\n

So, even if you used input_shape=(50,50,3), when keras sends you messages, or when you print the model summary, it will show (None,50,50,3).

\n\n

The first dimension is the batch size, it's None because it can vary depending on how many examples you give for training. (If you defined the batch size explicitly, then the number you defined will appear instead of None)

\n\n

Also, in advanced works, when you actually operate directly on the tensors (inside Lambda layers or in the loss function, for instance), the batch size dimension will be there.

\n\n\n\n

Dim

\n\n

And in the end, what is dim?

\n\n

If your input shape has only one dimension, you don't need to give it as a tuple, you give input_dim as a scalar number.

\n\n

So, in your model, where your input layer has 3 elements, you can use any of these two:

\n\n\n\n
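The equivalence between the two spellings (a shape tuple versus a scalar input_dim) can be sketched as a small helper. This is purely illustrative, roughly what Keras reduces both options to internally; the function name is mine:

```python
def normalize_input_spec(input_shape=None, input_dim=None):
    """Illustrative only: reduce both spellings to one shape tuple."""
    if input_dim is not None:
        return (input_dim,)       # scalar dim -> 1-element tuple
    return tuple(input_shape)     # tuple passes through unchanged

# Both spellings describe the same 1-D input with 3 elements:
print(normalize_input_spec(input_dim=3))       # (3,)
print(normalize_input_spec(input_shape=(3,)))  # (3,)
```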

But when dealing directly with the tensors, often dim will refer to how many dimensions a tensor has. For instance a tensor with shape (25,10909) has 2 dimensions.

\n\n
\n\n

Defining your image in Keras

\n\n

Keras has two ways of doing it: Sequential models, or the functional API Model. I don't like using the Sequential model; you'll have to leave it behind anyway once you want models with branches.

\n\n

PS: here I ignored other aspects, such as activation functions.

\n\n

With the Sequential model:

\n\n
from keras.models import Sequential  \nfrom keras.layers import *  \n\nmodel = Sequential()    \n\n#start from the first hidden layer, since the input is not actually a layer   \n#but inform the shape of the input, with 3 elements.    \nmodel.add(Dense(units=4,input_shape=(3,))) #hidden layer 1 with input\n\n#further layers:    \nmodel.add(Dense(units=4)) #hidden layer 2\nmodel.add(Dense(units=1)) #output layer   \n
\n\n

With the functional API Model:

\n\n
from keras.models import Model   \nfrom keras.layers import * \n\n#Start defining the input tensor:\ninpTensor = Input((3,))   \n\n#create the layers and pass them the input tensor to get the output tensor:    \nhidden1Out = Dense(units=4)(inpTensor)    \nhidden2Out = Dense(units=4)(hidden1Out)    \nfinalOut = Dense(units=1)(hidden2Out)   \n\n#define the model's start and end points    \nmodel = Model(inpTensor,finalOut)\n
\n\n

Shapes of the tensors

\n\n

Remember you ignore batch sizes when defining layers:

\n\n\n", "system": "" }, { "instruction": "Understanding Keras Long Short Term Memories (LSTMs)", "input": "", "output": "

As a complement to the accepted answer, this answer shows keras behaviors and how to achieve each picture.

\n\n

General Keras behavior

\n\n

The standard keras internal processing is always a many to many as in the following picture (where I used features=2, pressure and temperature, just as an example):

\n\n

\"ManyToMany\"

\n\n

In this image, I increased the number of steps to 5, to avoid confusion with the other dimensions.

\n\n

For this example:

\n\n\n\n

Our input array should then be something shaped as (N,5,2):

\n\n
        [     Step1      Step2      Step3      Step4      Step5\nTank A:    [[Pa1,Ta1], [Pa2,Ta2], [Pa3,Ta3], [Pa4,Ta4], [Pa5,Ta5]],\nTank B:    [[Pb1,Tb1], [Pb2,Tb2], [Pb3,Tb3], [Pb4,Tb4], [Pb5,Tb5]],\n  ....\nTank N:    [[Pn1,Tn1], [Pn2,Tn2], [Pn3,Tn3], [Pn4,Tn4], [Pn5,Tn5]],\n        ]\n
\n\n

Inputs for sliding windows

\n\n

Often, LSTM layers are supposed to process the entire sequences. Dividing them into windows may not be the best idea. The layer has internal states about how a sequence is evolving as it steps forward. Windows eliminate the possibility of learning long sequences, limiting all sequences to the window size.

\n\n

In windows, each window is part of a long original sequence, but by Keras they will be seen each as an independent sequence:

\n\n
        [     Step1    Step2    Step3    Step4    Step5\nWindow  A:  [[P1,T1], [P2,T2], [P3,T3], [P4,T4], [P5,T5]],\nWindow  B:  [[P2,T2], [P3,T3], [P4,T4], [P5,T5], [P6,T6]],\nWindow  C:  [[P3,T3], [P4,T4], [P5,T5], [P6,T6], [P7,T7]],\n  ....\n        ]\n
\n\n

Notice that in this case, you have initially only one sequence, but you're dividing it into many sequences to create windows.

\n\n
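The windowing above can be sketched in plain Python (illustrative; the helper name and window size are my choice):

```python
def make_windows(sequence, window_size):
    """Split one long sequence into overlapping windows, each of
    which Keras will then treat as an independent sequence."""
    return [sequence[i:i + window_size]
            for i in range(len(sequence) - window_size + 1)]

# One sequence of 7 steps, 2 features (P, T) per step:
seq = [[1, 10], [2, 20], [3, 30], [4, 40], [5, 50], [6, 60], [7, 70]]
windows = make_windows(seq, 5)

print(len(windows))  # 3 windows (A, B, C as in the diagram)
print(windows[0])    # steps 1..5
print(windows[1])    # steps 2..6
```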

The concept of \"what is a sequence\" is abstract. The important parts are:

\n\n\n\n

Achieving each case with \"single layers\"

\n\n

Achieving standard many to many:

\n\n

\"StandardManyToMany\"

\n\n

You can achieve many to many with a simple LSTM layer, using return_sequences=True:

\n\n
outputs = LSTM(units, return_sequences=True)(inputs)\n\n#output_shape -> (batch_size, steps, units)\n
\n\n

Achieving many to one:

\n\n

Using the exact same layer, keras will do the exact same internal preprocessing, but when you use return_sequences=False (or simply ignore this argument), keras will automatically discard the steps previous to the last:

\n\n

\"ManyToOne\"

\n\n
outputs = LSTM(units)(inputs)\n\n#output_shape -> (batch_size, units) --> steps were discarded, only the last was returned\n
\n\n

Achieving one to many

\n\n

Now, this is not supported by keras LSTM layers alone. You will have to create your own strategy to multiply the steps. There are two good approaches:

\n\n\n\n

One to many with repeat vector

\n\n

In order to fit keras's standard behavior, we need inputs in steps, so we simply repeat the inputs for the length we want:

\n\n

\"OneToManyRepeat\"

\n\n
outputs = RepeatVector(steps)(inputs) #where inputs is (batch,features)\noutputs = LSTM(units,return_sequences=True)(outputs)\n\n#output_shape -> (batch_size, steps, units)\n
\n\n

Understanding stateful = True

\n\n

Now comes one of the possible usages of stateful=True (besides avoiding loading data that can't fit in your computer's memory at once).

\n\n

Stateful allows us to input \"parts\" of the sequences in stages. The difference is:

\n\n\n\n

It's like dividing the sequences in windows too, with these two main differences:

\n\n\n\n

In stateful=True, every new batch will be interpreted as continuing the previous batch (until you call model.reset_states()).

\n\n\n\n

Example of inputs, batch 1 contains steps 1 and 2, batch 2 contains steps 3 to 5:

\n\n
                   BATCH 1                           BATCH 2\n        [     Step1      Step2        |    [    Step3      Step4      Step5\nTank A:    [[Pa1,Ta1], [Pa2,Ta2],     |       [Pa3,Ta3], [Pa4,Ta4], [Pa5,Ta5]],\nTank B:    [[Pb1,Tb1], [Pb2,Tb2],     |       [Pb3,Tb3], [Pb4,Tb4], [Pb5,Tb5]],\n  ....                                |\nTank N:    [[Pn1,Tn1], [Pn2,Tn2],     |       [Pn3,Tn3], [Pn4,Tn4], [Pn5,Tn5]],\n        ]                                  ]\n
\n\n

Notice the alignment of tanks in batch 1 and batch 2! That's why we need shuffle=False (unless we are using only one sequence, of course).

\n\n

You can have any number of batches, indefinitely. (For having variable lengths in each batch, use input_shape=(None,features).)

\n\n

One to many with stateful=True

\n\n

For our case here, we are going to use only 1 step per batch, because we want to get one output step and feed it back as an input.

\n\n

Please notice that the behavior in the picture is not \"caused by\" stateful=True. We will force that behavior in a manual loop below. In this example, stateful=True is what \"allows\" us to stop the sequence, manipulate what we want, and continue from where we stopped.

\n\n

\"OneToManyStateful\"

\n\n

Honestly, the repeat approach is probably a better choice for this case. But since we're looking into stateful=True, this is a good example. The best way to use this is the next \"many to many\" case.

\n\n

Layer:

\n\n
outputs = LSTM(units=features, \n               stateful=True, \n               return_sequences=True, #just to keep a nice output shape even with length 1\n               input_shape=(None,features))(inputs) \n    #units = features because we want to use the outputs as inputs\n    #None because we want variable length\n\n#output_shape -> (batch_size, steps, units) \n
\n\n

Now, we're going to need a manual loop for predictions:

\n\n
input_data = someDataWithShape((batch, 1, features))\n\n#important, we're starting new sequences, not continuing old ones:\nmodel.reset_states()\n\noutput_sequence = []\nlast_step = input_data\nfor i in steps_to_predict:\n\n    new_step = model.predict(last_step)\n    output_sequence.append(new_step)\n    last_step = new_step\n\n #end of the sequences\n model.reset_states()\n
\n\n

Many to many with stateful=True

\n\n

Now, here, we get a very nice application: given an input sequence, try to predict its future unknown steps.

\n\n

We're using the same method as in the \"one to many\" above, with the difference that:

\n\n\n\n

\"ManyToManyStateful\"

\n\n

Layer (same as above):

\n\n
outputs = LSTM(units=features, \n               stateful=True, \n               return_sequences=True, \n               input_shape=(None,features))(inputs) \n    #units = features because we want to use the outputs as inputs\n    #None because we want variable length\n\n#output_shape -> (batch_size, steps, units) \n
\n\n

Training:

\n\n

We are going to train our model to predict the next step of the sequences:

\n\n
totalSequences = someSequencesShaped((batch, steps, features))\n    #batch size is usually 1 in these cases (often you have only one Tank in the example)\n\nX = totalSequences[:,:-1] #the entire known sequence, except the last step\nY = totalSequences[:,1:] #one step ahead of X\n\n#loop for resetting states at the start/end of the sequences:\nfor epoch in range(epochs):\n    model.reset_states()\n    model.train_on_batch(X,Y)\n
\n\n

Predicting:

\n\n

The first stage of our predicting involves \"adjusting the states\". That's why we're going to predict the entire sequence again, even if we already know this part of it:

\n\n
model.reset_states() #starting a new sequence\npredicted = model.predict(totalSequences)\nfirstNewStep = predicted[:,-1:] #the last step of the predictions is the first future step\n
\n\n

Now we go to the loop as in the one to many case. But don't reset states here! We want the model to know which step of the sequence it is at (and it knows it's at the first new step because of the prediction we just made above).

\n\n
output_sequence = [firstNewStep]\nlast_step = firstNewStep\nfor i in steps_to_predict:\n\n    new_step = model.predict(last_step)\n    output_sequence.append(new_step)\n    last_step = new_step\n\n #end of the sequences\n model.reset_states()\n
\n\n

This approach was used in these answers and file:

\n\n\n\n

Achieving complex configurations

\n\n

In all examples above, I showed the behavior of \"one layer\".

\n\n

You can, of course, stack many layers on top of each other, not necessarily all following the same pattern, and create your own models.

\n\n

One interesting example that has been appearing is the \"autoencoder\" that has a \"many to one encoder\" followed by a \"one to many\" decoder:

\n\n

Encoder:

\n\n
inputs = Input((steps,features))\n\n#a few many to many layers:\noutputs = LSTM(hidden1,return_sequences=True)(inputs)\noutputs = LSTM(hidden2,return_sequences=True)(outputs)    \n\n#many to one layer:\noutputs = LSTM(hidden3)(outputs)\n\nencoder = Model(inputs,outputs)\n
\n\n

Decoder:

\n\n

Using the \"repeat\" method:

\n\n
inputs = Input((hidden3,))\n\n#repeat to make one to many:\noutputs = RepeatVector(steps)(inputs)\n\n#a few many to many layers:\noutputs = LSTM(hidden4,return_sequences=True)(outputs)\n\n#last layer\noutputs = LSTM(features,return_sequences=True)(outputs)\n\ndecoder = Model(inputs,outputs)\n
\n\n

Autoencoder:

\n\n
inputs = Input((steps,features))\noutputs = encoder(inputs)\noutputs = decoder(outputs)\n\nautoencoder = Model(inputs,outputs)\n
\n\n

Train with fit(X,X)

\n\n

Additional explanations

\n\n

If you want details about how steps are calculated in LSTMs, or details about the stateful=True cases above, you can read more in this answer: Doubts regarding `Understanding Keras LSTMs`

\n", "system": "" }, { "instruction": "Why binary_crossentropy and categorical_crossentropy give different performances for the same problem?", "input": "", "output": "

The reason for this apparent performance discrepancy between categorical & binary cross entropy is what user xtof54 has already reported in his answer below, i.e.:

\n
\n

the accuracy computed with the Keras method evaluate is just plain\nwrong when using binary_crossentropy with more than 2 labels

\n
\n

I would like to elaborate more on this, demonstrate the actual underlying issue, explain it, and offer a remedy.

\n

This behavior is not a bug; the underlying reason is a rather subtle & undocumented issue in how Keras actually guesses which accuracy to use, depending on the loss function you have selected, when you include simply metrics=['accuracy'] in your model compilation. In other words, while your first compilation option

\n\n
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

is valid, your second one:

\n
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

will not produce what you expect, but the reason is not the use of binary cross entropy (which, at least in principle, is an absolutely valid loss function).

\n

Why is that? If you check the metrics source code, Keras does not define a single accuracy metric, but several different ones, among them binary_accuracy and categorical_accuracy. What happens under the hood is that, since you have selected binary cross entropy as your loss function and have not specified a particular accuracy metric, Keras (wrongly...) infers that you are interested in the binary_accuracy, and this is what it returns - while in fact you are interested in the categorical_accuracy.
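To make the distinction concrete, here are simplified plain-Python re-implementations of the two metrics (sketches, not Keras's actual code), applied to one-hot targets:

```python
def binary_accuracy(y_true, y_pred):
    """Element-wise: threshold each prediction at 0.5 and compare."""
    hits = [int(t == round(p)) for row_t, row_p in zip(y_true, y_pred)
            for t, p in zip(row_t, row_p)]
    return sum(hits) / len(hits)

def categorical_accuracy(y_true, y_pred):
    """Row-wise: compare the argmax of each prediction row."""
    argmax = lambda row: max(range(len(row)), key=row.__getitem__)
    hits = [int(argmax(t) == argmax(p)) for t, p in zip(y_true, y_pred)]
    return sum(hits) / len(hits)

# One-hot targets for 2 samples over 4 classes; the 2nd prediction is wrong:
y_true = [[0, 1, 0, 0], [0, 0, 1, 0]]
y_pred = [[0.1, 0.8, 0.05, 0.05], [0.6, 0.1, 0.2, 0.1]]

print(categorical_accuracy(y_true, y_pred))  # 0.5  (1 of 2 samples correct)
print(binary_accuracy(y_true, y_pred))       # 0.75 (6 of 8 entries "correct")
```

With one-hot targets, binary accuracy rewards every correct 0 entry, which is exactly why the reported score is inflated relative to the accuracy you actually care about.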

\n

Let's verify that this is the case, using the MNIST CNN example in Keras, with the following modification:

\n
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])  # WRONG way\n\nmodel.fit(x_train, y_train,\n          batch_size=batch_size,\n          epochs=2,  # only 2 epochs, for demonstration purposes\n          verbose=1,\n          validation_data=(x_test, y_test))\n\n# Keras reported accuracy:\nscore = model.evaluate(x_test, y_test, verbose=0) \nscore[1]\n# 0.9975801164627075\n\n# Actual accuracy calculated manually:\nimport numpy as np\ny_pred = model.predict(x_test)\nacc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000\nacc\n# 0.98780000000000001\n\nscore[1]==acc\n# False    \n
\n

To remedy this, i.e. to use indeed binary cross entropy as your loss function (as I said, nothing wrong with this, at least in principle) while still getting the categorical accuracy required by the problem at hand, you should ask explicitly for categorical_accuracy in the model compilation as follows:

\n
from keras.metrics import categorical_accuracy\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=[categorical_accuracy])\n
\n

In the MNIST example, after training, scoring, and predicting the test set as I show above, the two metrics now are the same, as they should be:

\n
# Keras reported accuracy:\nscore = model.evaluate(x_test, y_test, verbose=0) \nscore[1]\n# 0.98580000000000001\n\n# Actual accuracy calculated manually:\ny_pred = model.predict(x_test)\nacc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000\nacc\n# 0.98580000000000001\n\nscore[1]==acc\n# True    \n
\n

System setup:

\n
Python version 3.5.3\nTensorflow version 1.2.1\nKeras version 2.0.4\n
\n

UPDATE: After my post, I discovered that this issue had already been identified in this answer.

\n", "system": "" }, { "instruction": "Where do I call the BatchNormalization function in Keras?", "input": "", "output": "

As Pavel said, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture.

\n

The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it here

\n

In your case above, this might look like:

\n
# import BatchNormalization\nfrom keras.layers.normalization import BatchNormalization\n\n# instantiate model\nmodel = Sequential()\n\n# we can think of this chunk as the input layer\nmodel.add(Dense(64, input_dim=14, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\n\n# we can think of this chunk as the hidden layer    \nmodel.add(Dense(64, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\n\n# we can think of this chunk as the output layer\nmodel.add(Dense(2, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('softmax'))\n\n# setting up the optimization of our weights \nsgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='binary_crossentropy', optimizer=sgd)\n\n# running the fitting\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)\n
\n", "system": "" }, { "instruction": "What is the use of verbose in Keras while validating the model?", "input": "", "output": "

Check documentation for model.fit here.

\n\n

By setting verbose to 0, 1, or 2 you just choose how you want to 'see' the training progress for each epoch.

\n\n

verbose=0 will show you nothing (silent)

\n\n

verbose=1 will show you an animated progress bar like this:

\n\n

\"progres_bar\"

\n\n

verbose=2 will just mention the number of epoch like this:

\n\n

\"enter

\n", "system": "" }, { "instruction": "How to fix 'Object arrays cannot be loaded when allow_pickle=False' for imdb.load_data() function?", "input": "", "output": "

Here's a trick to force imdb.load_data to allow pickle: in your notebook, replace this line:

\n\n
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)\n
\n\n

with this:

\n\n
import numpy as np\n# save np.load\nnp_load_old = np.load\n\n# modify the default parameters of np.load\nnp.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)\n\n# call load_data with allow_pickle implicitly set to true\n(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)\n\n# restore np.load for future normal usage\nnp.load = np_load_old\n
\n", "system": "" }, { "instruction": "Many to one and many to many LSTM examples in Keras", "input": "", "output": "\n

So:

\n
    \n
  1. One-to-one: you could use a Dense layer as you are not processing sequences:

    \n
    model.add(Dense(output_size, input_shape=input_shape))\n
    \n
  2. \n
  3. One-to-many: this option is not supported well as chaining models is not very easy in Keras, so the following version is the easiest one:

    \n
    model.add(RepeatVector(number_of_times, input_shape=input_shape))\nmodel.add(LSTM(output_size, return_sequences=True))\n
    \n
  4. \n
  5. Many-to-one: actually, your code snippet is (almost) an example of this approach:

    \n
    model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim)))\n
    \n
  6. \n
  7. Many-to-many: This is the easiest snippet when the length of the input and output matches the number of recurrent steps:

    \n
    model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))\n
    \n
  8. \n
  9. Many-to-many when the number of steps differs from the input/output length: this is freaky hard in Keras; there are no easy code snippets for it.

    \n
  10. \n
\n

EDIT: Ad 5

\n

In one of my recent applications, we implemented something which might be similar to many-to-many from the 4th image. In case you want to have a network with the following architecture (when an input is longer than the output):

\n
                                        O O O\n                                        | | |\n                                  O O O O O O\n                                  | | | | | | \n                                  O O O O O O\n
\n

You could achieve this in the following manner:

\n
model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))\nmodel.add(Lambda(lambda x: x[:, -N:, :])) #Select last N from output\n
\n

Where N is the number of last steps you want to cover (on image N = 3).

\n

From this point getting to:

\n
                                        O O O\n                                        | | |\n                                  O O O O O O\n                                  | | | \n                                  O O O \n
\n

is as simple as artificially padding the sequences to length N, e.g. with 0 vectors, in order to adjust them to an appropriate size.
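Such padding can be sketched in plain Python (illustrative; pre-padding and the helper name are my choice):

```python
def pad_front(seq, target_len, features):
    """Prepend zero vectors until the sequence has target_len steps.
    Note: the padding rows share identity, which is fine because
    we never mutate them."""
    padding = [[0] * features] * (target_len - len(seq))
    return padding + seq

seq = [[1, 1], [2, 2], [3, 3]]  # 3 steps, 2 features
padded = pad_front(seq, 6, 2)   # adjust to 6 steps

print(len(padded))  # 6
print(padded[0])    # [0, 0]
print(padded[-1])   # [3, 3]
```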

\n", "system": "" }, { "instruction": "How do I use the Tensorboard callback of Keras?", "input": "", "output": "
keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0,  \n          write_graph=True, write_images=True)\n
\n

This line creates a Callback Tensorboard object, you should capture that object and give it to the fit function of your model.

\n
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)\n...\nmodel.fit(...inputs and parameters..., callbacks=[tbCallBack])\n
\n

This way you gave your callback object to the function. It will be run during the training and will output files that can be used with tensorboard.

\n

If you want to visualize the files created during training, run in your terminal

\n
tensorboard --logdir path_to_current_dir/Graph \n
\n", "system": "" }, { "instruction": "NaN loss when training regression network", "input": "", "output": "

Regression with neural networks is hard to get working because the output is unbounded, so you are especially prone to the exploding gradients problem (the likely cause of the nans).

\n\n

Historically, one key solution to exploding gradients was to reduce the learning rate, but with the advent of per-parameter adaptive learning rate algorithms like Adam, you no longer need to set a learning rate to get good performance. There is very little reason to use SGD with momentum anymore unless you're a neural network fiend and know how to tune the learning schedule.

\n\n

Here are some things you could potentially try:

\n\n
    \n
  1. Normalize your outputs by quantile normalizing or z scoring. To be rigorous, compute this transformation on the training data, not on the entire dataset. For example, with quantile normalization, if an example is in the 60th percentile of the training set, it gets a value of 0.6. (You can also shift the quantile normalized values down by 0.5 so that the 0th percentile is -0.5 and the 100th percentile is +0.5).

  2. \n
  3. Add regularization, either by increasing the dropout rate or adding L1 and L2 penalties to the weights. L1 regularization is analogous to feature selection, and since you said that reducing the number of features to 5 gives good performance, L1 may also.

  4. \n
  5. If these still don't help, reduce the size of your network. This is not always the best idea since it can harm performance, but in your case you have a large number of first-layer neurons (1024) relative to input features (35) so it may help.

  6. \n
  7. Increase the batch size from 32 to 128. 128 is fairly standard and could potentially increase the stability of the optimization.

  8. \n
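Point 1 above can be sketched with a z-score fitted on the training data only (a plain-Python illustration; the names are mine):

```python
from statistics import mean, stdev

def fit_zscore(train_values):
    """Compute mean/std on the *training* data only, then return a
    transform that reuses those statistics everywhere (no leakage)."""
    mu, sigma = mean(train_values), stdev(train_values)
    return lambda xs: [(x - mu) / sigma for x in xs]

train_y = [10.0, 12.0, 14.0, 16.0, 18.0]
test_y = [14.0, 20.0]

transform = fit_zscore(train_y)
print(transform(train_y))  # centered around 0
print(transform(test_y))   # same training statistics reused
```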
\n", "system": "" }, { "instruction": "How to load a model from an HDF5 file in Keras?", "input": "", "output": "

If you stored the complete model, not only the weights, in the HDF5 file, then it is as simple as

\n\n
from keras.models import load_model\nmodel = load_model('model.h5')\n
\n", "system": "" }, { "instruction": "keras: how to save the training history attribute of the history object", "input": "", "output": "

What I use is the following:

\n
with open('/trainHistoryDict', 'wb') as file_pi:\n    pickle.dump(history.history, file_pi)\n
\n

In this way I save the history as a dictionary in case I want to plot the loss or accuracy later on. Later, when you want to load the history again, you can use:

\n
with open('/trainHistoryDict', "rb") as file_pi:\n    history = pickle.load(file_pi)\n
\n

Why choose pickle over json?

\n

The comment under this answer accurately states:

\n
\n

[Storing the history as json] does not work anymore in tensorflow keras. I had issues with: TypeError: Object of type 'float32' is not JSON serializable.

\n
\n

There are ways to tell json how to encode numpy objects, which you can learn about from this other question, so there's nothing wrong with using json in this case, it's just more complicated than simply dumping to a pickle file.
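For instance, a default= hook that casts anything float-convertible (as numpy's float32 is) works. Sketched here with a stand-in class so the example runs without numpy:

```python
import json

class FakeFloat32:
    """Stand-in for np.float32: not JSON-serializable, but castable."""
    def __init__(self, v):
        self.v = v
    def __float__(self):
        return self.v

history = {"loss": [FakeFloat32(0.25), FakeFloat32(0.125)]}

# Without the hook, json.dumps raises TypeError; with it, json calls
# default on each non-serializable value and serializes the result:
print(json.dumps(history, default=float))  # {"loss": [0.25, 0.125]}
```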

\n", "system": "" }, { "instruction": "What is the role of TimeDistributed layer in Keras?", "input": "", "output": "

In keras - while building a sequential model - usually the second dimension (one after sample dimension) - is related to a time dimension. This means that if for example, your data is 5-dim with (sample, time, width, length, channel) you could apply a convolutional layer using TimeDistributed (which is applicable to 4-dim with (sample, width, length, channel)) along a time dimension (applying the same layer to each time slice) in order to obtain 5-d output.

\n\n

The case with Dense is that in keras from version 2.0 Dense is by default applied to only last dimension (e.g. if you apply Dense(10) to input with shape (n, m, o, p) you'll get output with shape (n, m, o, 10)) so in your case Dense and TimeDistributed(Dense) are equivalent.
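The shape rule can be sketched as pure shape arithmetic (an illustration of the rule stated above, not Keras code):

```python
def dense_output_shape(input_shape, units):
    """Dense (Keras >= 2.0) maps only the last dimension to `units`."""
    return input_shape[:-1] + (units,)

def time_distributed_dense_output_shape(input_shape, units):
    """TimeDistributed(Dense) applies Dense to every time slice:
    same result, since Dense already leaves leading dims untouched."""
    n, *inner = input_shape
    return (n,) + dense_output_shape(tuple(inner), units)

shape = ("n", "m", "o", "p")  # symbolic dims, as in the text
print(dense_output_shape(shape, 10))                   # ('n', 'm', 'o', 10)
print(time_distributed_dense_output_shape(shape, 10))  # same tuple
```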

\n", "system": "" }, { "instruction": "How to concatenate two layers in keras?", "input": "", "output": "

You're getting the error because result, defined as Sequential(), is just a container for the model and you have not defined an input for it.

\n\n

Given what you're trying to build, set result to take the third input x3.

\n
first = Sequential()\nfirst.add(Dense(1, input_shape=(2,), activation='sigmoid'))\n\nsecond = Sequential()\nsecond.add(Dense(1, input_shape=(1,), activation='sigmoid'))\n\nthird = Sequential()\n# of course you must provide the input to result which will be your x3\nthird.add(Dense(1, input_shape=(1,), activation='sigmoid'))\n\n# lets say you add a few more layers to first and second.\n# concatenate them\nmerged = Concatenate([first, second])\n\n# then concatenate the two outputs\n\nresult = Concatenate([merged,  third])\n\nada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)\n\nresult.compile(optimizer=ada_grad, loss='binary_crossentropy',\n               metrics=['accuracy'])\n
\n

However, my preferred way of building a model that has this type of input structure would be to use the functional api.

\n

Here is an implementation of your requirements to get you started:

\n
from keras.models import Model\nfrom keras.layers import Concatenate, Dense, LSTM, Input, concatenate\nfrom keras.optimizers import Adagrad\n\nfirst_input = Input(shape=(2, ))\nfirst_dense = Dense(1, )(first_input)\n\nsecond_input = Input(shape=(2, ))\nsecond_dense = Dense(1, )(second_input)\n\nmerge_one = concatenate([first_dense, second_dense])\n\nthird_input = Input(shape=(1, ))\nmerge_two = concatenate([merge_one, third_input])\n\nmodel = Model(inputs=[first_input, second_input, third_input], outputs=merge_two)\nada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)\nmodel.compile(optimizer=ada_grad, loss='binary_crossentropy',\n               metrics=['accuracy'])\n
\n

To answer the question in the comments:

\n
    \n
  1. How are result and merged connected? Assuming you mean how are they concatenated.
  2. \n
\n

Concatenation works like this:

\n
  a        b         c\na b c   g h i    a b c g h i\nd e f   j k l    d e f j k l\n
\n

i.e. the rows are just joined side by side (concatenation along the last axis).
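The same joining can be sketched in plain Python (illustration only, with made-up values):

```python
# Concatenating two (2 x 3) blocks along the last axis: each row of the
# result is the corresponding rows joined side by side.
a = [[1, 2, 3], [4, 5, 6]]
b = [[7, 8, 9], [10, 11, 12]]

merged = [row_a + row_b for row_a, row_b in zip(a, b)]
print(merged)  # -> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
```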

\n
    \n
  1. Now, x1 is input to first, x2 is input into second and x3 input into third.
  2. \n
\n", "system": "" }, { "instruction": "What does Keras Tokenizer method exactly do?", "input": "", "output": "

From the source code:

\n\n
    \n
1. fit_on_texts updates the internal vocabulary based on a list of texts. This method creates the vocabulary index based on word frequency. So if you give it something like \"The cat sat on the mat.\", it will create a dictionary such that word_index[\"the\"] = 1; word_index[\"cat\"] = 2. It is a word -> index dictionary, so every word gets a unique integer value; 0 is reserved for padding. A lower integer means a more frequent word (often the first few are stop words because they appear a lot).
  2. \n
  3. texts_to_sequences Transforms each text in texts to a sequence of integers. So it basically takes each word in the text and replaces it with its corresponding integer value from the word_index dictionary. Nothing more, nothing less, certainly no magic involved.
  4. \n
\n\n

Why not combine them? Because you almost always fit once and convert to sequences many times. You will fit on your training corpus once and use that exact same word_index dictionary at train / eval / test / prediction time to convert actual text into sequences to feed to the network. So it makes sense to keep those methods separate.
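The behaviour can be sketched in plain Python with a toy TinyTokenizer (an illustration only; the real Tokenizer additionally handles lowercasing of punctuation-filtered text, num_words, OOV tokens, etc.):

```python
from collections import Counter

class TinyTokenizer:
    """Toy sketch of keras Tokenizer's two core methods."""

    def __init__(self):
        self.word_index = {}

    def fit_on_texts(self, texts):
        # count word frequencies over the whole corpus
        counts = Counter(w for t in texts for w in t.lower().split())
        # most frequent word gets index 1; 0 is reserved for padding
        for i, (word, _) in enumerate(counts.most_common(), start=1):
            self.word_index[word] = i

    def texts_to_sequences(self, texts):
        # replace each known word with its integer index
        return [[self.word_index[w] for w in t.lower().split()
                 if w in self.word_index] for t in texts]

tk = TinyTokenizer()
tk.fit_on_texts(["the cat sat on the mat"])
print(tk.word_index["the"])                # -> 1 (most frequent word)
print(tk.texts_to_sequences(["the cat"]))  # -> [[1, 2]]
```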

\n", "system": "" }, { "instruction": "Which parameters should be used for early stopping?", "input": "", "output": "

\"early

\n\n

Early stopping is basically stopping the training once your loss starts to increase (or, in other words, once your validation accuracy starts to decrease). According to the documentation, it is used as follows:

\n\n
keras.callbacks.EarlyStopping(monitor='val_loss',\n                              min_delta=0,\n                              patience=0,\n                              verbose=0, mode='auto')\n
\n\n

Values depend on your implementation (problem, batch size, etc.), but generally, to prevent overfitting, I would use:

\n\n
    \n
1. Monitor the validation loss (you need to use cross-validation, or at least train/test sets) by setting the monitor argument to 'val_loss'.
  2. \n
3. min_delta is a threshold for whether to quantify the loss at some epoch as an improvement or not. If the difference in loss is below min_delta, it is quantified as no improvement. It's better to leave it at 0, since we're interested in when the loss becomes worse.
  4. \n
5. patience represents the number of epochs to wait once your loss stops improving before stopping. This depends on your implementation: if you use very small batches or a large learning rate, your loss will zig-zag (and accuracy will be noisier), so it's better to set a larger patience. If you use large batches and a small learning rate, your loss will be smoother, so you can use a smaller patience. Either way, I'll leave it at 2 to give the model more of a chance.
  6. \n
  7. verbose decides what to print, leave it at default (0).
  8. \n
9. mode depends on the direction of your monitored quantity (is it supposed to be decreasing or increasing?); since we monitor the loss, we can use min. But let's let Keras handle that for us and set it to auto.
  10. \n
\n\n

So I would use something like this, and experiment by plotting the loss with and without early stopping.

\n\n
keras.callbacks.EarlyStopping(monitor='val_loss',\n                              min_delta=0,\n                              patience=2,\n                              verbose=0, mode='auto')\n
\n\n
\n\n

For possible ambiguity on how callbacks work, I'll try to explain more. Once you call fit(..., callbacks=[es]) on your model, Keras calls the given callback objects' predetermined functions: on_train_begin, on_train_end, on_epoch_begin, on_epoch_end, and on_batch_begin, on_batch_end. The early stopping callback is called at the end of every epoch; it compares the best monitored value with the current one and stops if the conditions are met (how many epochs have passed since the best monitored value was observed and whether it exceeds the patience argument, whether the difference from the last value is bigger than min_delta, etc.).
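That bookkeeping can be sketched in plain Python (an illustration of 'min' mode only, not the actual Keras source; exact off-by-one details vary between Keras versions):

```python
# Toy version of the EarlyStopping logic in 'min' mode: an epoch counts
# as an improvement only if it beats the best loss by more than min_delta.
def stopped_epoch(losses, min_delta=0.0, patience=2):
    """Return the epoch at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best - min_delta:
            best = loss
            wait = 0          # improvement: reset the patience counter
        else:
            wait += 1
            if wait > patience:
                return epoch  # patience exhausted
    return None

print(stopped_epoch([1.0, 0.8, 0.7, 0.71, 0.72, 0.73]))  # -> 5
```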

\n\n

As pointed out by @BrentFaust in the comments, the model's training will continue until either the early stopping conditions are met or the epochs parameter (default=1) in fit() is exhausted. Setting an early stopping callback will not make the model train beyond its epochs parameter, so calling fit() with a larger epochs value benefits more from the early stopping callback.

\n", "system": "" }, { "instruction": "Why doesn't plt.imshow() display the image?", "input": "", "output": "

The solution was as simple as adding plt.show() at the end of the code snippet:

\n\n
import numpy as np\nnp.random.seed(123)\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras.datasets import mnist\n(X_train,y_train),(X_test,y_test) = mnist.load_data()\nprint(X_train.shape)\nfrom matplotlib import pyplot as plt\nplt.imshow(X_train[0])\nplt.show()\n
\n", "system": "" }, { "instruction": "How to get reproducible results in keras", "input": "", "output": "

You can find the answer at the Keras docs: https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development.

\n\n

In short, to be absolutely sure that you will get reproducible results with your python script on one computer's/laptop's CPU, you will have to do the following:

\n\n
    \n
  1. Set the PYTHONHASHSEED environment variable at a fixed value
  2. \n
  3. Set the python built-in pseudo-random generator at a fixed value
  4. \n
  5. Set the numpy pseudo-random generator at a fixed value
  6. \n
  7. Set the tensorflow pseudo-random generator at a fixed value
  8. \n
  9. Configure a new global tensorflow session
  10. \n
\n\n

Following the Keras link at the top, the source code I am using is the following:

\n\n
# Seed value\n# Apparently you may use different seed values at each stage\nseed_value= 0\n\n# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value\nimport os\nos.environ['PYTHONHASHSEED']=str(seed_value)\n\n# 2. Set the `python` built-in pseudo-random generator at a fixed value\nimport random\nrandom.seed(seed_value)\n\n# 3. Set the `numpy` pseudo-random generator at a fixed value\nimport numpy as np\nnp.random.seed(seed_value)\n\n# 4. Set the `tensorflow` pseudo-random generator at a fixed value\nimport tensorflow as tf\ntf.random.set_seed(seed_value)\n# for later versions: \n# tf.compat.v1.set_random_seed(seed_value)\n\n# 5. Configure a new global `tensorflow` session\nfrom keras import backend as K\nsession_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\nsess = tf.Session(graph=tf.get_default_graph(), config=session_conf)\nK.set_session(sess)\n# for later versions:\n# session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\n# sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)\n# tf.compat.v1.keras.backend.set_session(sess)\n
\n\n

It goes without saying that you do not have to specify any seed or random_state for the numpy, scikit-learn or tensorflow/keras functions that you use in your python script, precisely because with the source code above we set their pseudo-random generators globally to a fixed value.

\n", "system": "" }, { "instruction": "Keras, how do I predict after I trained a model?", "input": "", "output": "

model.predict() expects the first parameter to be a numpy array. You supply a list, which does not have the shape attribute that a numpy array has.

\n\n

Otherwise your code looks fine, except that you are doing nothing with the prediction. Make sure you store it in a variable, for example like this:

\n\n
prediction = model.predict(np.array(tk.texts_to_sequences(text)))\nprint(prediction)\n
\n", "system": "" }, { "instruction": "What is an Embedding in Keras?", "input": "", "output": "

As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.

\n\n

The weights of the Embedding layer are of the shape (vocabulary_size, embedding_dimension). For each training sample, its input are integers, which represent certain words. The integers are in the range of the vocabulary size. The Embedding layer transforms each integer i into the ith line of the embedding weights matrix.

\n\n

In order to quickly do this as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size) with one non-zero value per line. If you multiply this by the embedding weights, you get the output in the shape

\n\n
(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)\n
\n\n

So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.
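To make the equivalence concrete, here is a small pure-Python sketch with made-up numbers (vocab_size=4, embedding_dim=2): multiplying a one-hot row by the weight matrix gives exactly the corresponding row of that matrix.

```python
# Embedding lookup as a one-hot x weight-matrix product.
# weights row i is the embedding vector of word i.
weights = [[0.0, 0.1], [1.0, 1.1], [2.0, 2.1], [3.0, 3.1]]

def one_hot(i, size):
    return [1.0 if j == i else 0.0 for j in range(size)]

def matvec(row, matrix):
    # (1 x vocab) times (vocab x dim) -> (1 x dim)
    return [sum(r * matrix[k][d] for k, r in enumerate(row))
            for d in range(len(matrix[0]))]

word_ids = [2, 0, 3]  # a sample of nb_words integer-encoded words
embedded = [matvec(one_hot(i, 4), weights) for i in word_ids]
print(embedded)  # -> [[2.0, 2.1], [0.0, 0.1], [3.0, 3.1]]

# identical to simply selecting rows of the weight matrix:
assert embedded == [weights[i] for i in word_ids]
```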

\n", "system": "" }, { "instruction": "How to check which version of Keras is installed?", "input": "", "output": "

Python library authors put the version number in <module>.__version__. You can print it by running this on the command line:

\n\n
python -c 'import keras; print(keras.__version__)'\n
\n\n

If it's a Windows terminal, enclose the snippet in double quotes, as below:

\n\n
python -c \"import keras; print(keras.__version__)\"\n
\n", "system": "" }, { "instruction": "Keras: Difference between Kernel and Activity regularizers", "input": "", "output": "

The activity regularizer works as a function of the output of the net, and is mostly used to regularize hidden units, while weight_regularizer, as the name says, works on the weights (e.g. making them decay). Basically you can express the regularization loss as a function of the output (activity_regularizer) or of the weights (weight_regularizer).

\n

The new kernel_regularizer replaces weight_regularizer - although it's not very clear from the documentation.

\n

From the definition of kernel_regularizer:

\n
\n

kernel_regularizer: Regularizer function applied to\nthe kernel weights matrix\n(see regularizer).

\n
\n

And activity_regularizer:

\n
\n

activity_regularizer: Regularizer function applied to\nthe output of the layer (its "activation").\n(see regularizer).

\n
\n

Important Edit: Note that there is a bug in the activity_regularizer that was only fixed in version 2.1.4 of Keras (at least with the Tensorflow backend). Indeed, in older versions, the activity regularizer function was applied to the input of the layer, instead of to the output (the actual activations of the layer, as intended). So beware: if you are using an older version of Keras (before 2.1.4), activity regularization may not work as intended.

\n

You can see the commit on GitHub

\n

Five months ago Fran\u00e7ois Chollet provided a fix to the activity regularizer, which was then included in Keras 2.1.4.

\n", "system": "" }, { "instruction": "What's the difference between a bidirectional LSTM and an LSTM?", "input": "", "output": "

An LSTM, at its core, preserves information from inputs that have already passed through it, using the hidden state.

\n\n

A unidirectional LSTM only preserves information from the past, because the only inputs it has seen are from the past.

\n\n

Using a bidirectional LSTM will run your inputs in two ways: one from past to future and one from future to past. What distinguishes this approach from a unidirectional one is that, in the LSTM that runs backwards, you preserve information from the future; using the two hidden states combined, you are able at any point in time to preserve information from both past and future.

\n\n

What they are suited for is a very complicated question, but BiLSTMs show very good results because they can understand context better. I will try to explain through an example.

\n\n

Let's say we try to predict the next word in a sentence. On a high level, what a unidirectional LSTM will see is:

\n\n
\n

The boys went to ....

\n
\n\n

And will try to predict the next word only by this context, with bidirectional LSTM you will be able to see information further down the road for example

\n\n

Forward LSTM:

\n\n
\n

The boys went to ...

\n
\n\n

Backward LSTM:

\n\n
\n

... and then they got out of the pool

\n
\n\n

You can see that using the information from the future it could be easier for the network to understand what the next word is.

\n", "system": "" }, { "instruction": "Does Any one got "AttributeError: 'str' object has no attribute 'decode' " , while Loading a Keras Saved Model", "input": "", "output": "

For me the solution was downgrading the h5py package (in my case to 2.10.0), apparently putting back only Keras and Tensorflow to the correct versions was not enough.

\n", "system": "" }, { "instruction": "What is validation data used for in a Keras Sequential model?", "input": "", "output": "

If you want to build a solid model you have to follow that specific protocol of splitting your data into three sets: One for training, one for validation and one for final evaluation, which is the test set.

\n

The idea is that you train on your training data and tune your model with the results of metrics (accuracy, loss etc) that you get from your validation set.

\n

Your model doesn't "see" your validation set and isn't in any way trained on it, but you as the architect and master of the hyperparameters tune the model according to this data. Therefore it indirectly influences your model because it directly influences your design decisions. You nudge your model to work well with the validation data and that can possibly bring in a tilt.

\n

Exactly that is the reason you only evaluate your model's final score on data that neither your model nor you yourself has used \u2013 and that is the third chunk of data, your test set.

\n

Only this procedure makes sure you get an unaffected view of your model's quality and of its ability to generalize what it has learned to totally unseen data.

\n", "system": "" }, { "instruction": "Keras: the difference between LSTM dropout and LSTM recurrent dropout", "input": "", "output": "

I suggest taking a look at (the first part of) this paper. Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from x_t and to h_t. In your case, if you add it as an argument to your layer, it will mask the inputs; you can add a Dropout layer after your recurrent layer to mask the outputs as well. Recurrent dropout masks (or \"drops\") the connections between the recurrent units; that would be the horizontal arrows in your picture.

\n\n

This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout:

\n\n

\"This

\n\n

(Ignore the colour of the arrows in this case; in the paper they are making a further point of keeping the same dropout masks at each timestep)

\n", "system": "" }, { "instruction": "What is the meaning of axis=-1 in keras.argmax?", "input": "", "output": "

This means that the index that will be returned by argmax will be taken from the last axis.

\n

Your data has some shape (20,19,5,80); I changed the first dimension just to make it clearer. This means that axis 0 has 20 elements, axis 1 has 19, axis 2 has 5, and axis 3 has 80.

\n\n

Now, negative numbers work exactly as in python lists and numpy arrays: they represent the reverse order, so axis -1 is the last axis (here axis 3), axis -2 is axis 2, and so on.

\n\n

When you pass the axis parameter to the argmax function, the indices returned will be based on this axis. Your results will lose this specific axis, but keep the others.

\n

See what shape argmax will return for each axis:
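A small helper (illustrative, not part of Keras or numpy) makes the resulting shapes explicit:

```python
# argmax removes the reduced axis from the shape and keeps the others;
# negative axes are counted from the end, so -1 means the last axis.
def argmax_shape(shape, axis):
    axis = axis % len(shape)
    return shape[:axis] + shape[axis + 1:]

shape = (20, 19, 5, 80)
print(argmax_shape(shape, 0))    # -> (19, 5, 80)
print(argmax_shape(shape, 1))    # -> (20, 5, 80)
print(argmax_shape(shape, -1))   # -> (20, 19, 5), same as axis=3
```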

\n\n", "system": "" }, { "instruction": "What is the difference between loss function and metric in Keras?", "input": "", "output": "

The loss function is used to optimize your model. This is the function that will get minimized by the optimizer.

\n\n

A metric is used to judge the performance of your model. This is only for you to look at and has nothing to do with the optimization process.

\n", "system": "" }, { "instruction": "How to tell Keras stop training based on loss value?", "input": "", "output": "

I found the answer. I looked into the Keras sources and found the code for EarlyStopping. I made my own callback, based on it:

\n\n
import warnings\n\nfrom keras.callbacks import Callback\n\n\nclass EarlyStoppingByLossVal(Callback):\n    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):\n        super(EarlyStoppingByLossVal, self).__init__()\n        self.monitor = monitor\n        self.value = value\n        self.verbose = verbose\n\n    def on_epoch_end(self, epoch, logs={}):\n        current = logs.get(self.monitor)\n        if current is None:\n            warnings.warn(\"Early stopping requires %s available!\" % self.monitor, RuntimeWarning)\n            return  # avoid comparing None against the threshold below\n\n        if current < self.value:\n            if self.verbose > 0:\n                print(\"Epoch %05d: early stopping THR\" % epoch)\n            self.model.stop_training = True\n
\n\n

And usage:

\n\n
callbacks = [\n    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),\n    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),\n    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),\n]\nmodel.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),\n      callbacks=callbacks)\n
\n", "system": "" }, { "instruction": "Get class labels from Keras functional model", "input": "", "output": "
y_prob = model.predict(x) \ny_classes = y_prob.argmax(axis=-1)\n
\n\n

As suggested here.

\n", "system": "" }, { "instruction": "Keras - Plot training, validation and test set accuracy", "input": "", "output": "
import keras\nfrom matplotlib import pyplot as plt\nhistory = model1.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4)\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n
\n

\"Model

\n
plt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n
\n

\"Model

\n", "system": "" }, { "instruction": "How to save final model using keras?", "input": "", "output": "

The model has a save method, which saves all the details necessary to reconstitute the model. An example from the keras documentation:

\n\n
from keras.models import load_model\n\nmodel.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'\ndel model  # deletes the existing model\n\n# returns a compiled model\n# identical to the previous one\nmodel = load_model('my_model.h5')\n
\n", "system": "" }, { "instruction": "How does Keras handle multilabel classification?", "input": "", "output": "

In short

\n

Don't use softmax.

\n

Use sigmoid for activation of your output layer.

\n

Use binary_crossentropy for loss function.

\n

Use predict for evaluation.

\n

Why

\n

In softmax when increasing score for one label, all others are lowered (it's a probability distribution). You don't want that when you have multiple labels.
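The difference is easy to see numerically. This plain-Python sketch (with made-up logits) shows that softmax forces labels to compete, while independent sigmoids let several labels be "on" at once:

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logits = [3.0, 3.0, -3.0]

# softmax outputs form a probability distribution: they must sum to 1,
# so raising one label's score necessarily lowers the others.
assert abs(sum(softmax(logits)) - 1.0) < 1e-9

# sigmoid treats each label independently: both "active" labels can
# simultaneously get a probability close to 1.
probs = [sigmoid(x) for x in logits]
print([round(p, 2) for p in probs])  # -> [0.95, 0.95, 0.05]
```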

\n

Complete Code

\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, Activation\nfrom tensorflow.keras.optimizers import SGD\n\nmodel = Sequential()\nmodel.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))\nmodel.add(Dropout(0.1))\nmodel.add(Dense(600, activation='relu'))\nmodel.add(Dropout(0.1))\nmodel.add(Dense(y_train.shape[1], activation='sigmoid'))\n\nsgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='binary_crossentropy',\n              optimizer=sgd)\n\nmodel.fit(X_train, y_train, epochs=5, batch_size=2000)\n\npreds = model.predict(X_test)\npreds[preds>=0.5] = 1\npreds[preds<0.5] = 0\n# score = compare preds and y_test\n
\n", "system": "" }, { "instruction": "Keras model.summary() result - Understanding the # of Parameters", "input": "", "output": "

The number of parameters is 7850 because every hidden unit has 784 input weights and one bias weight. This means that every hidden unit gives you 785 parameters. You have 10 units, so it sums up to 7850.
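The arithmetic, as a small sketch:

```python
# Dense layer parameter count: one weight per input per unit,
# plus one bias weight per unit.
def dense_param_count(n_inputs, n_units):
    return n_units * (n_inputs + 1)

print(dense_param_count(784, 10))  # -> 7850
```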

\n\n

The role of this additional bias term is really important. It significantly increases the capacity of your model. You can read details e.g. here Role of Bias in Neural Networks.

\n", "system": "" }, { "instruction": "How to add and remove new layers in keras after loading weights?", "input": "", "output": "

You can take the output of the last model and create a new model. The lower layers remain the same.

\n
model.summary()\nmodel.layers.pop()\nmodel.layers.pop()\nmodel.summary()\n\nx = MaxPooling2D()(model.layers[-1].output)\no = Activation('sigmoid', name='loss')(x)\n\nmodel2 = Model(inputs=in_img, outputs=[o])\nmodel2.summary()\n
\n

Check How to use models from keras.applications for transfer learning?

\n

Update on Edit:

\n

The new error is because you are trying to create the new model with the global in_img, which is actually not used in the previous model creation; there you actually define a local in_img. So the global in_img is obviously not connected to the upper layers in the symbolic graph. And it has nothing to do with loading weights.

\n

To better resolve this problem you should instead use model.input to reference to the input.

\n
model3 = Model(inputs=model2.input, outputs=[o])\n
\n", "system": "" }, { "instruction": "Save and load weights in keras", "input": "", "output": "

Here is a YouTube video that explains exactly what you're wanting to do: Save and load a Keras model

\n\n

There are three different saving methods that Keras makes available. These are described in the video link above (with examples), as well as below.

\n\n

First, the reason you're receiving the error is that you're calling load_model incorrectly.

\n\n

To save and load the weights of the model, you would first use

\n\n
model.save_weights('my_model_weights.h5')\n
\n\n

to save the weights, as you've displayed. To load the weights, you would first need to build your model, and then call load_weights on the model, as in

\n\n
model.load_weights('my_model_weights.h5')\n
\n\n

Another saving technique is model.save(filepath). This save function saves:

\n\nthe architecture of the model, the weights of the model, the training configuration (loss, optimizer), and the state of the optimizer, allowing you to resume training exactly where you left off.\n\n

To load this saved model, you would use the following:

\n\n
from keras.models import load_model\nnew_model = load_model(filepath)\n
\n\n

Lastly, model.to_json(), saves only the architecture of the model. To load the architecture, you would use

\n\n
from keras.models import model_from_json\nmodel = model_from_json(json_string)\n
\n", "system": "" }, { "instruction": "ImportError('Could not import PIL.Image. ' working with keras-ternsorflow", "input": "", "output": "

All you need to do is install pillow:

\n\n
pip install pillow\n
\n\n

Then you should be all set. Found this after hours of searching.

\n", "system": "" }, { "instruction": "How does keras handle multiple losses?", "input": "", "output": "

From model documentation:

\n\n
\n

loss: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

\n \n

...

\n \n

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

\n
\n\n

So, yes, the final loss will be the \"weighted sum of all individual losses, weighted by the loss_weights coefficients\".
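As a sketch of that combination rule (illustration only, with made-up loss values):

```python
# total training loss = sum of per-output losses, each scaled by its
# loss_weights coefficient (1.0 for every output when not given).
def total_loss(losses, loss_weights=None):
    if loss_weights is None:
        loss_weights = [1.0] * len(losses)
    return sum(w * l for w, l in zip(loss_weights, losses))

print(total_loss([1.0, 2.0]))              # -> 3.0
print(total_loss([1.0, 2.0], [1.0, 0.5]))  # -> 2.0
```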

\n\n

You can check the code where the loss is calculated.

\n\n
\n

Also, what does it mean during training? Is the loss2 only used to update the weights on layers where y2 comes from? Or is it used for all the model's layers?

\n
\n\n

The weights are updated through backpropagation, so each loss will affect only layers that connect the input to the loss.

\n\n

For example:

\n\n
                        +----+         \n                        > C  |-->loss1 \n                       /+----+         \n                      /                \n                     /                 \n    +----+    +----+/                  \n -->| A  |--->| B  |\\                  \n    +----+    +----+ \\                 \n                      \\                \n                       \\+----+         \n                        > D  |-->loss2 \n                        +----+         \n
\n\nHere, loss1 will affect A, B, and C, while loss2 will affect A, B, and D.\n

The following commands solved the problem for me

\n\n
    \n
  1. pip install pydot
  2. \n
  3. pip install pydotplus
  4. \n
  5. sudo apt-get install graphviz
  6. \n
\n", "system": "" }, { "instruction": "How to return history of validation loss in Keras", "input": "", "output": "

Just an example, starting from

\n\n
history = model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)\n
\n\n

You can use

\n\n
print(history.history.keys())\n
\n\n

to list all data in history.

\n\n

Then, you can print the history of validation loss like this:

\n\n
print(history.history['val_loss'])\n
\n", "system": "" }, { "instruction": "Keras AttributeError: 'Sequential' object has no attribute 'predict_classes'", "input": "", "output": "

This function was removed in TensorFlow version 2.6. According to the RStudio keras reference,

\n

update to

\n
predict_x=model.predict(X_test) \nclasses_x=np.argmax(predict_x,axis=1)\n
\n

Or use TensorFlow 2.5.x .

\n

If you are using TensorFlow version 2.5, you will receive the following warning:

\n
\n

tensorflow\\python\\keras\\engine\\sequential.py:455: UserWarning: model.predict_classes() is deprecated and will be removed after 2021-01-01. Please use instead:* np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation).* (model.predict(x) > 0.5).astype("int32"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation).

\n
\n", "system": "" }, { "instruction": "How to find Number of parameters of a keras model?", "input": "", "output": "

Models and layers have a special method for that purpose:

\n\n
model.count_params()\n
\n\n

Also, to get a short summary of each layer's dimensions and parameters, you might find the following method useful:

\n\n
model.summary()\n
\n", "system": "" }, { "instruction": "Error when checking model input: expected convolution2d_input_1 to have 4 dimensions, but got array with shape (32, 32, 3)", "input": "", "output": "

The input shape you have defined is the shape of a single sample. The model itself expects an array of samples as input (even if it's an array of length 1).

\n

Your input really should be 4-d, with the 1st dimension enumerating the samples; i.e. for a single image you should pass a shape of (1, 32, 32, 3).

\n

You can find more information here under "Convolution2D"/"Input shape"

\n

Edit: Based on Danny's comment below, if you want a batch size of 1, you can add the missing dimension using this:

\n
image = np.expand_dims(image, axis=0)\n
\n", "system": "" }, { "instruction": "ImportError: cannot import name 'adam' from 'keras.optimizers'", "input": "", "output": "

There are two types of modules -

\n
    \n
  1. keras
  2. \n
  3. tensorflow.keras
  4. \n
\n

Here we need to use tensorflow.keras

\n

You need to import Adam (with a capital A) from tensorflow.keras (not plain keras).

\n
from tensorflow.keras.optimizers import Adam\n\nfrom tensorflow.keras.optimizers import Adam # - Works\nfrom tensorflow.keras.optimizers import adam # - Does not work\nfrom keras.optimizers import Adam            # - Does not work\nfrom keras.optimizers import adam            # - Does not work\n
\n", "system": "" }, { "instruction": "what is the default kernel_initializer in keras", "input": "", "output": "

Usually, it's glorot_uniform by default. Different layer types might have different default kernel_initializer. When in doubt, just look in the source code. For example, for Dense layer:

\n\n
class Dense(Layer):\n...\n    def __init__(self, units,\n                 activation=None,\n                 use_bias=True,\n                 kernel_initializer='glorot_uniform',\n                 bias_initializer='zeros',\n                 kernel_regularizer=None,\n                 bias_regularizer=None,\n                 activity_regularizer=None,\n                 kernel_constraint=None,\n                 bias_constraint=None,\n                 **kwargs):\n
\n", "system": "" }, { "instruction": "Saving best model in keras", "input": "", "output": "

EarlyStopping and ModelCheckpoint is what you need from Keras documentation.

\n\n

You should set save_best_only=True in ModelCheckpoint. Any other adjustments needed are trivial.

\n\n

Just to help you more you can see a usage here on Kaggle.

\n\n
\n\n

Adding the code here in case the above Kaggle example link is not available:

\n\n
model = getModel()\nmodel.summary()\n\nbatch_size = 32\n\nearlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')\nmcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')\nreduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')\n\nmodel.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)\n
\n", "system": "" }, { "instruction": "What is the difference between Keras model.evaluate() and model.predict()?", "input": "", "output": "

The model.evaluate function predicts the output for the given input, then computes the metrics function specified in model.compile based on y_true and y_pred, and returns the computed metric value as the output.

\n\n

The model.predict function just returns y_pred.

\n\n

So if you use model.predict and then compute the metrics yourself, the computed metric value should turn out to be the same as that returned by model.evaluate.
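The relationship can be sketched with stand-in functions (hypothetical, not the Keras API): evaluate is just predict plus the metric computation.

```python
def predict(xs):
    """Stand-in for model.predict with a 0.5 decision threshold."""
    return [1 if x > 0.5 else 0 for x in xs]

def evaluate(xs, y_true):
    """Stand-in for model.evaluate computing accuracy from predictions."""
    y_pred = predict(xs)
    return sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

xs = [0.9, 0.2, 0.7, 0.4]
y_true = [1, 0, 0, 0]

# computing the metric from predict's output matches evaluate:
manual = sum(p == t for p, t in zip(predict(xs), y_true)) / len(y_true)
print(manual, evaluate(xs, y_true))  # -> 0.75 0.75
```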

\n\n

For example, one would use model.predict instead of model.evaluate when evaluating RNN/LSTM-based models, where the output needs to be fed back as input at the next time step.

\n", "system": "" }, { "instruction": "Keras Text Preprocessing - Saving Tokenizer object to file for scoring", "input": "", "output": "

The most common way is to use either pickle or joblib. Here you have an example on how to use pickle in order to save Tokenizer:

\n\n
import pickle\n\n# saving\nwith open('tokenizer.pickle', 'wb') as handle:\n    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n# loading\nwith open('tokenizer.pickle', 'rb') as handle:\n    tokenizer = pickle.load(handle)\n
\n", "system": "" }, { "instruction": "How do I install Keras and Theano in Anaconda Python on Windows?", "input": "", "output": "

Here is my solution to the same problem:

\n\n\n", "system": "" }, { "instruction": "How to log Keras loss output to a file", "input": "", "output": "

You can use the CSVLogger callback.

\n\n

For example:

\n\n
from keras.callbacks import CSVLogger\n\ncsv_logger = CSVLogger('log.csv', append=True, separator=';')\nmodel.fit(X_train, Y_train, callbacks=[csv_logger])\n
\n\n

Look at: Keras Callbacks

\n", "system": "" }, { "instruction": "How to compute Receiving Operating Characteristic (ROC) and AUC in keras?", "input": "", "output": "

Since you can't calculate ROC and AUC over mini-batches, you can only calculate them at the end of each epoch. There is a solution from jamartinh; I paste the code below for convenience:

\n
from sklearn.metrics import roc_auc_score\nfrom keras.callbacks import Callback\nclass RocCallback(Callback):\n    def __init__(self,training_data,validation_data):\n        self.x = training_data[0]\n        self.y = training_data[1]\n        self.x_val = validation_data[0]\n        self.y_val = validation_data[1]\n\n\n    def on_train_begin(self, logs={}):\n        return\n\n    def on_train_end(self, logs={}):\n        return\n\n    def on_epoch_begin(self, epoch, logs={}):\n        return\n\n    def on_epoch_end(self, epoch, logs={}):\n        y_pred_train = self.model.predict_proba(self.x)\n        roc_train = roc_auc_score(self.y, y_pred_train)\n        y_pred_val = self.model.predict_proba(self.x_val)\n        roc_val = roc_auc_score(self.y_val, y_pred_val)\n        print('\\rroc-auc_train: %s - roc-auc_val: %s' % (str(round(roc_train,4)),str(round(roc_val,4))),end=100*' '+'\\n')\n        return\n\n    def on_batch_begin(self, batch, logs={}):\n        return\n\n    def on_batch_end(self, batch, logs={}):\n        return\n\nroc = RocCallback(training_data=(X_train, y_train),\n                  validation_data=(X_test, y_test))\n\nmodel.fit(X_train, y_train, \n          validation_data=(X_test, y_test),\n          callbacks=[roc])\n
\n

A more hackable way using tf.contrib.metrics.streaming_auc:

\n
import numpy as np\nimport tensorflow as tf\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.datasets import make_classification\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.utils import np_utils\nfrom keras.callbacks import Callback, EarlyStopping\n\n\n# define roc_callback, inspired by https://github.com/keras-team/keras/issues/6050#issuecomment-329996505\ndef auc_roc(y_true, y_pred):\n    # any tensorflow metric\n    value, update_op = tf.contrib.metrics.streaming_auc(y_pred, y_true)\n\n    # find all variables created for this metric\n    metric_vars = [i for i in tf.local_variables() if 'auc_roc' in i.name.split('/')[1]]\n\n    # Add metric variables to GLOBAL_VARIABLES collection.\n    # They will be initialized for new session.\n    for v in metric_vars:\n        tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)\n\n    # force to update metric values\n    with tf.control_dependencies([update_op]):\n        value = tf.identity(value)\n        return value\n\n# generation a small dataset\nN_all = 10000\nN_tr = int(0.7 * N_all)\nN_te = N_all - N_tr\nX, y = make_classification(n_samples=N_all, n_features=20, n_classes=2)\ny = np_utils.to_categorical(y, num_classes=2)\n\nX_train, X_valid = X[:N_tr, :], X[N_tr:, :]\ny_train, y_valid = y[:N_tr, :], y[N_tr:, :]\n\n# model & train\nmodel = Sequential()\nmodel.add(Dense(2, activation="softmax", input_shape=(X.shape[1],)))\n\nmodel.compile(loss='categorical_crossentropy',\n              optimizer='adam',\n              metrics=['accuracy', auc_roc])\n\nmy_callbacks = [EarlyStopping(monitor='auc_roc', patience=300, verbose=1, mode='max')]\n\nmodel.fit(X, y,\n          validation_split=0.3,\n          shuffle=True,\n          batch_size=32, nb_epoch=5, verbose=1,\n          callbacks=my_callbacks)\n\n# # or use independent valid set\n# model.fit(X_train, y_train,\n#           validation_data=(X_valid, y_valid),\n#           batch_size=32, nb_epoch=5, verbose=1,\n#          
 callbacks=my_callbacks)\n
\n", "system": "" }, { "instruction": "How to use return_sequences option and TimeDistributed layer in Keras?", "input": "", "output": "

The LSTM layer and the TimeDistributed wrapper are two different ways to get the \"many to many\" relationship that you want.

\n\n
    \n
  1. LSTM will eat the words of your sentence one by one; you can choose via \"return_sequences\" to output something (the state) at each step (after each word is processed) or to output something only after the last word has been eaten. So with return_sequences=True, the output will be a sequence of the same length; with return_sequences=False, the output will be just one vector.
  2. \n
  3. TimeDistributed. This wrapper allows you to apply one layer (say Dense, for example) to every element of your sequence independently. That layer will have exactly the same weights for every element; the same layer is applied to each word, and it will, of course, return the sequence of words processed independently.
  4. \n
\n\n

As you can see, the difference between the two is that the LSTM propagates information through the sequence: it will eat one word, update its state and return it or not, then it will go on with the next word while still carrying information from the previous ones. In the TimeDistributed case, by contrast, the words are processed on their own, as if in silos, and the same layer is applied to every one of them.

\n\n

So you don't have to use LSTM and TimeDistributed in a row; you can do whatever you want, just keep in mind what each of them does.

\n\n

I hope that makes it clearer.

\n\n

EDIT:

\n\n

The TimeDistributed wrapper, in your case, applies a dense layer to every element that was output by the LSTM.

\n\n

Let's take an example:

\n\n

You have a sequence of n_words words that are embedded in emb_size dimensions. So your input is a 2D tensor of shape (n_words, emb_size)

\n\n

First you apply an LSTM with output dimension = lstm_output and return_sequences=True. The output will still be a sequence, so it will be a 2D tensor of shape (n_words, lstm_output). \nSo you have n_words vectors of length lstm_output.

\n\n

Now you apply a TimeDistributed dense layer with, say, a 3-dimensional output as the parameter of the Dense. So TimeDistributed(Dense(3)). \nThis will apply Dense(3) n_words times, to every vector of size lstm_output in your sequence independently... they will all become vectors of length 3. Your output will still be a sequence, so a 2D tensor, now of shape (n_words, 3).

\n\n
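To make the shape arithmetic concrete, here is a small numpy sketch (dimensions are made up for illustration): applying one shared weight matrix to every timestep independently is just a matrix product, which is what TimeDistributed(Dense(3)) amounts to:

```python
import numpy as np

n_words, lstm_output = 5, 8   # made-up dimensions
seq = np.random.rand(n_words, lstm_output)   # LSTM output with return_sequences=True

# one shared Dense(3): the SAME weights for every timestep
W = np.random.rand(lstm_output, 3)
b = np.random.rand(3)

out = seq @ W + b             # applied to each of the n_words vectors independently
print(out.shape)              # (5, 3)

# identical to looping over timesteps with the same weights
looped = np.stack([seq[t] @ W + b for t in range(n_words)])
print(np.allclose(out, looped))  # True
```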

Is it clearer? :-)

\n", "system": "" }, { "instruction": "What is the correct way to change image channel ordering between channels first and channels last?", "input": "", "output": "

I agree with @Qualia 's comment, np.moveaxis(a, source, destination) is easier to understand. This does the job:

\n\n
x = np.zeros((12, 12, 3))\nx.shape\n#yields: \n(12, 12, 3)\n\nx = np.moveaxis(x, -1, 0)\nx.shape\n#yields: \n(3, 12, 12)\n
\n", "system": "" }, { "instruction": "Show progress bar for each epoch during batchwise training in Keras", "input": "", "output": "
    \n
  1. \n
\n
model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)\n
\n

In the above, change to verbose=2, as mentioned in the documentation:

\n
\n

verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch

\n
\n

It'll show your output as:

\n
Epoch 1/100\n0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750\nEpoch 2/100\n0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250\nEpoch 3/100\n0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250\n.....\n.....\n
\n
    \n
  1. \n
\n

If you want to show a progress bar for completion of epochs, keep verbose=0 (which shuts out logging to stdout) and implement in the following manner:

\n
import sys\n\nepochs = 10\n\nfor e in range(epochs):\n    sys.stdout.write('\\r')\n\n    for X, y in data.next_batch():\n        model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0)\n\n    # print loss and accuracy\n\n    # the exact output you're looking for (integer division keeps this Python 3 safe):\n    sys.stdout.write("[%-60s] %d%%" % ('=' * (60 * (e + 1) // epochs), 100 * (e + 1) // epochs))\n    sys.stdout.flush()\n    sys.stdout.write(", epoch %d" % (e + 1))\n    sys.stdout.flush()\n
\n

The output will be as follows:

\n
[============================================================] 100%, epoch 10\n
\n
    \n
  1. \n
\n

If you want to show loss after every n batches, you can use:

\n
out_batch = NBatchLogger(display=1000)\nmodel.fit([X_train_aux,X_train_main],Y_train,batch_size=128,callbacks=[out_batch])\n
\n

Though, I haven't ever tried it before. The above example was taken from this keras github issue: Show Loss Every N Batches #2850

\n

You can also follow a demo of NBatchLogger here:

\n
class NBatchLogger(Callback):\n    def __init__(self, display):\n        self.seen = 0\n        self.display = display\n\n    def on_batch_end(self, batch, logs={}):\n        self.seen += logs.get('size', 0)\n        if self.seen % self.display == 0:\n            metrics_log = ''\n            for k in self.params['metrics']:\n                if k in logs:\n                    val = logs[k]\n                    if abs(val) > 1e-3:\n                        metrics_log += ' - %s: %.4f' % (k, val)\n                    else:\n                        metrics_log += ' - %s: %.4e' % (k, val)\n            print('{}/{} ... {}'.format(self.seen,\n                                        self.params['samples'],\n                                        metrics_log))\n
\n
    \n
  1. \n
\n

You can also use Progbar for progress, but it'll print progress batchwise:

\n
from keras.utils import generic_utils\n\nprogbar = generic_utils.Progbar(X_train.shape[0])\n\nfor X_batch, Y_batch in datagen.flow(X_train, Y_train):\n    loss, acc = model_test.train([X_batch]*2, Y_batch, accuracy=True)\n    progbar.add(X_batch.shape[0], values=[("train loss", loss), ("acc", acc)])\n
\n", "system": "" }, { "instruction": "Keras accuracy does not change", "input": "", "output": "

The most likely reason is that the optimizer is not suited to your dataset. Here is a list of Keras optimizers from the documentation.

\n\n

I recommend you first try SGD with default parameter values. If it still doesn't work, divide the learning rate by 10. Do that a few times if necessary. If your learning rate reaches 1e-6 and it still doesn't work, then you have another problem.

\n\n

In summary, replace this line:

\n\n
model.compile(loss = \"categorical_crossentropy\", optimizer = \"adam\")\n
\n\n

with this:

\n\n
from keras.optimizers import SGD\nopt = SGD(lr=0.01)\nmodel.compile(loss = \"categorical_crossentropy\", optimizer = opt)\n
\n\n

and change the learning rate a few times if it doesn't work.

\n\n

If it was the problem, you should see the loss getting lower after just a few epochs.

\n", "system": "" }, { "instruction": "How do I create a variable-length input LSTM in Keras?", "input": "", "output": "

I am not clear about the embedding procedure. But still, here is a way to implement a variable-length input LSTM: just do not specify the timespan dimension when building the LSTM.

\n\n\n\n
import numpy as np\nimport keras.backend as K\nfrom keras.layers import LSTM, Input\n\nI = Input(shape=(None, 200)) # unknown timespan, fixed feature size\nlstm = LSTM(20)\nf = K.function(inputs=[I], outputs=[lstm(I)])\n\ndata1 = np.random.random(size=(1, 100, 200)) # batch_size = 1, timespan = 100\nprint(f([data1])[0].shape)\n# (1, 20)\n\ndata2 = np.random.random(size=(1, 314, 200)) # batch_size = 1, timespan = 314\nprint(f([data2])[0].shape)\n# (1, 20)\n
\n", "system": "" }, { "instruction": "Meaning of validation_steps in Keras Sequential fit_generator parameter list", "input": "", "output": "

The validation generator works exactly like the training generator. You define how many batches it will yield per epoch.

\n\n\n\n

But validation data has absolutely no relation to training data. \nThere is no need to separate validation batches according to training batches (I would even say that there is no point in doing that, unless you have a very specific intention). Also, the total number of samples in training data is not related to the total number of samples in test data.

\n\n

The point of having many batches is just to spare your computer's memory, so you test smaller chunks one at a time. You probably want to find a batch size that fits your memory or expected training time and use that size.

\n\n

That said, Keras gives you complete freedom here, so you can determine the training and the validation batches as you wish.

\n\n

Epochs:

\n\n

Ideally, you use all your validation data at once. If you use only part of your validation data, you will get different metrics for each batch, which may make you think that your model got worse or better when it actually didn't; you just measured different validation sets.

\n\n

That's why they suggest validation_steps = total_validation_samples // validation_batch_size.
\nTheoretically, you test your entire data every epoch, as you theoretically should also train your entire data every epoch.

\n\n
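A minimal sketch of that suggestion (pure Python, names are illustrative): an infinite Keras-style generator plus the steps computed so that one epoch covers the whole validation set exactly once:

```python
def batch_generator(samples, batch_size):
    """Loop forever over the data, yielding one batch at a time (Keras-style)."""
    i = 0
    while True:
        yield samples[i:i + batch_size]
        i = (i + batch_size) % len(samples)

total_validation_samples = 100
validation_batch_size = 20
validation_steps = total_validation_samples // validation_batch_size  # 5

gen = batch_generator(list(range(total_validation_samples)), validation_batch_size)
seen = [item for _ in range(validation_steps) for item in next(gen)]
print(len(seen))  # 100 -> the whole validation set is used exactly once per epoch
```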

So, theoretically, each epoch yields:

\n\n\n\n

Basically, the two variables define how many batches per epoch you will yield.
\nThis makes sure that at each epoch:

\n\n\n\n

Nevertheless, it's totally up to you how you separate your training and validation data.

\n\n

If you do want to have one different batch per epoch (epochs using less than your entire data), it's ok, just pass steps_per_epoch=1 or validation_steps=1, for instance. The generator is not reset after each epoch, so the second epoch will take the second batch, and so on, until it loops again to the first batch.

\n\n

I prefer training the entire data per epoch, and if the time is too long, I use a callback that shows the logs at the end of each batch:

\n\n
from keras.callbacks import LambdaCallback\n\ncallbacks = [LambdaCallback(on_batch_end=lambda batch, logs: print(logs))]\n
\n\n

Multiprocessing

\n\n

I was never able to use use_multiprocessing=True; it freezes at the start of the first epoch.

\n\n

I've noticed the workers argument is related to how many batches are preloaded from the generator. If you define max_queue_size=1, you will have exactly workers batches preloaded.

\n\n

They suggest you use Keras Sequences when multiprocessing. A Sequence works pretty much like a generator, but it keeps track of the order/position of each batch.

\n", "system": "" }, { "instruction": "Keras Conv2D and input channels", "input": "", "output": "

It might be confusing that it is called Conv2D layer (it was to me, which is why I came looking for this answer), because as Nilesh Birari commented:

\n\n
\n

I guess you are missing it's 3D kernel [width, height, depth]. So the result is summation across channels.

\n
\n\n

Perhaps the 2D stems from the fact that the kernel only slides along two dimensions; the third dimension is fixed and determined by the number of input channels (the input depth).

\n\n
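To illustrate the channel summation, here is a numpy sketch of a single "valid" convolution position (sizes are made up): the kernel spans all input channels, and their contributions are summed into one output value per filter:

```python
import numpy as np

h, w, in_channels = 5, 5, 3          # e.g. a tiny RGB patch
kh, kw = 3, 3
x = np.random.rand(h, w, in_channels)
kernel = np.random.rand(kh, kw, in_channels)   # one filter: 3D, spanning all channels

# output value at position (0, 0): multiply-and-sum over height, width AND channels
out_00 = np.sum(x[:kh, :kw, :] * kernel)

# same thing as summing per-channel 2D convolutions
per_channel = [np.sum(x[:kh, :kw, c] * kernel[..., c]) for c in range(in_channels)]
print(np.isclose(out_00, sum(per_channel)))  # True
```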

For a more elaborate explanation, read https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/

\n\n

I plucked an illustrative image from there:

\n\n

\"kernel

\n", "system": "" }, { "instruction": "How do you create a custom activation function with Keras?", "input": "", "output": "

Credits to this Github issue comment by Ritchie Ng.

\n\n
# Creating a model\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\n# Custom activation function\nfrom keras.layers import Activation\nfrom keras import backend as K\nfrom keras.utils.generic_utils import get_custom_objects\n\n\ndef custom_activation(x):\n    return (K.sigmoid(x) * 5) - 1\n\nget_custom_objects().update({'custom_activation': Activation(custom_activation)})\n\n# Usage\nmodel = Sequential()\nmodel.add(Dense(32, input_dim=784))\nmodel.add(Activation(custom_activation, name='SpecialActivation'))\nprint(model.summary())\n
\n\n

Please keep in mind that you have to import this function when you save and restore the model. See the note of keras-contrib.

\n", "system": "" }, { "instruction": "Why does prediction needs batch size in Keras?", "input": "", "output": "

Keras can predict multiple values at the same time: if you input, say, 100 samples, Keras computes one prediction for each of them, giving 100 outputs. This computation can also be done in batches, defined by the batch_size.

\n\n
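Conceptually, prediction with a batch_size is just chunking the inputs, as in this plain-Python sketch (the function and "model" here are made up for illustration, not the Keras internals):

```python
# Sketch: how a framework splits N inputs into batches of batch_size for prediction
def predict_in_batches(inputs, batch_size, predict_one_batch):
    outputs = []
    for start in range(0, len(inputs), batch_size):
        batch = inputs[start:start + batch_size]   # at most batch_size items in memory
        outputs.extend(predict_one_batch(batch))
    return outputs

# a made-up "model" that doubles its inputs
outputs = predict_in_batches(list(range(100)), batch_size=32,
                             predict_one_batch=lambda b: [x * 2 for x in b])
print(len(outputs))   # 100 -> same results, just computed 32 at a time
```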

This is just in case you cannot fit all the data in the CPU/GPU RAM at the same time and batch processing is needed.

\n", "system": "" }, { "instruction": "What is the difference between an Embedding Layer and a Dense Layer?", "input": "", "output": "

An embedding layer is faster, because it is essentially the equivalent of a dense layer that makes simplifying assumptions.

\n\n

Imagine a word-to-embedding layer with these weights:

\n\n
w = [[0.1, 0.2, 0.3, 0.4],\n     [0.5, 0.6, 0.7, 0.8],\n     [0.9, 0.0, 0.1, 0.2]]\n
\n\n

A Dense layer will treat these like actual weights with which to perform matrix multiplication. An embedding layer will simply treat these weights as a list of vectors, each vector representing one word; the 0th word in the vocabulary is w[0], 1st is w[1], etc.

\n\n
\n\n

For an example, use the weights above and this sentence:

\n\n
[0, 2, 1, 2]\n
\n\n

A naive Dense-based net needs to convert that sentence to a 1-hot encoding

\n\n
[[1, 0, 0],\n [0, 0, 1],\n [0, 1, 0],\n [0, 0, 1]]\n
\n\n

then do a matrix multiplication

\n\n
[[1 * 0.1 + 0 * 0.5 + 0 * 0.9, 1 * 0.2 + 0 * 0.6 + 0 * 0.0, 1 * 0.3 + 0 * 0.7 + 0 * 0.1, 1 * 0.4 + 0 * 0.8 + 0 * 0.2],\n [0 * 0.1 + 0 * 0.5 + 1 * 0.9, 0 * 0.2 + 0 * 0.6 + 1 * 0.0, 0 * 0.3 + 0 * 0.7 + 1 * 0.1, 0 * 0.4 + 0 * 0.8 + 1 * 0.2],\n [0 * 0.1 + 1 * 0.5 + 0 * 0.9, 0 * 0.2 + 1 * 0.6 + 0 * 0.0, 0 * 0.3 + 1 * 0.7 + 0 * 0.1, 0 * 0.4 + 1 * 0.8 + 0 * 0.2],\n [0 * 0.1 + 0 * 0.5 + 1 * 0.9, 0 * 0.2 + 0 * 0.6 + 1 * 0.0, 0 * 0.3 + 0 * 0.7 + 1 * 0.1, 0 * 0.4 + 0 * 0.8 + 1 * 0.2]]\n
\n\n

=

\n\n
[[0.1, 0.2, 0.3, 0.4],\n [0.9, 0.0, 0.1, 0.2],\n [0.5, 0.6, 0.7, 0.8],\n [0.9, 0.0, 0.1, 0.2]]\n
\n\n
\n\n

However, an Embedding layer simply looks at [0, 2, 1, 2] and takes the weights of the layer at indices zero, two, one, and two to immediately get

\n\n
[w[0],\n w[2],\n w[1],\n w[2]]\n
\n\n

=

\n\n
[[0.1, 0.2, 0.3, 0.4],\n [0.9, 0.0, 0.1, 0.2],\n [0.5, 0.6, 0.7, 0.8],\n [0.9, 0.0, 0.1, 0.2]]\n
\n\n

So it's the same result, just obtained in a hopefully faster way.

\n\n
\n\n
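The equivalence above can be checked directly in numpy, using the same made-up weights and sentence:

```python
import numpy as np

w = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8],
              [0.9, 0.0, 0.1, 0.2]])
sentence = [0, 2, 1, 2]

# Dense-style: one-hot encode, then matrix-multiply
one_hot = np.eye(3)[sentence]          # shape (4, 3)
dense_result = one_hot @ w             # shape (4, 4)

# Embedding-style: plain row lookup
embedding_result = w[sentence]

print(np.allclose(dense_result, embedding_result))  # True
```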

The Embedding layer does have limitations:

\n\n\n\n

However, none of those limitations should matter if you just want to convert an integer-encoded word into an embedding.

\n", "system": "" }, { "instruction": "Keras: How to save model and continue training?", "input": "", "output": "

As it's quite difficult to clarify where the problem is, I created a toy example from your code, and it seems to work alright.

\n\n
import numpy as np\nfrom numpy.testing import assert_allclose\nfrom keras.models import Sequential, load_model\nfrom keras.layers import LSTM, Dropout, Dense\nfrom keras.callbacks import ModelCheckpoint\n\nvec_size = 100\nn_units = 10\n\nx_train = np.random.rand(500, 10, vec_size)\ny_train = np.random.rand(500, vec_size)\n\nmodel = Sequential()\nmodel.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(n_units, return_sequences=True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(n_units))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(vec_size, activation='linear'))\nmodel.compile(loss='mean_squared_error', optimizer='adam')\n\n# define the checkpoint\nfilepath = \"model.h5\"\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]\n\n# fit the model\nmodel.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)\n\n# load the model\nnew_model = load_model(filepath)\nassert_allclose(model.predict(x_train),\n                new_model.predict(x_train),\n                1e-5)\n\n# fit the model\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]\nnew_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)\n
\n\n

The loss continues to decrease after the model is loaded. (Restarting Python also causes no problem.)

\n\n
Using TensorFlow backend.\nEpoch 1/5\n500/500 [==============================] - 2s - loss: 0.3216     Epoch 00000: loss improved from inf to 0.32163, saving model to model.h5\nEpoch 2/5\n500/500 [==============================] - 0s - loss: 0.2923     Epoch 00001: loss improved from 0.32163 to 0.29234, saving model to model.h5\nEpoch 3/5\n500/500 [==============================] - 0s - loss: 0.2542     Epoch 00002: loss improved from 0.29234 to 0.25415, saving model to model.h5\nEpoch 4/5\n500/500 [==============================] - 0s - loss: 0.2086     Epoch 00003: loss improved from 0.25415 to 0.20860, saving model to model.h5\nEpoch 5/5\n500/500 [==============================] - 0s - loss: 0.1725     Epoch 00004: loss improved from 0.20860 to 0.17249, saving model to model.h5\n\nEpoch 1/5\n500/500 [==============================] - 0s - loss: 0.1454     Epoch 00000: loss improved from inf to 0.14543, saving model to model.h5\nEpoch 2/5\n500/500 [==============================] - 0s - loss: 0.1289     Epoch 00001: loss improved from 0.14543 to 0.12892, saving model to model.h5\nEpoch 3/5\n500/500 [==============================] - 0s - loss: 0.1169     Epoch 00002: loss improved from 0.12892 to 0.11694, saving model to model.h5\nEpoch 4/5\n500/500 [==============================] - 0s - loss: 0.1097     Epoch 00003: loss improved from 0.11694 to 0.10971, saving model to model.h5\nEpoch 5/5\n500/500 [==============================] - 0s - loss: 0.1057     Epoch 00004: loss improved from 0.10971 to 0.10570, saving model to model.h5\n
\n\n

BTW, redefining the model followed by load_weights() definitely won't work, because save_weights() and load_weights() do not save/load the optimizer state.

\n", "system": "" }, { "instruction": "When does keras reset an LSTM state?", "input": "", "output": "

Checking with some tests, I reached the following conclusion, which is in accordance with the documentation and with Nassim's answer:

\n

First, there isn't a single state in a layer, but one state per sample in the batch. There are batch_size parallel states in such a layer.

\n

Stateful=False

\n

With stateful=False, all the states are reset together after each batch.

\n\n

If all those sequences have length (timesteps) = 7, the practical result of these two batches is:

\n
\n

20 individual sequences, each with length 7

\n
\n

None of the sequences are related. But of course: the weights (not the states) will be unique for the layer, and will represent what the layer has learned from all the sequences.

\n\n

Stateful=True

\n

In this case, there is also the same number of parallel states, but they will simply not be reset at all.

\n\n

The practical result is: the 10 sequences in the second batch are just continuing the 10 sequences of the first batch, as if there had been no interruption at all.

\n

If each sequence has length (timesteps) = 7, then the actual meaning is:

\n
\n

10 individual sequences, each with length 14

\n
\n

When you see that you have reached the total length of the sequences, you call model.reset_states(), meaning you will not continue the previous sequences anymore; you will now start feeding new sequences.

\n", "system": "" }, { "instruction": "What is the difference between Keras' MaxPooling1D and GlobalMaxPooling1D functions?", "input": "", "output": "

TL;DR: GlobalMaxPooling1D for temporal data takes the max vector over the steps dimension. So a tensor with shape [10, 4, 10] becomes a tensor with shape [10, 10] after global pooling. MaxPooling1D also takes the max over the steps, but is constrained to a pool_size window for each stride. So a [10, 4, 10] tensor with pool_size=2 and stride=1 becomes a [10, 3, 10] tensor after MaxPooling1D(pool_size=2, strides=1).

\n
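Those shape claims can be verified with a quick numpy sketch (a max over the steps axis for global pooling, and a stride-1 sliding max for MaxPooling1D):

```python
import numpy as np

x = np.random.rand(10, 4, 10)    # (batch, steps, features)

# GlobalMaxPooling1D: max over the steps axis
global_pooled = x.max(axis=1)
print(global_pooled.shape)       # (10, 10)

# MaxPooling1D(pool_size=2, strides=1): sliding max over pairs of steps
pool_size, stride = 2, 1
windows = [x[:, i:i + pool_size, :].max(axis=1)
           for i in range(0, x.shape[1] - pool_size + 1, stride)]
local_pooled = np.stack(windows, axis=1)
print(local_pooled.shape)        # (10, 3, 10)
```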

Long answer with graphic aid

\n

Let's say we have a simple sentence with 4 words and some vector encoding for the words (like word2vec embeddings). Of course, you won't normally max pool over an embedding tensor, but this should do for an example. Also, global pooling works across channels, but I'll leave that out of this illustration. Finally, things get slightly more complicated with padding, but we don't need that here either.

\n

Suppose we have MaxPooling1D(pool_size=2, strides=1). Then

\n
the  [[.7, -0.2, .1]   | pool size is two                  \nboy   [.8, -.3,  .2]   | so look at two words at a time    | stride=1 will\nwill  [.2, -.1,  .4]     and take the max over those       | move the pool down\nlive  [.4  -.4,  .8]]    2 vectors. Here we looking         1 word. Now we look  \n                            'the' and 'boy'.                'boy' and 'will' and \n                                                            take the max.\n
\n

So that will result in a [1, 3, 3] tensor, with each timestep being the max over a 2D pool. And since we had 3 pools, we have effectively downsampled our timesteps from 4 to 3.

\n

However, if we use GlobalMaxPooling1D we will just take the max vector of that sentence (Tensor) which is probably the vector representation of the word 'live'.

\n

Indeed, here is how GlobalMaxPooling1D is defined in keras

\n
class GlobalMaxPooling1D(_GlobalPooling1D):\n    """Global max pooling operation for temporal data.\n    # Input shape\n        3D tensor with shape: `(batch_size, steps, features)`.\n    # Output shape\n        2D tensor with shape:\n        `(batch_size, features)`\n    """\n\n    def call(self, inputs):\n        return K.max(inputs, axis=1)\n
\n

Hopefully that helps; please ask me to clarify anything.

\n

Additionally, here is an example that you can play with:

\n
import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, GlobalMaxPooling1D, MaxPooling1D\n\nD = np.random.rand(10, 6, 10)\n\nmodel = Sequential()\nmodel.add(LSTM(16, input_shape=(6, 10), return_sequences=True))\nmodel.add(MaxPooling1D(pool_size=2, strides=1))\nmodel.add(LSTM(10))\nmodel.add(Dense(1))\nmodel.compile(loss='binary_crossentropy', optimizer='sgd')\n\n# print the summary to see how the dimension change after the layers are \n# applied\n\nprint(model.summary())\n\n# try a model with GlobalMaxPooling1D now\n\nmodel = Sequential()\nmodel.add(LSTM(16, input_shape=(6, 10), return_sequences=True))\nmodel.add(GlobalMaxPooling1D())\nmodel.add(Dense(1))\nmodel.compile(loss='binary_crossentropy', optimizer='sgd')\n\nprint(model.summary())\n
\n", "system": "" }, { "instruction": "How to switch Backend with Keras (from TensorFlow to Theano)", "input": "", "output": "

Create a .keras (note the . in front) folder in your home directory and put the keras.json file there.

\n\n

For example, /home/DaniPaniz/.keras/keras.json (or ~/.keras/keras.json in short) if you are on a UNIX-like system (macOS, Linux, *BSD). On Windows you want to create the folder %USERPROFILE%/.keras and put the JSON file there.

\n\n
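For reference, a typical keras.json selecting the Theano backend looks like this (the other fields shown are the usual Keras defaults):

```json
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
```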

Alternatively, you can also set the environment variable KERAS_BACKEND:

\n\n
KERAS_BACKEND=theano python mymodel.py\n
\n", "system": "" }, { "instruction": "preprocess_input() method in keras", "input": "", "output": "

Keras works with batches of images. So, the first dimension is used for the number of samples (or images) you have.

\n\n

When you load a single image, you get the shape of one image, which is (size1,size2,channels).

\n\n

In order to create a batch of images, you need an additional dimension: (samples, size1,size2,channels)

\n\n

The preprocess_input function is meant to adapt your image to the format the model requires.

\n\n

Some models use images with values ranging from 0 to 1, others from -1 to +1, and others use the \"caffe\" style, which is not normalized but centered.

\n\n
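Those three conventions can be sketched in numpy. This is illustrative only; real models ship their own preprocess_input, and the per-channel means below are the commonly quoted ImageNet BGR means, stated here as an assumption:

```python
import numpy as np

img = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")  # batch of 1

scaled_01 = img / 255.0             # 0..1 style
scaled_pm1 = img / 127.5 - 1.0      # -1..+1 style

# "caffe" style: RGB -> BGR, then subtract per-channel means (no scaling)
imagenet_bgr_means = np.array([103.939, 116.779, 123.68])  # assumed common values
caffe_style = img[..., ::-1] - imagenet_bgr_means

print(scaled_01.max() <= 1.0, scaled_pm1.min() >= -1.0)
```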

From the source code, Resnet is using the caffe style.

\n\n

You don't need to worry about the internal details of preprocess_input. But ideally, you should load images with the keras functions for that (so you guarantee that the images you load are compatible with preprocess_input).

\n", "system": "" }, { "instruction": "Received a label value of 1 which is outside the valid range of [0, 1) - Python, Keras", "input": "", "output": "

Range [0, 1) means every number between 0 and 1, excluding 1. So 1 is not a value in the range [0, 1).

\n\n

I am not 100% sure, but the issue could be due to your choice of loss function. For a binary classification, binary_crossentropy should be a better choice.

\n", "system": "" }, { "instruction": "cannot import name 'pad_sequences' from 'keras.preprocessing.sequence'", "input": "", "output": "

Replace:

\n
from keras.preprocessing.sequence import pad_sequences\n
\n

With:

\n
from keras_preprocessing.sequence import pad_sequences\n
\n", "system": "" }, { "instruction": "What values are returned from model.evaluate() in Keras?", "input": "", "output": "

Quoted from evaluate() method documentation:

\n
\n

Returns

\n

Scalar test loss (if the model has a single output and no metrics) or\nlist of scalars (if the model has multiple outputs and/or metrics).\nThe attribute model.metrics_names will give you the display labels\nfor the scalar outputs.

\n
\n

Therefore, you can use metrics_names property of your model to find out what each of those values corresponds to. For example:

\n
from keras import layers\nfrom keras import models\nimport numpy as np\n\ninput_data = layers.Input(shape=(100,)) \nout_1 = layers.Dense(1)(input_data)\nout_2 = layers.Dense(1)(input_data)\n\nmodel = models.Model(input_data, [out_1, out_2])\nmodel.compile(loss='mse', optimizer='adam', metrics=['mae'])\n\nprint(model.metrics_names)\n
\n

outputs the following:

\n
['loss', 'dense_1_loss', 'dense_2_loss', 'dense_1_mean_absolute_error', 'dense_2_mean_absolute_error']\n
\n

which indicates what each of those numbers you see in the output of evaluate method corresponds to.

\n

Further, if you have many layers then those dense_1 and dense_2 names might be a bit ambiguous. To resolve this ambiguity, you can assign names to your layers using name argument of layers (not necessarily on all of them but only on the input and output layers):

\n
# ...\nout_1 = layers.Dense(1, name='output_1')(input_data)\nout_2 = layers.Dense(1, name='output_2')(input_data)\n# ...\n\nprint(model.metrics_names)\n
\n

which outputs a more clear description:

\n
['loss', 'output_1_loss', 'output_2_loss', 'output_1_mean_absolute_error', 'output_2_mean_absolute_error']\n
\n", "system": "" }, { "instruction": "How to calculate precision and recall in Keras", "input": "", "output": "

Python package keras-metrics could be useful for this (I'm the package's author).

\n\n
import keras\nimport keras_metrics\n\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.Dense(1, activation=\"sigmoid\", input_dim=2))\nmodel.add(keras.layers.Dense(1, activation=\"softmax\"))\n\nmodel.compile(optimizer=\"sgd\",\n              loss=\"binary_crossentropy\",\n              metrics=[keras_metrics.precision(), keras_metrics.recall()])\n
\n\n
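Whichever implementation you use, the values are easy to sanity-check by hand from the prediction counts (a numpy sketch with made-up labels):

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # already thresholded predictions

tp = np.sum((y_pred == 1) & (y_true == 1))    # true positives: 3
fp = np.sum((y_pred == 1) & (y_true == 0))    # false positives: 1
fn = np.sum((y_pred == 0) & (y_true == 1))    # false negatives: 1

precision = tp / (tp + fp)   # 0.75
recall = tp / (tp + fn)      # 0.75
print(precision, recall)
```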

UPDATE: Starting with Keras version 2.3.0, metrics such as precision, recall, etc. are provided within the library distribution package.

\n\n

The usage is the following:

\n\n
model.compile(optimizer=\"sgd\",\n              loss=\"binary_crossentropy\",\n              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])\n
\n", "system": "" }, { "instruction": "What does the standard Keras model output mean? What is epoch and loss in Keras?", "input": "", "output": "

Just to answer the questions more specifically, here's a definition of epoch and loss:

\n\n

Epoch: A full pass over all of your training data.

\n\n

For example, in your view above, you have 1213 observations. So an epoch concludes when it has finished a training pass over all 1213 of your observations.

\n\n

Loss: A scalar value that we attempt to minimize during our training of the model. The lower the loss, the closer our predictions are to the true labels.

\n\n

This is usually Mean Squared Error (MSE), as David Maust said above, or, often in Keras, Categorical Cross Entropy.

\n\n
\n\n

What you'd expect to see from running fit on your Keras model is a decrease in loss over n epochs. Your training run is rather abnormal, as your loss is actually increasing. This could be due to a learning rate that is too large, which is causing you to overshoot optima.

\n\n

As jaycode mentioned, you will want to look at your model's performance on unseen data, as this is the general use case of Machine Learning.

\n\n

As such, you should include a list of metrics in your compile method, which could look like:

\n\n
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])\n
\n\n

As well as run your model on validation during the fit method, such as:

\n\n
model.fit(data, labels, validation_split=0.2)\n
\n\n
\n\n

There's a lot more to explain, but hopefully this gets you started.

\n", "system": "" }, { "instruction": "How to predict input image using trained model in Keras?", "input": "", "output": "

If someone is still struggling to make predictions on images, here is the optimized code to load the saved model and make predictions:

\n\n
# Modify 'test1.jpg' and 'test2.jpg' to the images you want to predict on\n\nfrom keras.models import load_model\nfrom keras.preprocessing import image\nimport numpy as np\n\n# dimensions of our images\nimg_width, img_height = 320, 240\n\n# load the model we saved\nmodel = load_model('model.h5')\nmodel.compile(loss='binary_crossentropy',\n              optimizer='rmsprop',\n              metrics=['accuracy'])\n\n# predicting images\nimg = image.load_img('test1.jpg', target_size=(img_width, img_height))\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\n\nimages = np.vstack([x])\nclasses = model.predict_classes(images, batch_size=10)\nprint(classes)\n\n# predicting multiple images at once\nimg = image.load_img('test2.jpg', target_size=(img_width, img_height))\ny = image.img_to_array(img)\ny = np.expand_dims(y, axis=0)\n\n# pass the list of multiple images to np.vstack()\nimages = np.vstack([x, y])\nclasses = model.predict_classes(images, batch_size=10)\n\n# print the classes the images belong to\nprint(classes)\nprint(classes[0])\nprint(classes[0][0])\n
\n", "system": "" }, { "instruction": "What does Keras.io.preprocessing.sequence.pad_sequences do?", "input": "", "output": "

pad_sequences is used to ensure that all sequences in a list have the same length. By default this is done by padding 0s at the beginning of each sequence until each sequence has the same length as the longest sequence.

\n\n

For example

\n\n
>>> pad_sequences([[1, 2, 3], [3, 4, 5, 6], [7, 8]])\narray([[0, 1, 2, 3],\n       [3, 4, 5, 6],\n       [0, 0, 7, 8]], dtype=int32)\n
\n\n

[3, 4, 5, 6] is the longest sequence, so the other sequences are padded with 0s until their length matches that of [3, 4, 5, 6].

\n\n

If you rather want to pad to the end of the sequences you can set padding='post'.

\n\n

If you want to specify the maximum length of each sequence you can use the maxlen argument. This will truncate all sequences longer than maxlen.

\n\n
>>> pad_sequences([[1, 2, 3], [3, 4, 5, 6], [7, 8]], maxlen=3)\narray([[1, 2, 3],\n       [4, 5, 6],\n       [0, 7, 8]], dtype=int32)\n
\n\n

Now each sequence has length 3 instead.

\n\n

According to the documentation, you can control truncation with the truncating argument of pad_sequences. By default truncating is set to pre, which truncates the beginning of each sequence. If you rather want to truncate the end of each sequence, you can set it to post.

\n", "system": "" }, { "instruction": "Calling "fit" multiple times in Keras", "input": "", "output": "
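The default pre-padding / pre-truncating behaviour described above (and both examples) can be sketched in plain Python - a simplification for illustration, not the actual Keras implementation:

```python
def pad_pre(sequences, maxlen=None, value=0):
    """Sketch of pad_sequences with padding='pre' and truncating='pre' (the defaults)."""
    if maxlen is None:
        maxlen = max(len(s) for s in sequences)  # pad up to the longest sequence
    padded = []
    for s in sequences:
        s = s[-maxlen:]  # truncating='pre': drop elements from the beginning
        padded.append([value] * (maxlen - len(s)) + s)  # padding='pre': zeros in front
    return padded

print(pad_pre([[1, 2, 3], [3, 4, 5, 6], [7, 8]]))
# [[0, 1, 2, 3], [3, 4, 5, 6], [0, 0, 7, 8]]
print(pad_pre([[1, 2, 3], [3, 4, 5, 6], [7, 8]], maxlen=3))
# [[1, 2, 3], [4, 5, 6], [0, 7, 8]]
```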

This question was raised at the Keras github repository in Issue #4446: Quick Question: can a model be fit for multiple times? It was closed by Fran\u00e7ois Chollet with the following statement:

\n\n
\n

Yes, successive calls to fit will incrementally train the model.

\n
\n\n

So, yes, you can call fit multiple times.

\n", "system": "" }, { "instruction": "What is the purpose of the add_loss function in Keras?", "input": "", "output": "
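To see what "incrementally" means, here is a tiny non-Keras sketch: a one-parameter model trained with SGD, where the second fit call resumes from the weight the first call ended with, rather than starting from scratch:

```python
import numpy as np

# Toy stand-in for a Keras model: one weight, MSE loss, plain SGD.
# Successive fit() calls continue from the current weight, which is
# exactly what successive model.fit() calls do in Keras.
class ToyModel:
    def __init__(self):
        self.w = 0.0

    def fit(self, x, y, epochs=10, lr=0.1):
        for _ in range(epochs):
            grad = np.mean(2 * (self.w * x - y) * x)  # d/dw of mean squared error
            self.w -= lr * grad
        return self.w

x = np.array([1.0, 2.0, 3.0])
y = 2 * x  # the true weight is 2

m = ToyModel()
w1 = m.fit(x, y, epochs=5)  # first call: moves toward 2
w2 = m.fit(x, y, epochs=5)  # second call: resumes, ends even closer to 2
```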

I'll try to answer the original question of why model.add_loss() is being used instead of specifying a custom loss function to model.compile(loss=...).

\n\n

All loss functions in Keras always take two parameters y_true and y_pred. Have a look at the definition of the various standard loss functions available in Keras, they all have these two parameters. They are the 'targets' (the Y variable in many textbooks) and the actual output of the model. Most standard loss functions can be written as an expression of these two tensors. But some more complex losses cannot be written in that way. For your VAE example this is the case because the loss function also depends on additional tensors, namely z_log_var and z_mean, which are not available to the loss functions. Using model.add_loss() has no such restriction and allows you to write much more complex losses that depend on many other tensors, but it has the inconvenience of being more dependent on the model, whereas the standard loss functions work with just any model.

\n\n
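Here is a small numpy illustration (with made-up values) of why such a loss cannot fit the (y_true, y_pred) signature: the KL term of the VAE loss also needs z_mean and z_log_var, which are intermediate tensors of the model, not its output:

```python
import numpy as np

# The full VAE loss takes four arguments, not two: the reconstruction term
# uses (y_true, y_pred), but the KL term needs the latent statistics.
def vae_loss(y_true, y_pred, z_mean, z_log_var):
    reconstruction = np.mean((y_true - y_pred) ** 2)
    kl = -0.5 * np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var))
    return reconstruction + kl

y_true = np.array([0.0, 1.0])
y_pred = np.array([0.1, 0.9])
z_mean = np.zeros(2)
z_log_var = np.zeros(2)

# with z_mean = 0 and z_log_var = 0 the KL term vanishes exactly,
# leaving only the reconstruction error of 0.01
loss = vae_loss(y_true, y_pred, z_mean, z_log_var)
```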

(Note: The code proposed in other answers here is somewhat cheating, inasmuch as it just uses global variables to sneak in the additional required dependencies. This makes the loss function not a true function in the mathematical sense. I consider this to be much less clean code and I expect it to be more error-prone.)

\n", "system": "" }, { "instruction": "What is the use of train_on_batch() in keras?", "input": "", "output": "

For this question, it's a simple answer from the primary author:

\n\n
\n

With fit_generator, you can use a generator for the validation data as\n well. In general, I would recommend using fit_generator, but using\n train_on_batch works fine too. These methods only exist for the sake of\n convenience in different use cases, there is no \"correct\" method.

\n
\n\n

train_on_batch allows you to expressly update weights based on a collection of samples you provide, without regard to any fixed batch size. You would use this in cases when that is what you want: to train on an explicit collection of samples. You could use that approach to maintain your own iteration over multiple batches of a traditional training set but allowing fit or fit_generator to iterate batches for you is likely simpler.

\n\n

One case when it might be nice to use train_on_batch is for updating a pre-trained model on a single new batch of samples. Suppose you've already trained and deployed a model, and sometime later you've received a new set of training samples previously never used. You could use train_on_batch to directly update the existing model only on those samples. Other methods can do this too, but it is rather explicit to use train_on_batch for this case.

\n\n
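The "maintain your own cursor over batches" idea can be sketched without Keras; in real code, model.train_on_batch(x[start:end], y[start:end]) would be called once per yielded slice:

```python
# Sketch of manual batch iteration, the pattern train_on_batch enables.
def iterate_batches(n_samples, batch_size):
    """Yield (start, end) index pairs covering the dataset once."""
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)

batches = list(iterate_batches(n_samples=10, batch_size=4))
# three batches: [0:4], [4:8], [8:10] -- the last one is smaller, which is
# fine, because train_on_batch does not assume any fixed batch size
```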

Apart from special cases like this (either where you have some pedagogical reason to maintain your own cursor across different training batches, or else for some type of semi-online training update on a special batch), it is probably better to just always use fit (for data that fits in memory) or fit_generator (for streaming batches of data as a generator).

\n", "system": "" }, { "instruction": "ImportError: cannot import name np_utils", "input": "", "output": "

np_utils is a separate package (and a keras dependency - which doesn't get installed with it). It can be installed using pip:

\n\n
pip install np_utils\n
\n\n

using - Keras==2.0.6

\n\n
\n\n

Suggestion:\nFor some odd (and still unknown) reason, even after installing it, the import

\n\n
from keras.utils.np_utils import to_categorical\n
\n\n

didn't work - I had to restart the notebook (the first restart didn't even work), and once it worked, I got stuck again on the same import call (it raised an exception for no module named tensorflow) - as in utils there's another import, from . import conv_utils, which requires tensorflow.

\n\n

I did try installing tensorflow; pip install tensorflow gave:

\n\n
\n

Could not find a version that satisfies the requirement tensorflow\n (from versions: ) No matching distribution found for tensorflow

\n
\n\n

even this gist didn't work for me.

\n\n
\n\n

Finally, I installed Anaconda - which has all the scientific packages (numpy, scipy, scikit-learn, ...) pre-installed. Then I installed keras:

\n\n
conda install keras\n
\n\n

Best thing was, it even installed tensorflow as its dependency.

\n", "system": "" }, { "instruction": "What is "epoch" in keras.models.Model.fit?", "input": "", "output": "

Here is how Keras documentation defines an epoch:

\n\n
\n

Epoch: an arbitrary cutoff, generally defined as \"one pass over the entire dataset\", used to separate training into distinct phases, which is useful for logging and periodic evaluation.

\n
\n\n

So, in other words, a number of epochs means how many times you go through your training set.

\n\n

The model is updated each time a batch is processed, which means that it can be updated multiple times during one epoch. If batch_size is set equal to the length of x, then the model will be updated once per epoch.

\n", "system": "" }, { "instruction": "How to determine needed memory of Keras model?", "input": "", "output": "
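The relationship between epochs, batches and weight updates is simple arithmetic - one update per batch:

```python
import math

# Number of weight updates per epoch: one update per batch,
# with a final smaller batch if batch_size doesn't divide n_samples evenly.
def updates_per_epoch(n_samples, batch_size):
    return math.ceil(n_samples / batch_size)

# e.g. 1000 samples with batch_size=32 -> 32 updates per epoch;
# batch_size equal to the dataset size -> a single update per epoch
```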

I created a complete function based on the answer of Fabr\u00edcio Pereira.

\n
def get_model_memory_usage(batch_size, model):\n    import numpy as np\n    try:\n        from keras import backend as K\n    except:\n        from tensorflow.keras import backend as K\n\n    shapes_mem_count = 0\n    internal_model_mem_count = 0\n    for l in model.layers:\n        layer_type = l.__class__.__name__\n        if layer_type == 'Model':\n            internal_model_mem_count += get_model_memory_usage(batch_size, l)\n        single_layer_mem = 1\n        out_shape = l.output_shape\n        if type(out_shape) is list:\n            out_shape = out_shape[0]\n        for s in out_shape:\n            if s is None:\n                continue\n            single_layer_mem *= s\n        shapes_mem_count += single_layer_mem\n\n    trainable_count = np.sum([K.count_params(p) for p in model.trainable_weights])\n    non_trainable_count = np.sum([K.count_params(p) for p in model.non_trainable_weights])\n\n    number_size = 4.0\n    if K.floatx() == 'float16':\n        number_size = 2.0\n    if K.floatx() == 'float64':\n        number_size = 8.0\n\n    total_memory = number_size * (batch_size * shapes_mem_count + trainable_count + non_trainable_count)\n    gbytes = np.round(total_memory / (1024.0 ** 3), 3) + internal_model_mem_count\n    return gbytes\n
\n

UPDATE 2019.10.06: Added support for models which contain other models as layers.

\n

UPDATE 2020.07.17: Function now works correctly in TensorFlow v2.

\n", "system": "" }, { "instruction": "In Keras, how to get the layer name associated with a "Model" object contained in my model?", "input": "", "output": "
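The arithmetic the function performs can be checked by hand for a hypothetical toy model, Input(100) -> Dense(64) -> Dense(10), in float32 with batch_size=128 (a rough estimate only; real usage includes framework overhead the function ignores):

```python
# By-hand version of the estimate above for a toy dense model.
batch_size = 128
bytes_per_number = 4.0  # float32

# activation ("shapes") memory: sum of layer output sizes, scaled by batch size
shapes_mem = 64 + 10

# parameters: weights + biases of each Dense layer
params = (100 * 64 + 64) + (64 * 10 + 10)  # 6464 + 650 = 7114

total_bytes = bytes_per_number * (batch_size * shapes_mem + params)
```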

The key is to first do .get_layer on the Model object, then do another .get_layer on that specifying the specific vgg16 layer, THEN do .output:

\n\n

layer_output = model.get_layer('vgg16').get_layer('block3_conv1').output

\n", "system": "" }, { "instruction": "Feature Importance Chart in neural network using Keras in Python", "input": "", "output": "

I was recently looking for the answer to this question and found something that was useful for what I was doing and thought it would be helpful to share. I ended up using a permutation importance module from the eli5 package. It most easily works with a scikit-learn model. Luckily, Keras provides a wrapper for sequential models. As shown in the code below, using it is very straightforward.

\n\n
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor\nimport eli5\nfrom eli5.sklearn import PermutationImportance\n\ndef base_model():\n    model = Sequential()        \n    ...\n    return model\n\nX = ...\ny = ...\n\nmy_model = KerasRegressor(build_fn=base_model, **sk_params)    \nmy_model.fit(X,y)\n\nperm = PermutationImportance(my_model, random_state=1).fit(X,y)\neli5.show_weights(perm, feature_names = X.columns.tolist())\n
\n", "system": "" }, { "instruction": "How can I print the values of Keras tensors?", "input": "", "output": "

Keras' backend has print_tensor which enables you to do this. You can use it this way:

\n\n
import keras.backend as K\n\ndef loss_fn(y_true, y_pred):\n    y_true = K.print_tensor(y_true, message='y_true = ')\n    y_pred = K.print_tensor(y_pred, message='y_pred = ')\n    ...\n
\n\n

The function returns an identical tensor. When that tensor is evaluated, it will print its content, preceded by message.\nFrom the Keras docs:

\n\n
\n

Note that print_tensor returns a new tensor identical to x which should be used in the following code. Otherwise the print operation is not taken into account during evaluation.

\n
\n\n

So, make sure to use the tensor afterwards.

\n", "system": "" }, { "instruction": "Keras not using multiple cores", "input": "", "output": "

Keras and TF by themselves don't use all the cores and full capacity of your CPU. If you want to use 100% of your CPU, multiprocessing.Pool can help: it basically creates a pool of jobs that need doing. The processes will pick up these jobs and run them. When a job is finished, a process will pick up another job from the pool.

\n\n

NB: If you want to just speed up this model, look into GPUs or changing the hyperparameters like batch size and number of neurons (layer size).

\n\n

Here's how you can use multiprocessing to train multiple models at the same time (using processes running in parallel on each separate CPU core of your machine).

\n\n

This answer was inspired by @repploved.

\n\n
import time\nimport signal\nimport multiprocessing\n\ndef init_worker():\n    ''' Add KeyboardInterrupt exception to mutliprocessing workers '''\n    signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n\ndef train_model(layer_size):\n    '''\n    This code is parallelized and runs on each process\n    It trains a model with different layer sizes (hyperparameters)\n    It saves the model and returns the score (error)\n    '''\n    import keras\n    from keras.models import Sequential\n    from keras.layers import Dense\n\n    print(f'Training a model with layer size {layer_size}')\n\n    # build your model here\n    model_RNN = Sequential()\n    model_RNN.add(Dense(layer_size))\n\n    # fit the model (the bit that takes time!)\n    model_RNN.fit(...)\n\n    # lets demonstrate with a sleep timer\n    time.sleep(5)\n\n    # save trained model to a file\n    model_RNN.save(...)\n\n    # you can also return values eg. the eval score\n    return model_RNN.evaluate(...)\n\n\nnum_workers = 4\nhyperparams = [800, 960, 1100]\n\npool = multiprocessing.Pool(num_workers, init_worker)\n\nscores = pool.map(train_model, hyperparams)\n\nprint(scores)\n
\n\n

Output:

\n\n
Training a model with layer size 800\nTraining a model with layer size 960\nTraining a model with layer size 1100\n[{'size':960,'score':1.0}, {'size':800,'score':1.2}, {'size':1100,'score':0.7}]\n
\n\n

This is easily demonstrated with a time.sleep in the code. You'll see that all 3 processes start the training job, and then they all finish at about the same time. If this ran in a single process, you'd have to wait for each to finish before starting the next (yawn!).

\n", "system": "" }, { "instruction": "How to understand SpatialDropout1D and when to use it?", "input": "", "output": "

To make it simple, I would first note that the so-called feature maps (1D, 2D, etc.) are our regular channels. Let's look at examples:

\n\n
    \n
  1. Dropout(): Let's define 2D input: [[1, 1, 1], [2, 2, 2]]. Dropout will consider every element independently, and may result in something like [[1, 0, 1], [0, 2, 2]]

  2. \n
  3. SpatialDropout1D(): In this case the result will look like [[1, 0, 1], [2, 0, 2]]. Notice that the 2nd channel was zeroed across all timesteps, i.e. a whole feature map is dropped at once.

  4. \n
\n", "system": "" }, { "instruction": "Loading model with custom loss + keras", "input": "", "output": "
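The difference can be sketched with numpy masks - my own illustration of the two noise shapes, not Keras code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[1., 1., 1.], [2., 2., 2.]])  # (timesteps=2, channels=3)

# Regular Dropout: an independent keep/drop decision for every element
elementwise_mask = (rng.random(x.shape) > 0.5).astype(float)       # shape (2, 3)
regular = x * elementwise_mask

# SpatialDropout1D-style: one decision per channel, broadcast over timesteps,
# so an entire feature map (column) is kept or zeroed together
channel_mask = (rng.random((1, x.shape[1])) > 0.5).astype(float)   # shape (1, 3)
spatial = x * channel_mask
```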

Yes, there is! custom_objects expects the exact function that you used as loss function (the inner one in your case):

\n\n
model = load_model(modelFile, custom_objects={ 'loss': penalized_loss(noise) })\n
\n\n

Unfortunately Keras won't store the value of noise in the model, so you need to feed it to the load_model function manually.

\n", "system": "" }, { "instruction": "What is "metrics" in Keras?", "input": "", "output": "

So in order to understand what metrics are, it's good to start by understanding what a loss function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a loss function.

\n\n

A loss is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. So, knowing this, we could fully define what a metric is: it's a function that, given predicted values and ground truth values from examples, provides you with a scalar measure of a \"fitness\" of your model, to the data you have. So, as you may see, a loss function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of metrics usage:

\n\n
    \n
  1. Measure the performance of your network using non-differentiable functions: e.g. accuracy is not differentiable (not even continuous), so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy.

  2. \n
  3. Obtain values of different loss functions when your final loss is a combination of a few of them: Let's assume that your loss has a regularization term which measures how your weights differ from 0, and a term which measures the fitness of your model. In this case, you could use metrics in order to have a separate track of how the fitness of your model changes across epochs.

  4. \n
  5. Track a measure with respect to which you don't want to directly optimize your model: so - let's assume that you are solving a multidimensional regression problem where you are mostly concerned about mse, but at the same time you are interested in how the cosine distance of your solution changes over time. Then it's best to use metrics.

  6. \n
\n\n

I hope that the explanation presented above made obvious what metrics are used for, and why you could use multiple metrics in one model. So now, let's say a few words about mechanics of their usage in keras. There are two ways of computing them while training:

\n\n
    \n
  1. Using metrics defined while compilation: this is what you directly asked. In this case, keras is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of keras.backend functions.

  2. \n
  3. Using keras.callback: It is nice that you can use Callbacks in order to compute your metrics. As each callback has a default attribute of model, you could compute a variety of metrics using model.predict or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own.

  4. \n
\n\n

Here you can find a list of available metrics, as well as an example on how you could define your own.

\n", "system": "" }, { "instruction": "RMSE/ RMSLE loss function in Keras", "input": "", "output": "

When you use a custom loss, you need to put it without quotes, as you pass the function object, not a string:

\n\n
def root_mean_squared_error(y_true, y_pred):\n        return K.sqrt(K.mean(K.square(y_pred - y_true))) \n\nmodel.compile(optimizer = \"rmsprop\", loss = root_mean_squared_error, \n              metrics =[\"accuracy\"])\n
\n", "system": "" }, { "instruction": "Getting gradient of model output w.r.t weights using Keras", "input": "", "output": "
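The question also mentions RMSLE; here is a numpy sketch of both metrics. The RMSLE shown applies log1p before the same formula, which is one common definition - double-check it matches the exact metric you need:

```python
import numpy as np

# Numpy counterpart of the custom Keras loss above
def rmse(y_true, y_pred):
    return np.sqrt(np.mean(np.square(y_pred - y_true)))

# RMSLE: RMSE in log space; log1p avoids log(0) for zero-valued targets
def rmsle(y_true, y_pred):
    return rmse(np.log1p(y_true), np.log1p(y_pred))

err = rmse(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 5.0]))
# sqrt(mean([0, 0, 4])) = sqrt(4/3)
```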

To get the gradients of model output with respect to weights using Keras you have to use the Keras backend module. I created this simple example to illustrate exactly what to do:

\n
from keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras import backend as k\n\n\nmodel = Sequential()\nmodel.add(Dense(12, input_dim=8, init='uniform', activation='relu'))\nmodel.add(Dense(8, init='uniform', activation='relu'))\nmodel.add(Dense(1, init='uniform', activation='sigmoid'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

To calculate the gradients we first need to find the output tensor. For the output of the model (what my initial question asked) we simply call model.output. We can also find the gradients of outputs for other layers by calling model.layers[index].output

\n
outputTensor = model.output #Or model.layers[index].output\n
\n

Then we need to choose the variables with respect to which we will take the gradient.

\n
  listOfVariableTensors = model.trainable_weights\n  #or variableTensors = model.trainable_weights[0]\n
\n

We can now calculate the gradients. It is as easy as the following:

\n
gradients = k.gradients(outputTensor, listOfVariableTensors)\n
\n

To actually run the gradients given an input, we need to use a bit of Tensorflow.

\n
import numpy as np\nimport tensorflow as tf\n\ntrainingExample = np.random.random((1,8))\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\nevaluated_gradients = sess.run(gradients,feed_dict={model.input:trainingExample})\n
\n

And that's it!

\n", "system": "" }, { "instruction": "Running the Tensorflow 2.0 code gives 'ValueError: tf.function-decorated function tried to create variables on non-first call'. What am I doing wrong?", "input": "", "output": "

As you are trying to use the function decorator in TF 2.0, enable eager execution of functions by adding the line below after importing TensorFlow:

\n
tf.config.experimental_run_functions_eagerly(True)\n
\n

Since the above is deprecated (it is no longer experimental), please use the following instead:

\n
tf.config.run_functions_eagerly(True)\n\n
\n

If you want to know more do refer to this link.

\n", "system": "" }, { "instruction": "How do you use Keras LeakyReLU in Python?", "input": "", "output": "

All advanced activations in Keras, including LeakyReLU, are available as layers, and not as activations; therefore, you should use it as such:

\n\n
from keras.layers import LeakyReLU\n\n# instead of cnn_model.add(Activation('relu'))\n# use\ncnn_model.add(LeakyReLU(alpha=0.1))\n
\n", "system": "" }, { "instruction": "Batch normalization instead of input normalization", "input": "", "output": "
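For reference, LeakyReLU(alpha=0.1) computes the following elementwise function, sketched here in numpy:

```python
import numpy as np

# x if x > 0, else alpha * x: small negative inputs "leak" through
# instead of being clipped to zero as with plain ReLU
def leaky_relu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)

out = leaky_relu(np.array([-2.0, 0.0, 3.0]))
# [-0.2, 0.0, 3.0]
```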

You can do it. But the nice thing about batchnorm, in addition to activation distribution stabilization, is that the mean and standard deviation are likely to migrate as the network learns.

\n\n

Effectively, setting the batchnorm right after the input layer is a fancy data pre-processing step. It helps, sometimes a lot (e.g. in linear regression). But it's easier and more efficient to compute the mean and variance of the whole training sample once, than learn it per-batch. Note that batchnorm isn't free in terms of performance and you shouldn't abuse it.

\n\n
\n", "system": "" }, { "instruction": "How to calculate optimal batch size?", "input": "", "output": "
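The trade-off can be seen numerically: per-batch statistics are noisy estimates of the global ones, which is why computing the mean and variance of the whole training sample once is both cheaper and more stable as a pre-processing step (synthetic data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# Pre-processing: one fixed mean/std computed over the whole training set
global_mean, global_std = data.mean(), data.std()

# Batchnorm-style: each batch estimates its own statistics, which fluctuate
# from batch to batch (100 batches of 100 samples each)
batch_means = [batch.mean() for batch in np.split(data, 100)]

spread = np.std(batch_means)  # nonzero: per-batch estimates are noisy
```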

From the recent Deep Learning book by Goodfellow et al., chapter 8:

\n
\n

Minibatch sizes are generally driven by the following factors:

\n\n
\n

Which in practice usually means "in powers of 2 and the larger the better, provided that the batch fits into your (GPU) memory".

\n
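The rule of thumb above can be sketched as a simple search for the largest power of 2 that fits in memory; the per-sample memory figure here is a made-up placeholder - in practice you would measure it for your own model:

```python
# Largest power-of-2 batch size that fits in the given memory budget.
def max_pow2_batch(memory_bytes, bytes_per_sample):
    batch = 1
    while batch * 2 * bytes_per_sample <= memory_bytes:
        batch *= 2
    return batch

# e.g. 8 GB of GPU memory and ~1 MB needed per sample -> batch size 8192
best = max_pow2_batch(8 * 1024**3, 1024**2)
```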

You might also want to consult several good posts here on Stack Exchange:

\n\n

Just keep in mind that the paper by Keskar et al. 'On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima', quoted by several of the posts above, has received some objections by other respectable researchers of the deep learning community.

\n

UPDATE (Dec 2017):

\n

There is a new paper by Yoshua Bengio & team, Three Factors Influencing Minima in SGD (Nov 2017); it is worth reading in the sense that it reports new theoretical & experimental results on the interplay between learning rate and batch size.

\n

UPDATE (Mar 2021):

\n

Of interest here is also another paper from 2018, Revisiting Small Batch Training for Deep Neural Networks (h/t to Nicolas Gervais), which runs contrary to the larger the better advice; quoting from the abstract:

\n
\n

The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.

\n
\n", "system": "" }, { "instruction": "Sklearn StratifiedKFold: ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead", "input": "", "output": "

keras.utils.to_categorical produces a one-hot encoded class vector, i.e. the multilabel-indicator mentioned in the error message. StratifiedKFold is not designed to work with such input; from the split method docs:

\n\n
\n

split(X, y, groups=None)

\n \n

[...]

\n \n

y : array-like, shape (n_samples,)

\n \n

The target variable for supervised learning problems. Stratification is done based on the y labels.

\n
\n\n

i.e. your y must be a 1-D array of your class labels.

\n\n

Essentially, what you have to do is simply invert the order of the operations: split first (using your initial y_train), and convert with to_categorical afterwards.

\n", "system": "" }, { "instruction": "Keras conv1d layer parameters: filters and kernel_size", "input": "", "output": "
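A numpy sketch of the corrected order of operations, with np.eye standing in for keras.utils.to_categorical (illustration only):

```python
import numpy as np

# 1-D integer labels: this is the form StratifiedKFold can split on
y_train = np.array([0, 1, 2, 1, 0, 2])

# one-hot encoded version ("multilabel-indicator"): NOT accepted by StratifiedKFold,
# so only produce it AFTER splitting, per fold
y_onehot = np.eye(3)[y_train]

# if you already have one-hot labels, argmax recovers the 1-D form to split on
recovered = np.argmax(y_onehot, axis=1)
```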

You're right to say that kernel_size defines the size of the sliding window.

\n\n

The filters parameter is just how many different windows you will have (all of them with the same length, which is kernel_size): how many different results or channels you want to produce.

\n\n

When you use filters=100 and kernel_size=4, you are creating 100 different filters, each of them with length 4. The result will bring 100 different convolutions.

\n\n

Also, each filter has enough parameters to consider all input channels.

\n\n
\n\n

The Conv1D layer expects these dimensions:

\n\n
(batchSize, length, channels)\n
\n\n

I suppose the best way to use it is to have the number of words in the length dimension (as if the words in order formed a sentence), and the channels be the output dimension of the embedding (numbers that define one word).

\n\n

So:

\n\n
batchSize = number of sentences    \nlength = number of words in each sentence   \nchannels = dimension of the embedding's output.    \n
\n\n

The convolutional layer will pass 100 different filters, each filter will slide along the length dimension (word by word, in groups of 4), considering all the channels that define the word.

\n\n

The outputs are shaped as:

\n\n
(number of sentences, 50 words, 100 output dimension or filters)   \n
\n\n

The filters are shaped as:

\n\n
(4 = length, 300 = word vector dimension, 100 output dimension of the convolution)  \n
\n", "system": "" }, { "instruction": "How to calculate F1 Macro in Keras?", "input": "", "output": "
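As a sanity check on the shapes, the output length along the words dimension follows the usual convolution formula for padding='valid' (Conv1D's default); the "50 words" in the output shape above corresponds to padding='same', which preserves the length:

```python
# Output length of a 1D convolution with padding='valid':
# out_len = (length - kernel_size) // stride + 1
def conv1d_out_len(length, kernel_size, stride=1):
    return (length - kernel_size) // stride + 1

# 50 words with kernel_size=4 gives 47 sliding-window positions
out = conv1d_out_len(50, 4)
```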

Since Keras 2.0, the metrics f1, precision, and recall have been removed. The solution is to use a custom metric function:

\n\n
from keras import backend as K\n\ndef f1(y_true, y_pred):\n    def recall(y_true, y_pred):\n        """Recall metric.\n\n        Only computes a batch-wise average of recall.\n\n        Computes the recall, a metric for multi-label classification of\n        how many relevant items are selected.\n        """\n        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))\n        recall = true_positives / (possible_positives + K.epsilon())\n        return recall\n\n    def precision(y_true, y_pred):\n        """Precision metric.\n\n        Only computes a batch-wise average of precision.\n\n        Computes the precision, a metric for multi-label classification of\n        how many selected items are relevant.\n        """\n        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))\n        precision = true_positives / (predicted_positives + K.epsilon())\n        return precision\n    precision = precision(y_true, y_pred)\n    recall = recall(y_true, y_pred)\n    return 2*((precision*recall)/(precision+recall+K.epsilon()))\n\n\nmodel.compile(loss='binary_crossentropy',\n          optimizer= "adam",\n          metrics=[f1])\n
\n

The return line of this function

\n
return 2*((precision*recall)/(precision+recall+K.epsilon()))\n
\n

was modified by adding the constant epsilon, in order to avoid division by 0. Thus NaN will not be computed.

\n", "system": "" }, { "instruction": "how to implement custom metric in keras?", "input": "", "output": "
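You can sanity-check the formula with numpy on plain 0/1 arrays; note this sketch assumes already-binarized predictions, whereas the Keras version rounds and clips first:

```python
import numpy as np

# Numpy check of the same precision/recall + epsilon-protected F1 formula
def f1_np(y_true, y_pred, eps=1e-7):
    tp = np.sum(y_true * y_pred)
    precision = tp / (np.sum(y_pred) + eps)
    recall = tp / (np.sum(y_true) + eps)
    return 2 * precision * recall / (precision + recall + eps)

# precision = recall = 0.5 -> F1 = 0.5 (up to the epsilon terms)
score = f1_np(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]))
```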

Here I'm answering the OP's topic question rather than their exact problem. I'm doing this because the question shows up at the top when I google the topic.

\n\n

You can implement a custom metric in two ways.

\n\n
    \n
  1. As mentioned in the Keras docs.\n

    \n\n
    import keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n    return K.mean(y_pred)\n\nmodel.compile(optimizer='sgd',\n          loss='binary_crossentropy',\n          metrics=['accuracy', mean_pred])\n
    \n\n

    But here you have to remember, as mentioned in Marcin Mo\u017cejko's answer, that y_true and y_pred are tensors. So in order to correctly calculate the metric you need to use keras.backend functionality. Please look at this SO question for details: How to calculate F1 Macro in Keras?

  2. \n
  3. Or you can implement it in a hacky way, as mentioned in a Keras GH issue. For that you need to use the callbacks argument of model.fit.\n

    \n\n
    import keras as keras\nimport numpy as np\nfrom keras.optimizers import SGD\nfrom sklearn.metrics import roc_auc_score\n\nmodel = keras.models.Sequential()\n# ...\nsgd = SGD(lr=0.001, momentum=0.9)\nmodel.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])\n\n\nclass Metrics(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self._data = []\n\n    def on_epoch_end(self, batch, logs={}):\n        X_val, y_val = self.validation_data[0], self.validation_data[1]\n        y_predict = np.asarray(model.predict(X_val))\n\n        y_val = np.argmax(y_val, axis=1)\n        y_predict = np.argmax(y_predict, axis=1)\n\n        self._data.append({\n            'val_rocauc': roc_auc_score(y_val, y_predict),\n        })\n        return\n\n    def get_data(self):\n        return self._data\n\nmetrics = Metrics()\nhistory = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[metrics])\nmetrics.get_data()\n
  4. \n
\n", "system": "" }, { "instruction": "How to insert Keras model into scikit-learn pipeline?", "input": "", "output": "

You need to wrap your Keras model as a Scikit learn model first and then proceed as usual.

\n

Here's a quick example (I've omitted the imports for brevity)

\n
\n

Here is a full blog post with this one and many other examples: Scikit-learn Pipeline Examples

\n
\n
# create a function that returns a model, taking as parameters things you\n# want to verify using cross-validation and model selection\ndef create_model(optimizer='adagrad',\n                  kernel_initializer='glorot_uniform', \n                  dropout=0.2):\n    model = Sequential()\n    model.add(Dense(64,activation='relu',kernel_initializer=kernel_initializer))\n    model.add(Dropout(dropout))\n    model.add(Dense(1,activation='sigmoid',kernel_initializer=kernel_initializer))\n\n    model.compile(loss='binary_crossentropy',optimizer=optimizer, metrics=['accuracy'])\n\n    return model\n\n# wrap the model using the function you created (KerasClassifier, since this\n# model is a binary classifier)\nclf = KerasClassifier(build_fn=create_model,verbose=0)\n\n# just create the pipeline\npipeline = Pipeline([\n    ('clf',clf)\n])\n\npipeline.fit(X_train, y_train)\n
\n", "system": "" }, { "instruction": "Can Keras deal with input images with different size?", "input": "", "output": "

Yes. Just change your input shape to shape=(n_channels, None, None), where n_channels is the number of channels in your input image.

\n\n

I'm using the Theano backend though, so if you are using TensorFlow you might have to change it to (None, None, n_channels).

\n\n
\n

You should use:

\n \n

input_shape=(1, None, None)

\n \n

None in a shape denotes a variable dimension. Note that not all layers\n will work with such variable dimensions, since some layers require\n shape information (such as Flatten).\n https://github.com/fchollet/keras/issues/1920

\n
\n\n

For example, using keras's functional API your input layer would be:

\n\n

For a RGB dataset

\n\n
inp = Input(shape=(3,None,None))\n
\n\n

For a Gray dataset

\n\n
inp = Input(shape=(1,None,None))\n
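One caveat worth spelling out: a Flatten layer cannot follow such an input, because the flattened length depends on the image size, whereas global pooling produces a fixed-length vector regardless of the spatial dimensions. A quick numpy sketch of why (hypothetical feature maps, not Keras code):

```python
import numpy as np

def global_average_pool(feature_map):
    """Average over the spatial axes of a (height, width, channels) array."""
    return feature_map.mean(axis=(0, 1))

small = np.random.rand(32, 32, 16)   # 32x32 feature map, 16 channels
large = np.random.rand(128, 96, 16)  # different spatial size, same channels

# both collapse to a length-16 vector, so a following Dense layer
# always sees the same input size no matter the image dimensions
print(global_average_pool(small).shape)  # (16,)
print(global_average_pool(large).shape)  # (16,)
```

This is why fully convolutional models typically end with a global pooling layer instead of Flatten when the input size is variable.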
\n", "system": "" }, { "instruction": "How to tell which Keras model is better?", "input": "", "output": "
\n

Do I use the \"acc\" (from the training data?) one or the \"val acc\" (from the validation data?) one?

\n
\n\n

If you want to estimate the ability of your model to generalize to new data (which is probably what you want to do), then you look at the validation accuracy, because the validation split contains only data that the model never sees during training and therefore cannot just memorize.

\n\n

If your training data accuracy (\"acc\") keeps improving while your validation data accuracy (\"val_acc\") gets worse, you are likely in an overfitting situation, i.e. your model starts to basically just memorize the data.

\n\n
\n

There are different accs and val accs for each epoch. How do I know the acc or val acc for my model as a whole? Do I average all of the epochs accs or val accs to find the acc or val acc of the model as a whole?

\n
\n\n

Each epoch is a training run over all of your data. During that run the parameters of your model are adjusted according to your loss function. The result is a set of parameters which have a certain ability to generalize to new data. That ability is reflected by the validation accuracy. So think of every epoch as its own model, which can get better or worse if it is trained for another epoch. Whether it got better or worse is judged by the change in validation accuracy (better = validation accuracy increased). Therefore pick the model of the epoch with the highest validation accuracy. Don't average the accuracies over different epochs, that wouldn't make much sense. You can use the Keras callback ModelCheckpoint to automatically save the model with the highest validation accuracy (see callbacks documentation).

\n\n
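Picking the best epoch from a training history boils down to an argmax over the per-epoch validation accuracies; a tiny sketch with made-up numbers (only the 0.7737 figure comes from the question):

```python
import numpy as np

# hypothetical per-epoch validation accuracies
val_acc_history = [0.71, 0.74, 0.7737, 0.75, 0.76]

best_epoch = int(np.argmax(val_acc_history))  # 0-based index
print(best_epoch, val_acc_history[best_epoch])
```

ModelCheckpoint with save_best_only=True automates exactly this selection during training.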

The highest accuracy in model 1 is 0.7737 and the highest one in model 2 is 0.7572. Therefore you should view model 1 (at epoch 3) as better. Though it is possible that the 0.7737 was just a random outlier.

\n", "system": "" }, { "instruction": "Shuffling training data with LSTM RNN", "input": "", "output": "

In general, when you shuffle the training data (a set of sequences), you shuffle the order in which sequences are fed to the RNN; you don't shuffle the ordering within individual sequences. This is fine to do when your network is stateless:

\n\n

Stateless Case:

\n\n

The network's memory only persists for the duration of a sequence. Training on sequence B before sequence A doesn't matter because the network's memory state does not persist across sequences.

\n\n
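In numpy terms, this kind of shuffling permutes only the sample axis of a (samples, timesteps, features) array, leaving the ordering inside each sequence intact. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 sequences, 5 timesteps each, 1 feature
X = np.arange(20).reshape(4, 5, 1)

perm = rng.permutation(len(X))  # shuffle the order of the sequences only
X_shuffled = X[perm]

# each individual sequence keeps its original internal order
for seq in X_shuffled:
    assert (np.diff(seq[:, 0]) == 1).all()
```

This is what `shuffle=True` in `model.fit` does for a stateless network.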

On the other hand:

\n\n

Stateful Case:

\n\n

The network's memory persists across sequences. Here, you cannot blindly shuffle your data and expect optimal results. Sequence A should be fed to the network before sequence B because A comes before B, and we want the network to evaluate sequence B with memory of what was in sequence A.

\n", "system": "" }, { "instruction": "How to calculate prediction uncertainty using Keras?", "input": "", "output": "

If you want to implement dropout approach to measure uncertainty you should do the following:

\n\n
    \n
  1. Implement a function which applies dropout also at test time:

    \n\n
    import keras.backend as K\nf = K.function([model.layers[0].input, K.learning_phase()],\n               [model.layers[-1].output])\n
  2. \n
  3. Use this function as an uncertainty predictor, e.g. in the following manner:

    \n\n
    import numpy as np\n\ndef predict_with_uncertainty(f, x, n_iter=10):\n    # if your model's output shape differs from the input shape,\n    # allocate the array with the output shape here instead\n    result = np.zeros((n_iter,) + x.shape)\n\n    for i in range(n_iter):\n        # K.function expects a list of inputs; learning_phase=1 keeps dropout active\n        result[i] = f([x, 1])[0]\n\n    prediction = result.mean(axis=0)\n    uncertainty = result.var(axis=0)\n    return prediction, uncertainty\n
  4. \n
\n\n
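The idea behind the two steps above can be illustrated without Keras: run several stochastic forward passes, then report their mean as the prediction and their variance as the uncertainty. Here the dropout network is faked with a numpy noise function, so this is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_forward_pass(x):
    # stand-in for a network with dropout active at test time:
    # a deterministic signal plus pass-dependent noise
    return 2.0 * x + rng.normal(scale=0.1, size=x.shape)

def predict_with_uncertainty(x, n_iter=100):
    # collect n_iter stochastic predictions, then summarize them
    results = np.stack([stochastic_forward_pass(x) for _ in range(n_iter)])
    return results.mean(axis=0), results.var(axis=0)

x = np.array([1.0, 2.0, 3.0])
prediction, uncertainty = predict_with_uncertainty(x)
print(prediction.shape, uncertainty.shape)  # (3,) (3,)
```

The prediction converges toward the deterministic signal as n_iter grows, while the variance reflects the spread of the stochastic passes.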

Of course, you may use any other function to compute the uncertainty.

\n", "system": "" }, { "instruction": "Make a deep copy of a keras model in python", "input": "", "output": "

The issue is that model_copy is probably not compiled after cloning. There are in fact a few issues:

\n\n
    \n
  1. Apparently cloning doesn't copy over the loss function, optimizer info, etc.

  2. \n
  3. Before compiling you need to also build the model.

  4. \n
  5. Moreover, cloning doesn't copy the weights over either.

  6. \n
\n\n

So you need a couple extra lines after cloning. For example, for 10 input variables:

\n\n
model_copy = keras.models.clone_model(model1)\nmodel_copy.build((None, 10)) # replace 10 with number of variables in input layer\nmodel_copy.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nmodel_copy.set_weights(model1.get_weights())\n\n
\n\n
\n\n

Easier Method 1: Loading weights from file

\n\n

If I understand your question correctly, there is an easier way to do this. You don't need to clone the model; you just need to save the old weights and set them at the beginning of the loop. You can simply load weights from file as you are doing.

\n\n
for _ in range(10):\n    model1= create_Model()\n    model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')\n    model1.load_weights('my_weights')\n\n    for j in range(0, image_size):\n          model1.fit(sample[j], sample_lbl[j])\n          prediction= model1.predict(sample[j])\n
\n\n
\n\n

Easier Method 2: Loading weights from previous get_weights()

\n\n

Or if you prefer not to load from file:

\n\n
model1= create_Model()\nmodel1.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nmodel1.load_weights('my_weights')\nold_weights = model1.get_weights()\n\nfor _ in range(10):\n    model1.set_weights(old_weights)\n    for j in range(0, image_size):\n          model1.fit(sample[j], sample_lbl[j])\n          prediction= model1.predict(sample[j])\n
\n", "system": "" }, { "instruction": "Get Confusion Matrix From a Keras Multiclass Model", "input": "", "output": "

Your input to confusion_matrix must be an array of ints, not one-hot encodings.

\n\n
matrix = metrics.confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))\n
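To see what the argmax calls are doing, here is the same computation written out by hand in numpy with made-up one-hot data (no sklearn required):

```python
import numpy as np

# one-hot ground truth and predicted probabilities for 3 classes
y_test = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.6, 0.3, 0.1]])  # last sample misclassified as class 0

true_labels = y_test.argmax(axis=1)  # [0, 1, 2, 1]
pred_labels = y_pred.argmax(axis=1)  # [0, 1, 2, 0]

# count (true, predicted) pairs: rows are true classes, columns predicted
n_classes = y_test.shape[1]
matrix = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(true_labels, pred_labels):
    matrix[t, p] += 1

print(matrix)
# [[1 0 0]
#  [1 1 0]
#  [0 0 1]]
```

sklearn's confusion_matrix does the same counting once both arguments are integer class labels.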
\n", "system": "" }, { "instruction": "Restore original text from Keras\u2019s imdb dataset", "input": "", "output": "

Your example is coming out as gibberish; it's much worse than just some missing stop words.

\n\n

If you re-read the docs for the start_char, oov_char, and index_from parameters of the [keras.datasets.imdb.load_data](https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification) method, they explain what is happening:

\n\n

start_char: int. The start of a sequence will be marked with this character. Set to 1 because 0 is usually the padding character.

\n\n

oov_char: int. words that were cut out because of the num_words or skip_top limit will be replaced with this character.

\n\n

index_from: int. Index actual words with this index and higher.

\n\n

That dictionary you inverted assumes the word indices start from 1.

\n\n

But the indices returned by keras have <START> and <UNKNOWN> as indexes 1 and 2. (And it assumes you will use 0 for <PADDING>.)

\n\n

This works for me:

\n\n
import keras\nNUM_WORDS=1000 # only use top 1000 words\nINDEX_FROM=3   # word index offset\n\ntrain,test = keras.datasets.imdb.load_data(num_words=NUM_WORDS, index_from=INDEX_FROM)\ntrain_x,train_y = train\ntest_x,test_y = test\n\nword_to_id = keras.datasets.imdb.get_word_index()\nword_to_id = {k:(v+INDEX_FROM) for k,v in word_to_id.items()}\nword_to_id[\"<PAD>\"] = 0\nword_to_id[\"<START>\"] = 1\nword_to_id[\"<UNK>\"] = 2\nword_to_id[\"<UNUSED>\"] = 3\n\nid_to_word = {value:key for key,value in word_to_id.items()}\nprint(' '.join(id_to_word[id] for id in train_x[0] ))\n
\n\n

The punctuation is missing, but that's all:

\n\n
\"<START> this film was just brilliant casting <UNK> <UNK> story\n direction <UNK> really <UNK> the part they played and you could just\n imagine being there robert <UNK> is an amazing actor ...\"\n
\n", "system": "" }, { "instruction": "Get learning rate of keras model", "input": "", "output": "

Use eval() from keras.backend:

\n\n
import keras.backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nmodel = Sequential()\nmodel.add(Dense(1, input_shape=(1,)))\nmodel.add(Dense(1))\nmodel.compile(loss='mse', optimizer='adam')\n\nprint(K.eval(model.optimizer.lr))\n
\n\n

Output:

\n\n
0.001\n
\n", "system": "" }, { "instruction": "Keras: model.predict for a single image", "input": "", "output": "

Since you trained your model on mini-batches, your input is a tensor of shape [batch_size, image_width, image_height, number_of_channels].

\n\n

When predicting, you have to respect this shape even if you have only one image. Your input should be of shape: [1, image_width, image_height, number_of_channels].

\n\n

You can do this in numpy easily. Let's say you have a single 5x5x3 image:

\n\n
    >>> x = np.random.randint(0, 10, (5, 5, 3))\n    >>> x.shape\n    (5, 5, 3)\n    >>> x = np.expand_dims(x, axis=0)\n    >>> x.shape\n    (1, 5, 5, 3)\n
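Equivalently, the batch axis can be added with indexing; both forms below match np.expand_dims(x, axis=0):

```python
import numpy as np

x = np.random.randint(0, 10, (5, 5, 3))

# np.newaxis and None are interchangeable here
a = x[np.newaxis, ...]
b = x[None, ...]

print(a.shape, b.shape)  # (1, 5, 5, 3) (1, 5, 5, 3)
```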
\n\n

Now x is a rank 4 tensor!

\n", "system": "" }, { "instruction": "Keras: Binary_crossentropy has negative values", "input": "", "output": "

I should have printed out my response variable. The categories were labelled as 1 and 2 instead of 0 and 1, which confused the classifier.

\n", "system": "" }, { "instruction": "Keras: weighted binary crossentropy", "input": "", "output": "

You can use the sklearn module to automatically calculate the weights for each class like this:

\n\n
# Import\nimport numpy as np\nfrom sklearn.utils import class_weight\n\n# Example model\nmodel = Sequential()\nmodel.add(Dense(32, activation='relu', input_dim=100))\nmodel.add(Dense(1, activation='sigmoid'))\n\n# Use binary crossentropy loss\nmodel.compile(optimizer='rmsprop',\n              loss='binary_crossentropy',\n              metrics=['accuracy'])\n\n# Calculate the weights for each class so that we can balance the data\nweights = class_weight.compute_class_weight('balanced',\n                                            np.unique(y_train),\n                                            y_train)\n\n# Add the class weights to the training                                         \nmodel.fit(x_train, y_train, epochs=10, batch_size=32, class_weight=weights)\n
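For reference, the 'balanced' heuristic is simply n_samples / (n_classes * count_per_class). A numpy sketch with a made-up 90/10 label vector, which you can compare against the output of class_weight.compute_class_weight:

```python
import numpy as np

y_train = np.array([0] * 90 + [1] * 10)  # 90/10 class imbalance

classes, counts = np.unique(y_train, return_counts=True)
# 'balanced' weighting: n_samples / (n_classes * count_per_class)
weights = len(y_train) / (len(classes) * counts)

print(dict(zip(classes.tolist(), weights.tolist())))
```

The rarer class gets the larger weight, so its errors contribute proportionally more to the loss.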
\n\n

Note that the output of class_weight.compute_class_weight() is a numpy array like this: [2.57569845 0.68250928].

\n", "system": "" }, { "instruction": "How to save model.summary() to file in Keras?", "input": "", "output": "

If you want to keep the formatting of summary(), you can pass a print function to model.summary() and write to file that way:

\n
def myprint(s):\n    with open('modelsummary.txt','a') as f:\n        print(s, file=f)\n\nmodel.summary(print_fn=myprint)\n
\n

Alternatively, you can serialize it to a json or yaml string with model.to_json() or model.to_yaml() which can be imported back later.

\n

Edit

\n

A more pythonic way to do this in Python 3.4+ is to use contextlib.redirect_stdout:

\n
from contextlib import redirect_stdout\n\nwith open('modelsummary.txt', 'w') as f:\n    with redirect_stdout(f):\n        model.summary()\n
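The same redirect_stdout trick also works with an in-memory buffer if you want the summary in a string rather than a file; a stand-in print replaces model.summary() here so the sketch stays self-contained:

```python
import io
from contextlib import redirect_stdout

buffer = io.StringIO()
with redirect_stdout(buffer):
    # stand-in for model.summary(), which prints to stdout
    print("Layer (type)  Output Shape  Param #")

summary_text = buffer.getvalue()
print(repr(summary_text))
```

From there you can log summary_text, embed it in a report, or write it to a file yourself.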
\n", "system": "" }, { "instruction": "ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4", "input": "", "output": "

I solved the problem by making

\n\n
\n

input size: (95000,360,1) and\n output size: (95000,22)

\n
\n\n

and changed the input shape to (360,1) in the code where model is defined:

\n\n
model = Sequential()\nmodel.add(LSTM(22, input_shape=(360,1)))\nmodel.add(Dense(22, activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nprint(model.summary())\nmodel.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)\n
\n", "system": "" }, { "instruction": "balancing an imbalanced dataset with keras image generator", "input": "", "output": "

This would not be a standard approach to deal with unbalanced data. Nor do I think it would be really justified - you would be significantly changing the distributions of your classes, where the smaller class is now much less variable. The larger class would have rich variation, the smaller would be many similar images with small affine transforms. They would live on a much smaller region in image space than the majority class.

\n\n

The more standard approaches would be:

\n\n\n\n

The first two options are really kind of hacks, which may harm your ability to cope with real world (imbalanced) data. Neither really solves the problem of low variability, which is inherent in having too little data. If application to a real world dataset after model training isn't a concern and you just want good results on the data you have, then these options are fine (and much easier than making generators for a single class).

\n\n

The third option is the right way to go if you have enough data (as an example, the recent paper from Google about detecting diabetic retinopathy achieved high accuracy in a dataset where positive cases were between 10% and 30%).

\n\n

If you truly want to generate a variety of augmented images for one class over another, it would probably be easiest to do it in pre-processing. Take the images of the minority class and generate some augmented versions, and just call it all part of your data. Like I say, this is all pretty hacky.

\n", "system": "" }, { "instruction": "Neural Network LSTM input shape from dataframe", "input": "", "output": "

Below is an example that sets up time series data to train an LSTM. The model output is nonsense as I only set it up to demonstrate how to build the model.

\n\n
import pandas as pd\nimport numpy as np\n# Get some time series data\ndf = pd.read_csv(\"https://raw.githubusercontent.com/plotly/datasets/master/timeseries.csv\")\ndf.head()\n
\n\n

Time series dataframe:

\n\n
Date      A       B       C      D      E      F      G\n0   2008-03-18  24.68  164.93  114.73  26.27  19.21  28.87  63.44\n1   2008-03-19  24.18  164.89  114.75  26.22  19.07  27.76  59.98\n2   2008-03-20  23.99  164.63  115.04  25.78  19.01  27.04  59.61\n3   2008-03-25  24.14  163.92  114.85  27.41  19.61  27.84  59.41\n4   2008-03-26  24.44  163.45  114.84  26.86  19.53  28.02  60.09\n
\n\n

You can put your inputs into a single vector and then use pandas' .cumsum() function to build the sequence for the time series:

\n\n
# Put your inputs into a single list\ndf['single_input_vector'] = df[input_cols].apply(tuple, axis=1).apply(list)\n# Double-encapsulate list so that you can sum it in the next step and keep time steps as separate elements\ndf['single_input_vector'] = df.single_input_vector.apply(lambda x: [list(x)])\n# Use .cumsum() to include previous row vectors in the current row list of vectors\ndf['cumulative_input_vectors'] = df.single_input_vector.cumsum()\n
\n\n

The output can be set up in a similar way, but it will be a single vector instead of a sequence:

\n\n
# If your output is multi-dimensional, you need to capture those dimensions in one object\n# If your output is a single dimension, this step may be unnecessary\ndf['output_vector'] = df[output_cols].apply(tuple, axis=1).apply(list)\n
\n\n

The input sequences have to be the same length to run them through the model, so you need to pad them to be the max length of your cumulative vectors:

\n\n
# Pad your sequences so they are the same length\nfrom keras.preprocessing.sequence import pad_sequences\n\nmax_sequence_length = df.cumulative_input_vectors.apply(len).max()\n# Save it as a list   \npadded_sequences = pad_sequences(df.cumulative_input_vectors.tolist(), max_sequence_length).tolist()\ndf['padded_input_vectors'] = pd.Series(padded_sequences).apply(np.asarray)\n
\n\n

Training data can be pulled from the dataframe and put into numpy arrays. Note that the input data that comes out of the dataframe will not make a 3D array. It makes an array of arrays, which is not the same thing.

\n\n

You can use hstack and reshape to build a 3D input array.

\n\n
# Extract your training data\nX_train_init = np.asarray(df.padded_input_vectors)\n# Use hstack to and reshape to make the inputs a 3d vector\nX_train = np.hstack(X_train_init).reshape(len(df),max_sequence_length,len(input_cols))\ny_train = np.hstack(np.asarray(df.output_vector)).reshape(len(df),len(output_cols))\n
\n\n

To prove it:

\n\n
>>> print(X_train_init.shape)\n(11,)\n>>> print(X_train.shape)\n(11, 11, 6)\n>>> print(X_train == X_train_init)\nFalse\n
\n\n

Once you have training data you can define the dimensions of your input layer and output layers.

\n\n
# Get your input dimensions\n# Input length is the length for one input sequence (i.e. the number of rows for your sample)\n# Input dim is the number of dimensions in one input vector (i.e. number of input columns)\ninput_length = X_train.shape[1]\ninput_dim = X_train.shape[2]\n# Output dimensions is the shape of a single output vector\n# In this case it's just 1, but it could be more\noutput_dim = len(y_train[0])\n
\n\n

Build the model:

\n\n
from keras.models import Model, Sequential\nfrom keras.layers import LSTM, Dense\n\n# Build the model\nmodel = Sequential()\n\n# I arbitrarily picked the output dimensions as 4\nmodel.add(LSTM(4, input_dim = input_dim, input_length = input_length))\n# The max output value is > 1 so relu is used as final activation.\nmodel.add(Dense(output_dim, activation='relu'))\n\nmodel.compile(loss='mean_squared_error',\n              optimizer='sgd',\n              metrics=['accuracy'])\n
\n\n

Finally you can train the model and save the training log as history:

\n\n
# Set batch_size to 7 to show that it doesn't have to be a factor or multiple of your sample size\nhistory = model.fit(X_train, y_train,\n              batch_size=7, nb_epoch=3,\n              verbose = 1)\n
\n\n

Output:

\n\n
Epoch 1/3\n11/11 [==============================] - 0s - loss: 3498.5756 - acc: 0.0000e+00     \nEpoch 2/3\n11/11 [==============================] - 0s - loss: 3498.5755 - acc: 0.0000e+00     \nEpoch 3/3\n11/11 [==============================] - 0s - loss: 3498.5757 - acc: 0.0000e+00 \n
\n\n

That's it. Use model.predict(X) where X is the same format (other than the number of samples) as X_train in order to make predictions from the model.

\n", "system": "" }, { "instruction": "keras BatchNormalization axis clarification", "input": "", "output": "

The confusion is due to the meaning of axis in np.mean versus in BatchNormalization.

\n\n

When we take the mean along an axis, we collapse that dimension and preserve all other dimensions. In your example data.mean(axis=0) collapses the 0-axis, which is the vertical dimension of data.

\n\n

When we compute a BatchNormalization along an axis, we preserve the dimensions of the array, and we normalize with respect to the mean and standard deviation over every other axis. So in your 2D example BatchNormalization with axis=1 is subtracting the mean for axis=0, just as you expect. This is why bn.moving_mean has shape (4,).

\n", "system": "" }, { "instruction": "How do I mask a loss function in Keras with the TensorFlow backend?", "input": "", "output": "

If there's a mask in your model, it'll be propagated layer-by-layer and eventually applied to the loss. So if you're padding and masking the sequences in a correct way, the loss on the padding placeholders would be ignored.

\n

Some Details:

\n

It's a bit involved to explain the whole process, so I'll just break it down to several steps:

\n
    \n
  1. In compile(), the mask is collected by calling compute_mask() and applied to the loss(es) (irrelevant lines are ignored for clarity).
  2. \n
\n
weighted_losses = [_weighted_masked_objective(fn) for fn in loss_functions]\n\n# Prepare output masks.\nmasks = self.compute_mask(self.inputs, mask=None)\nif masks is None:\n    masks = [None for _ in self.outputs]\nif not isinstance(masks, list):\n    masks = [masks]\n\n# Compute total loss.\ntotal_loss = None\nwith K.name_scope('loss'):\n    for i in range(len(self.outputs)):\n        y_true = self.targets[i]\n        y_pred = self.outputs[i]\n        weighted_loss = weighted_losses[i]\n        sample_weight = sample_weights[i]\n        mask = masks[i]\n        with K.name_scope(self.output_names[i] + '_loss'):\n            output_loss = weighted_loss(y_true, y_pred,\n                                        sample_weight, mask)\n
\n
    \n
  1. Inside Model.compute_mask(), run_internal_graph() is called.
  2. \n
  3. Inside run_internal_graph(), the masks in the model is propagated layer-by-layer from the model's inputs to outputs by calling Layer.compute_mask() for each layer iteratively.
  4. \n
\n

So if you're using a Masking layer in your model, you shouldn't worry about the loss on the padding placeholders. The loss on those entries will be masked out as you've probably already seen inside _weighted_masked_objective().

\n

A Small Example:

\n
max_sentence_length = 5\ncharacter_number = 2\n\ninput_tensor = Input(shape=(max_sentence_length, character_number))\nmasked_input = Masking(mask_value=0)(input_tensor)\noutput = LSTM(3, return_sequences=True)(masked_input)\nmodel = Model(input_tensor, output)\nmodel.compile(loss='mae', optimizer='adam')\n\nX = np.array([[[0, 0], [0, 0], [1, 0], [0, 1], [0, 1]],\n              [[0, 0], [0, 1], [1, 0], [0, 1], [0, 1]]])\ny_true = np.ones((2, max_sentence_length, 3))\ny_pred = model.predict(X)\nprint(y_pred)\n[[[ 0.          0.          0.        ]\n  [ 0.          0.          0.        ]\n  [-0.11980877  0.05803877  0.07880752]\n  [-0.00429189  0.13382857  0.19167568]\n  [ 0.06817091  0.19093043  0.26219055]]\n\n [[ 0.          0.          0.        ]\n  [ 0.0651961   0.10283815  0.12413475]\n  [-0.04420842  0.137494    0.13727818]\n  [ 0.04479844  0.17440712  0.24715884]\n  [ 0.11117355  0.21645413  0.30220413]]]\n\n# See if the loss computed by model.evaluate() is equal to the masked loss\nunmasked_loss = np.abs(1 - y_pred).mean()\nmasked_loss = np.abs(1 - y_pred[y_pred != 0]).mean()\n\nprint(model.evaluate(X, y_true))\n0.881977558136\n\nprint(masked_loss)\n0.881978\n\nprint(unmasked_loss)\n0.917384\n
\n

As can be seen from this example, the loss on the masked part (the zeroes in y_pred) is ignored, and the output of model.evaluate() is equal to masked_loss.

\n
\n

EDIT:

\n

If there's a recurrent layer with return_sequences=False, the mask stops propagating (i.e., the returned mask is None). In RNN.compute_mask():

\n
def compute_mask(self, inputs, mask):\n    if isinstance(mask, list):\n        mask = mask[0]\n    output_mask = mask if self.return_sequences else None\n    if self.return_state:\n        state_mask = [None for _ in self.states]\n        return [output_mask] + state_mask\n    else:\n        return output_mask\n
\n

In your case, if I understand correctly, you want a mask that's based on y_true, and whenever the value of y_true is [0, 0, 1] (the one-hot encoding of "#") you want the loss to be masked. If so, you need to mask the loss values in a somewhat similar way to Daniel's answer.

\n

The main difference is the final average. The average should be taken over the number of unmasked values, which is just K.sum(mask). And also, y_true can be compared to the one-hot encoded vector [0, 0, 1] directly.

\n
def get_loss(mask_value):\n    mask_value = K.variable(mask_value)\n    def masked_categorical_crossentropy(y_true, y_pred):\n        # find out which timesteps in `y_true` are not the padding character '#'\n        mask = K.all(K.equal(y_true, mask_value), axis=-1)\n        mask = 1 - K.cast(mask, K.floatx())\n\n        # multiply categorical_crossentropy with the mask\n        loss = K.categorical_crossentropy(y_true, y_pred) * mask\n\n        # take average w.r.t. the number of unmasked entries\n        return K.sum(loss) / K.sum(mask)\n    return masked_categorical_crossentropy\n\nmasked_categorical_crossentropy = get_loss(np.array([0, 0, 1]))\nmodel = Model(input_tensor, output)\nmodel.compile(loss=masked_categorical_crossentropy, optimizer='adam')\n
\n

The output of the above code then shows that the loss is computed only on the unmasked values:

\n
model.evaluate: 1.08339476585\ntf unmasked_loss: 1.08989\ntf masked_loss: 1.08339\n
\n

The value is different from yours because I've changed the axis argument in tf.reverse from [0,1] to [1].

\n", "system": "" }, { "instruction": "Keras rename model and layers", "input": "", "output": "

For changing names of model.layers with tf.keras you can use the following lines:

\n\n
for layer in model.layers:\n    layer._name = layer.name + str(\"_2\")\n
\n\n

I needed this in a two-input model case and ran into the \"AttributeError: can't set attribute\", too. The thing is that there is an underlying hidden attribute _name, which causes the conflict.

\n", "system": "" }, { "instruction": "How to work with multiple inputs for LSTM in Keras?", "input": "", "output": "

Change

\n\n
a = dataset[i:(i + look_back), 0]\n
\n\n

To

\n\n
a = dataset[i:(i + look_back), :]\n
\n\n

If you want the 3 features in your training data.

\n\n

Then use

\n\n
model.add(LSTM(4, input_shape=(look_back,3)))\n
\n\n

To specify that you have look_back time steps in your sequence, each with 3 features.

\n\n

It should run

\n\n

EDIT :

\n\n

Indeed, sklearn.preprocessing.MinMaxScaler's inverse_transform() method takes an input with the same shape as the data it was fitted on. So you need to do something like this:

\n\n
# Get something which has as many features as dataset\ntrainPredict_extended = np.zeros((len(trainPredict),3))\n# Put the predictions there\ntrainPredict_extended[:,2] = trainPredict\n# Inverse transform it and select the 3rd column.\ntrainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]\n
\n\n

You may run into other issues like this further down in your code, but nothing that you can't fix :) The ML part is fixed and you know where the error comes from. Just check the shapes of your objects and try to make them match.

\n", "system": "" }, { "instruction": "Understanding Keras LSTMs: Role of Batch-size and Statefulness", "input": "", "output": "

Let me explain it via an example:

\n\n

So let's say you have the following series: 1,2,3,4,5,6,...,100. You have to decide how many timesteps your lstm will learn, and reshape your data accordingly. Like below:

\n\n

If you decide time_steps = 5, you have to reshape your time series as a matrix of samples in this way:

\n\n
\n

1,2,3,4,5 -> sample1

\n \n

2,3,4,5,6 -> sample2

\n \n

3,4,5,6,7 -> sample3

\n \n

etc...

\n
\n\n

By doing so, you will end up with a matrix of shape (96 samples x 5 timesteps).

\n\n

This matrix should be reshaped as (96 x 5 x 1), indicating to Keras that you have just 1 time series. If you have more time series in parallel (as in your case), you do the same operation on each time series, so you will end up with n matrices (one for each time series), each of shape (96 samples x 5 timesteps).

\n\n

For the sake of argument, let's say you have 3 time series. You should concatenate all three matrices into one single tensor of shape (96 samples x 5 timesteps x 3 time series). The first layer of your lstm for this example would be:

\n\n
    model = Sequential()\n    model.add(LSTM(32, input_shape=(5, 3)))\n
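The reshaping described above can be written out in numpy: slide a length-5 window over each series, then stack the three (96 x 5) matrices along a last axis:

```python
import numpy as np

def make_windows(series, time_steps=5):
    # rows: sample1 = series[0:5], sample2 = series[1:6], ...
    return np.array([series[i:i + time_steps]
                     for i in range(len(series) - time_steps + 1)])

s1 = np.arange(1, 101)      # 1, 2, ..., 100
s2 = np.arange(2, 201, 2)   # 2, 4, ..., 200
s3 = np.arange(3, 301, 3)   # 3, 6, ..., 300

# each series becomes a (96 samples x 5 timesteps) matrix
m1, m2, m3 = (make_windows(s) for s in (s1, s2, s3))

# stack into (96 samples x 5 timesteps x 3 series) for the LSTM
X = np.stack([m1, m2, m3], axis=-1)
print(X.shape)  # (96, 5, 3)
```

This X has exactly the input_shape=(5, 3) per sample that the model above expects.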
\n\n

The 32 as first parameter is totally up to you. It means that at each point in time, your 3 time series will become 32 different variables in the output space. It is easier to think of each time step as a fully connected layer with 3 inputs and 32 outputs, but with a different computation than FC layers.

\n\n

If you are stacking multiple lstm layers, use the return_sequences=True parameter, so the layer will output the whole predicted sequence rather than just the last value.

\n\n

Your target should be the next value in the series you want to predict.

\n\n

Putting it all together, let's say you have the following time series:

\n\n

Time series 1 (master): 1,2,3,4,5,6,..., 100

\n\n

Time series 2 (support): 2,4,6,8,10,12,..., 200

\n\n

Time series 3 (support): 3,6,9,12,15,18,..., 300

\n\n

Create the input and target tensor

\n\n
\n
x     -> y\n
\n \n

1,2,3,4,5 -> 6

\n \n

2,3,4,5,6 -> 7

\n \n

3,4,5,6,7 -> 8

\n \n

Reformat the rest of the time series the same way, but forget about the target, since you don't want to predict those series.

\n
\n\n

Create your model

\n\n
    model = Sequential()\n    model.add(LSTM(32, input_shape=(5, 3), return_sequences=True)) # Input is shape (5 timesteps x 3 timeseries), output is shape (5 timesteps x 32 variables) because return_sequences  = True\n    model.add(LSTM(8))  # output is shape (1 timesteps x 8 variables) because return_sequences = False\n    model.add(Dense(1, activation='linear')) # output is (1 timestep x 1 output unit on dense layer). It is compare to target variable.\n
\n\n

Compile it and train. A good batch size is 32. Batch size is the size into which your sample matrix is split for faster computation. Just don't use stateful.

\n", "system": "" }, { "instruction": "How does mask_zero in Keras Embedding layer work?", "input": "", "output": "

Actually, setting mask_zero=True for the Embedding layer does not result in returning a zero vector. Rather, the behavior of the Embedding layer would not change and it would return the embedding vector with index zero. You can confirm this by checking the Embedding layer weights (i.e. in the example you mentioned it would be m.layers[0].get_weights()). Instead, it would affect the behavior of the following layers such as RNN layers.

\n\n

If you inspect the source code of Embedding layer you would see a method called compute_mask:

\n\n
def compute_mask(self, inputs, mask=None):\n    if not self.mask_zero:\n        return None\n    output_mask = K.not_equal(inputs, 0)\n    return output_mask\n
\n\n

This output mask will be passed, as the mask argument, to the following layers which support masking. This has been implemented in the __call__ method of base layer, Layer:

\n\n
# Handle mask propagation.\nprevious_mask = _collect_previous_mask(inputs)\nuser_kwargs = copy.copy(kwargs)\nif not is_all_none(previous_mask):\n    # The previous layer generated a mask.\n    if has_arg(self.call, 'mask'):\n        if 'mask' not in kwargs:\n            # If mask is explicitly passed to __call__,\n            # we should override the default mask.\n            kwargs['mask'] = previous_mask\n
\n\n

And this makes the following layers ignore (i.e. not consider in their computations) these input steps. Here is a minimal example:

\n\n
data_in = np.array([\n  [1, 0, 2, 0]\n])\n\nx = Input(shape=(4,))\ne = Embedding(5, 5, mask_zero=True)(x)\nrnn = LSTM(3, return_sequences=True)(e)\n\nm = Model(inputs=x, outputs=rnn)\nm.predict(data_in)\n\narray([[[-0.00084503, -0.00413611,  0.00049972],\n        [-0.00084503, -0.00413611,  0.00049972],\n        [-0.00144554, -0.00115775, -0.00293898],\n        [-0.00144554, -0.00115775, -0.00293898]]], dtype=float32)\n
\n\n

As you can see, the outputs of the LSTM layer for the second and fourth timesteps are the same as the outputs of the first and third timesteps, respectively. This means that those timesteps have been masked.

\n\n

Update: The mask will also be considered when computing the loss since the loss functions are internally augmented to support masking using weighted_masked_objective:

\n\n
def weighted_masked_objective(fn):\n    \"\"\"Adds support for masking and sample-weighting to an objective function.\n    It transforms an objective function `fn(y_true, y_pred)`\n    into a sample-weighted, cost-masked objective function\n    `fn(y_true, y_pred, weights, mask)`.\n    # Arguments\n        fn: The objective function to wrap,\n            with signature `fn(y_true, y_pred)`.\n    # Returns\n        A function with signature `fn(y_true, y_pred, weights, mask)`.\n    \"\"\"\n
\n\n

when compiling the model:

\n\n
weighted_losses = [weighted_masked_objective(fn) for fn in loss_functions]\n
\n\n

You can verify this using the following example:

\n\n
data_in = np.array([[1, 2, 0, 0]])\ndata_out = np.arange(12).reshape(1,4,3)\n\nx = Input(shape=(4,))\ne = Embedding(5, 5, mask_zero=True)(x)\nd = Dense(3)(e)\n\nm = Model(inputs=x, outputs=d)\nm.compile(loss='mse', optimizer='adam')\npreds = m.predict(data_in)\nloss = m.evaluate(data_in, data_out, verbose=0)\nprint(preds)\nprint('Computed Loss:', loss)\n\n[[[ 0.009682    0.02505393 -0.00632722]\n  [ 0.01756451  0.05928303  0.0153951 ]\n  [-0.00146054 -0.02064196 -0.04356086]\n  [-0.00146054 -0.02064196 -0.04356086]]]\nComputed Loss: 9.041069030761719\n\n# verify that only the first two outputs \n# have been considered in the computation of loss\nprint(np.square(preds[0,0:2] - data_out[0,0:2]).mean())\n\n9.041070036475277\n
\n", "system": "" }, { "instruction": "ResNet: 100% accuracy during training, but 33% prediction accuracy with the same data", "input": "", "output": "\n\n

It's because of the batch normalization layers.

\n\n

In the training phase, the batch is normalized w.r.t. its mean and variance. However, in the testing phase, the batch is normalized w.r.t. the moving average of the previously observed means and variances.

\n\n

Now this is a problem when the number of observed batches is small (e.g., 5 in your example) because in the BatchNormalization layer, by default moving_mean is initialized to be 0 and moving_variance is initialized to be 1.

\n\n

Given also that the default momentum is 0.99, you'll need to update the moving averages quite a lot of times before they converge to the \"real\" mean and variance.

\n\n

That's why the prediction is wrong in the early stage, but is correct after 1000 epochs.

\n\n
\n\n

You can verify it by forcing the BatchNormalization layers to operate in \"training mode\".

\n\n

During training, the accuracy is 1 and the loss is close to zero:

\n\n
model.fit(imgs,y,epochs=5,shuffle=True)\nEpoch 1/5\n3/3 [==============================] - 19s 6s/step - loss: 1.4624 - acc: 0.3333\nEpoch 2/5\n3/3 [==============================] - 0s 63ms/step - loss: 0.6051 - acc: 0.6667\nEpoch 3/5\n3/3 [==============================] - 0s 57ms/step - loss: 0.2168 - acc: 1.0000\nEpoch 4/5\n3/3 [==============================] - 0s 56ms/step - loss: 1.1921e-07 - acc: 1.0000\nEpoch 5/5\n3/3 [==============================] - 0s 53ms/step - loss: 1.1921e-07 - acc: 1.0000\n
\n\n

Now if we evaluate the model, we'll observe high loss and low accuracy because after 5 updates, the moving averages are still pretty close to the initial values:

\n\n
model.evaluate(imgs,y)\n3/3 [==============================] - 3s 890ms/step\n[10.745396614074707, 0.3333333432674408]\n
\n\n

However, if we manually specify the \"learning phase\" variable and let the BatchNormalization layers use the \"real\" batch mean and variance, the result becomes the same as what's observed in fit().

\n\n
sample_weights = np.ones(3)\nlearning_phase = 1  # 1 means \"training\"\nins = [imgs, y, sample_weights, learning_phase]\nmodel.test_function(ins)\n[1.192093e-07, 1.0]\n
\n\n
\n\n

It's also possible to verify it by changing the momentum to a smaller value.

\n\n

For example, by adding momentum=0.01 to all the batch norm layers in ResNet50, the prediction after 20 epochs is:

\n\n
model.predict(imgs)\narray([[  1.00000000e+00,   1.34882026e-08,   3.92139575e-22],\n       [  0.00000000e+00,   1.00000000e+00,   0.00000000e+00],\n       [  8.70998792e-06,   5.31159838e-10,   9.99991298e-01]], dtype=float32)\n
\n", "system": "" }, { "instruction": "TimeDistributed(Dense) vs Dense in Keras - Same number of parameters", "input": "", "output": "

TimeDistributedDense applies the same Dense layer to every time step during GRU/LSTM cell unrolling, so the error function will be computed between the predicted label sequence and the actual label sequence (which is normally the requirement for sequence-to-sequence labeling problems).

\n

However, with return_sequences=False, the Dense layer is applied only once, at the last cell. This is normally the case when RNNs are used for classification problems. If return_sequences=True, then the Dense layer is applied to every timestep, just like TimeDistributedDense.

\n

So, as per your models, both are the same; but if you change your second model to return_sequences=False, then Dense will be applied only at the last cell. Try changing it and the model will throw an error, because then the model output would have shape [Batch_size, OutputSize] while Y is still a sequence; it is no longer a sequence-to-sequence problem but a full sequence-to-label problem.

\n
from keras.models import Sequential\nfrom keras.layers import Dense, Activation, TimeDistributed\nfrom keras.layers.recurrent import GRU\nimport numpy as np\n\nInputSize = 15\nMaxLen = 64\nHiddenSize = 16\n\nOutputSize = 8\nn_samples = 1000\n\nmodel1 = Sequential()\nmodel1.add(GRU(HiddenSize, return_sequences=True, input_shape=(MaxLen, InputSize)))\nmodel1.add(TimeDistributed(Dense(OutputSize)))\nmodel1.add(Activation('softmax'))\nmodel1.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\n\nmodel2 = Sequential()\nmodel2.add(GRU(HiddenSize, return_sequences=True, input_shape=(MaxLen, InputSize)))\nmodel2.add(Dense(OutputSize))\nmodel2.add(Activation('softmax'))\nmodel2.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel3 = Sequential()\nmodel3.add(GRU(HiddenSize, return_sequences=False, input_shape=(MaxLen, InputSize)))\nmodel3.add(Dense(OutputSize))\nmodel3.add(Activation('softmax'))\nmodel3.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nX = np.random.random([n_samples,MaxLen,InputSize])\nY1 = np.random.random([n_samples,MaxLen,OutputSize])\nY2 = np.random.random([n_samples, OutputSize])\n\nmodel1.fit(X, Y1, batch_size=128, nb_epoch=1)\nmodel2.fit(X, Y1, batch_size=128, nb_epoch=1)\nmodel3.fit(X, Y2, batch_size=128, nb_epoch=1)\n\nprint(model1.summary())\nprint(model2.summary())\nprint(model3.summary())\n
\n

In the above example, the architectures of model1 and model2 are the same (sequence-to-sequence models), while model3 is a full sequence-to-label model.

\n", "system": "" }, { "instruction": "AttributeError: 'module' object has no attribute 'computation'", "input": "", "output": "

Updating dask to 0.15.0 will solve the issue.

\n\n

Update command: conda update dask

\n\n

Running pip show dask will then show the following message:

\n\n
Name: dask\nVersion: 0.15.0\nSummary: Parallel PyData with Task Scheduling\nHome-page: http://github.com/dask/dask/\nAuthor: Matthew Rocklin\nAuthor-email: mrocklin@gmail.com\nLicense: BSD\nLocation: c:\\anaconda3\\lib\\site-packages\nRequires:\n
\n", "system": "" }, { "instruction": "Why does Keras LSTM batch size used for prediction have to be the same as fitting batch size?", "input": "", "output": "

Unfortunately, what you want to do is impossible with Keras ... I've also struggled with this problem for a long time, and the only way is to dive into the rabbit hole and work with TensorFlow directly to do LSTM rolling prediction.

\n\n

First, to be clear on terminology: batch_size usually means the number of sequences that are trained together, and num_steps means how many time steps are trained together. When you say batch_size=1 and \"just predicting the next value\", I think you meant predicting with num_steps=1.

\n\n

Otherwise, it should be possible to train and predict with batch_size=50, meaning you are training on 50 sequences and making 50 predictions every time step, one for each sequence (with training/prediction num_steps=1).

\n\n

However, I think what you mean is that you want to use a stateful LSTM to train with num_steps=50 and do prediction with num_steps=1. Theoretically this makes sense and should be possible, and it is possible with TensorFlow, just not with Keras.

\n\n

The problem: Keras requires an explicit batch size for stateful RNN. You must specify batch_input_shape (batch_size, num_steps, features).

\n\n

The reason: Keras must allocate a fixed-size hidden state vector in the computation graph with shape (batch_size, num_units) in order to persist the values between training batches. On the other hand, when stateful=False, the hidden state vector can be initialized dynamically with zeroes at the beginning of each batch so it does not need to be a fixed size. More details here: http://philipperemy.github.io/keras-stateful-lstm/

\n\n

Possible work around: Train and predict with num_steps=1. Example: https://github.com/keras-team/keras/blob/master/examples/lstm_stateful.py. This might or might not work at all for your problem as the gradient for back propagation will be computed on only one time step. See: https://github.com/fchollet/keras/issues/3669

\n\n

My solution: use TensorFlow: in TensorFlow you can train with batch_size=50, num_steps=100, then do predictions with batch_size=1, num_steps=1. This is possible by creating a different model graph for training and prediction sharing the same RNN weight matrices. See this example for next-character prediction: https://github.com/sherjilozair/char-rnn-tensorflow/blob/master/model.py#L11 and blog post http://karpathy.github.io/2015/05/21/rnn-effectiveness/. Note that one graph can still only work with one specified batch_size, but you can set up multiple model graphs sharing weights in TensorFlow.

\n", "system": "" }, { "instruction": "Keras: class weights (class_weight) for one-hot encoding", "input": "", "output": "

Here's a solution that's a bit shorter and faster. If your one-hot encoded y is a np.array:

\n\n
import numpy as np\nfrom sklearn.utils.class_weight import compute_class_weight\n\ny_integers = np.argmax(y, axis=1)\nclass_weights = compute_class_weight('balanced', np.unique(y_integers), y_integers)\nd_class_weights = dict(enumerate(class_weights))\n
\n\n

d_class_weights can then be passed to class_weight in .fit.
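A small end-to-end sketch (the one-hot labels y are made up for illustration, and the final fit call is commented out since it depends on your model; note that newer scikit-learn versions require the keyword arguments shown):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# hypothetical one-hot labels for a 3-class problem (class 0 appears twice)
y = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

y_integers = np.argmax(y, axis=1)
class_weights = compute_class_weight('balanced',
                                     classes=np.unique(y_integers),
                                     y=y_integers)
d_class_weights = dict(enumerate(class_weights))
print(d_class_weights)  # approximately {0: 0.667, 1: 1.333, 2: 1.333}

# model.fit(X, y, class_weight=d_class_weights)
```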

\n", "system": "" }, { "instruction": "How to delete a locally uploaded file on google colab?", "input": "", "output": "

The answer from @Korakot works for a single file. In case you want to delete an entire folder, its subfolders, or multiple files,

\n\n

use

\n\n

!rm -rf <folder_name>
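If you prefer staying in Python rather than shelling out, a sketch using the standard library (the folder name here is hypothetical):

```python
import shutil

# recursively delete a folder and everything inside it;
# ignore_errors=True avoids an exception if it doesn't exist
shutil.rmtree('folder_name', ignore_errors=True)
```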

\n", "system": "" }, { "instruction": "What's the difference between "samples_per_epoch" and "steps_per_epoch" in fit_generator", "input": "", "output": "

When you use fit_generator, the number of samples processed for each epoch is batch_size * steps_per_epochs. From the Keras documentation for fit_generator: https://keras.io/models/sequential/

\n\n
\n

steps_per_epoch: Total number of steps (batches of samples) to yield from generator before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of unique samples of your dataset divided by the batch size.

\n
\n\n

This is different from the behaviour of 'fit', where increasing batch_size typically speeds up things.

\n\n

In conclusion, when you increase batch_size with fit_generator, you should decrease steps_per_epoch by the same factor if you want the training time to stay the same or lower.
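A quick sketch of the arithmetic (the dataset size is made up):

```python
import math

n_samples = 50000                      # hypothetical dataset size

# batch_size * steps_per_epoch should cover the dataset once per epoch
steps_32 = math.ceil(n_samples / 32)   # 1563
steps_64 = math.ceil(n_samples / 64)   # 782

# doubling batch_size roughly halves steps_per_epoch
print(steps_32, steps_64)
```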

\n", "system": "" }, { "instruction": "Add dropout layers between pretrained dense layers in keras", "input": "", "output": "

I found an answer myself by using the Keras functional API.

\n\n
from keras.applications import VGG16\nfrom keras.layers import Dropout\nfrom keras.models import Model\n\nmodel = VGG16(weights='imagenet')\n\n# Store the fully connected layers\nfc1 = model.layers[-3]\nfc2 = model.layers[-2]\npredictions = model.layers[-1]\n\n# Create the dropout layers\ndropout1 = Dropout(0.85)\ndropout2 = Dropout(0.85)\n\n# Reconnect the layers\nx = dropout1(fc1.output)\nx = fc2(x)\nx = dropout2(x)\npredictors = predictions(x)\n\n# Create a new model\nmodel2 = Model(input=model.input, output=predictors)\n
\n\n

model2 has the dropout layers as I wanted

\n\n
____________________________________________________________________________________________________\nLayer (type)                     Output Shape          Param #     Connected to                     \n====================================================================================================\ninput_1 (InputLayer)             (None, 3, 224, 224)   0                                            \n____________________________________________________________________________________________________\nblock1_conv1 (Convolution2D)     (None, 64, 224, 224)  1792        input_1[0][0]                    \n____________________________________________________________________________________________________\nblock1_conv2 (Convolution2D)     (None, 64, 224, 224)  36928       block1_conv1[0][0]               \n____________________________________________________________________________________________________\nblock1_pool (MaxPooling2D)       (None, 64, 112, 112)  0           block1_conv2[0][0]               \n____________________________________________________________________________________________________\nblock2_conv1 (Convolution2D)     (None, 128, 112, 112) 73856       block1_pool[0][0]                \n____________________________________________________________________________________________________\nblock2_conv2 (Convolution2D)     (None, 128, 112, 112) 147584      block2_conv1[0][0]               \n____________________________________________________________________________________________________\nblock2_pool (MaxPooling2D)       (None, 128, 56, 56)   0           block2_conv2[0][0]               \n____________________________________________________________________________________________________\nblock3_conv1 (Convolution2D)     (None, 256, 56, 56)   295168      block2_pool[0][0]                \n____________________________________________________________________________________________________\nblock3_conv2 (Convolution2D)     (None, 256, 56, 56)   590080 
     block3_conv1[0][0]               \n____________________________________________________________________________________________________\nblock3_conv3 (Convolution2D)     (None, 256, 56, 56)   590080      block3_conv2[0][0]               \n____________________________________________________________________________________________________\nblock3_pool (MaxPooling2D)       (None, 256, 28, 28)   0           block3_conv3[0][0]               \n____________________________________________________________________________________________________\nblock4_conv1 (Convolution2D)     (None, 512, 28, 28)   1180160     block3_pool[0][0]                \n____________________________________________________________________________________________________\nblock4_conv2 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv1[0][0]               \n____________________________________________________________________________________________________\nblock4_conv3 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv2[0][0]               \n____________________________________________________________________________________________________\nblock4_pool (MaxPooling2D)       (None, 512, 14, 14)   0           block4_conv3[0][0]               \n____________________________________________________________________________________________________\nblock5_conv1 (Convolution2D)     (None, 512, 14, 14)   2359808     block4_pool[0][0]                \n____________________________________________________________________________________________________\nblock5_conv2 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv1[0][0]               \n____________________________________________________________________________________________________\nblock5_conv3 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv2[0][0]               \n____________________________________________________________________________________________________\nblock5_pool 
(MaxPooling2D)       (None, 512, 7, 7)     0           block5_conv3[0][0]               \n____________________________________________________________________________________________________\nflatten (Flatten)                (None, 25088)         0           block5_pool[0][0]                \n____________________________________________________________________________________________________\nfc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]                    \n____________________________________________________________________________________________________\ndropout_1 (Dropout)              (None, 4096)          0           fc1[0][0]                        \n____________________________________________________________________________________________________\nfc2 (Dense)                      (None, 4096)          16781312    dropout_1[0][0]                  \n____________________________________________________________________________________________________\ndropout_2 (Dropout)              (None, 4096)          0           fc2[1][0]                        \n____________________________________________________________________________________________________\npredictions (Dense)              (None, 1000)          4097000     dropout_2[0][0]                  \n====================================================================================================\nTotal params: 138,357,544\nTrainable params: 138,357,544\nNon-trainable params: 0\n____________________________________________________________________________________________________\n
\n", "system": "" }, { "instruction": "Error in "from keras.utils import to_categorical"", "input": "", "output": "

Keras is now fully integrated into TensorFlow, so importing standalone Keras causes an error.

\n

It should be imported as:

\n
from tensorflow.keras.utils import to_categorical\n
\n
\n

Avoid importing as:

\n
from keras.utils import to_categorical\n
\n
\n

It is safe to use\nfrom tensorflow.keras. instead of from keras. while importing all the necessary modules.

\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Conv2D, MaxPooling2D,Dropout\nfrom tensorflow.keras.layers import Dense, Activation, Flatten\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import backend as K \nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras import callbacks\n
\n", "system": "" }, { "instruction": "How to install h5py (needed for Keras) on MacOS with M1?", "input": "", "output": "

This works for me:

\n
$ brew install hdf5\n$ export HDF5_DIR="$(brew --prefix hdf5)"\n$ pip install --no-binary=h5py h5py\n
\n", "system": "" }, { "instruction": "RNN Regularization: Which Component to Regularize?", "input": "", "output": "

Regularizers that'll work best will depend on your specific architecture, data, and problem; as usual, there isn't a single cut to rule all, but there are do's and (especially) don't's, as well as systematic means of determining what'll work best - via careful introspection and evaluation.

\n\n
\n\n

How does RNN regularization work?

\n\n

Perhaps the best approach to understanding it is information-based. First, see \"How does 'learning' work?\" and \"RNN: Depth vs. Width\". To understand RNN regularization, one must understand how RNN handles information and learns, which the referred sections describe (though not exhaustively). Now to answer the question:

\n\n

RNN regularization's goal is any regularization's goal: maximizing information utility and traversal of the test loss function. The specific methods, however, tend to differ substantially for RNNs per their recurrent nature - and some work better than others; see below.

\n\n
\n\n

RNN regularization methods:

\n\n

WEIGHT DECAY

\n\n
    \n
  1. General: shrinks the norm ('average') of the weight matrix

  2. Recurrent weights: default activation='sigmoid'

  3. Kernel weights: for many-to-one (return_sequences=False), they work similar to weight decay on a typical layer (e.g. Dense). For many-to-many (=True), however, kernel weights operate on every timestep, so pros & cons similar to above will apply.
\n\n</div>

Dropout:

\n\n\n\n

Batch Normalization:

\n\n\n\n

Weight Constraints: set hard upper-bound on weights l2-norm; possible alternative to weight decay.

\n\n

Activity Constraints: don't bother; for most purposes, if you have to manually constrain your outputs, the layer itself is probably learning poorly, and the solution is elsewhere.

\n\n
\n\n

What should I do? Lots of info - so here's some concrete advice:

\n\n
    \n
  1. Weight decay: try 1e-3, 1e-4, see which works better. Do not expect the same value of decay to work for kernel and recurrent_kernel, especially depending on architecture. Check weight shapes - if one is much smaller than the other, apply the smaller decay to the former

  2. Dropout: try 0.1. If you see improvement, try 0.2 - else, scrap it

  3. Recurrent Dropout: start with 0.2. Improvement --> 0.4. Improvement --> 0.5, else 0.3.

  4. Batch Normalization: try. Improvement --> keep it - else, scrap it.

  5. Recurrent Batchnorm: same as 4.

  6. Weight constraints: advisable w/ higher learning rates to prevent exploding gradients - else use higher weight decay

  7. Activity constraints: probably not (see above)

  8. Residual RNNs: introduce significant changes, along with a regularizing effect. See application in IndRNNs

  9. Biases: weight decay and constraints become important upon attaining good backpropagation properties; without them on bias weights but with them on kernel (K) & recurrent kernel (RK) weights, bias weights may grow much faster than the latter two, and dominate the transformation - also leading to exploding gradients. I recommend weight decay / constraint less than or equal to that used on K & RK. Also, with BatchNormalization, you cannot set use_bias=False as an \"equivalent\"; BN applies to outputs, not hidden-to-hidden transforms.

  10. Zoneout: don't know, never tried, might work - see paper.

  11. Layer Normalization: some report it working better than BN for RNNs - but my application found it otherwise; paper

  12. Data shuffling: is a strong regularizer. Also shuffle batch samples (samples in batch). See relevant info on stateful RNNs

  13. Optimizer: can be an inherent regularizer. Don't have a full explanation, but in my application, Nadam (& NadamW) has stomped every other optimizer - worth trying.
\n\n</div>
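As a concrete starting point, here is a sketch (assuming TensorFlow's Keras; the values are just the starting suggestions above, not tuned for any particular task) of wiring several of these regularizers into a single LSTM layer:

```python
from tensorflow.keras.layers import LSTM
from tensorflow.keras.regularizers import l2
from tensorflow.keras.constraints import MaxNorm

layer = LSTM(
    64,                                  # hypothetical width
    kernel_regularizer=l2(1e-4),         # weight decay on kernel
    recurrent_regularizer=l2(1e-4),      # weight decay on recurrent kernel
    bias_regularizer=l2(1e-4),           # <= decay used on K & RK
    kernel_constraint=MaxNorm(3),        # hard upper bound on weight norm
    dropout=0.1,                         # input dropout
    recurrent_dropout=0.2,               # recurrent dropout
    return_sequences=True,
)
```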

Introspection: bottom section on 'learning' isn't worth much without this; don't just look at validation performance and call it a day - inspect the effect that adjusting a regularizer has on weights and activations. Evaluate using info toward bottom & relevant theory.

\n\n

BONUS: weight decay can be powerful - even more powerful when done right; turns out, adaptive optimizers like Adam can harm its effectiveness, as described in this paper. Solution: use AdamW. My Keras/TensorFlow implementation here.

\n\n
\n\n

This is too much! Agreed - welcome to Deep Learning. Two tips here:

\n\n
    \n
  1. Bayesian Optimization; will save you time especially on prohibitively expensive training.

  2. Conv1D(strides > 1), for many timesteps (>1000); slashes dimensionality, shouldn't harm performance (may in fact improve it).
\n\n</div>
\n\n

Introspection Code:

\n\n

Gradients: see this answer

\n\n

Weights: see this answer

\n\n

Weight norm tracking: see this Q & A

\n\n

Activations: see this answer

\n\n

Weights: see_rnn.rnn_histogram or see_rnn.rnn_heatmap (examples in README)

\n\n
\n\n

How does 'learning' work?

\n\n

The 'ultimate truth' of machine learning that is seldom discussed or emphasized is, we don't have access to the function we're trying to optimize - the test loss function. All of our work is with what are approximations of the true loss surface - both the train set and the validation set. This has some critical implications:

\n\n
    \n
  1. Train set global optimum can lie very far from test set global optimum

  2. Local optima are unimportant, and irrelevant.
\n\n</div>

Further, loss functions are way too complex to analyze directly; a better approach is to localize analysis to individual layers, their weight matrices, and roles relative to the entire NN. Two key considerations are:

\n\n
    \n
  1. Feature extraction capability. Ex: the driving mechanism of deep classifiers is, given input data, to increase class separability with each layer's transformation. Higher quality features will filter out irrelevant information, and deliver what's essential for the output layer (e.g. softmax) to learn a separating hyperplane.

  2. Information utility. Dead neurons, and extreme activations are major culprits of poor information utility; no single neuron should dominate information transfer, and too many neurons shouldn't lie purposeless. Stable activations and weight distributions enable gradient propagation and continued learning.
\n\n</div>
\n\n

How does regularization work? read above first

\n\n

In a nutshell, via maximizing NN's information utility, and improving estimates of the test loss function. Each regularization method is unique, and no two exactly alike - see \"RNN regularizers\".

\n\n
\n\n

RNN: Depth vs. Width: not as simple as \"one is more nonlinear, other works in higher dimensions\".

\n\n\n\n
\n\n

Update:

\n\n

Here is an example of a near-ideal RNN gradient propagation for 170+ timesteps:

\n\n

\n\n

This is rare, and was achieved via careful regularization, normalization, and hyperparameter tuning. Usually we see a large gradient for the last few timesteps, which drops off sharply toward left - as here. Also, since the model is stateful and fits 7 equivalent windows, gradient effectively spans 1200 timesteps.

\n\n

Update 2: see 9 w/ new info & correction

\n\n

Update 3: add weight norms & weights introspection code

\n", "system": "" }, { "instruction": "Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (339732, 29)", "input": "", "output": "

Setting timesteps = 1 (since I want one timestep for each instance) and reshaping X_train and X_test as:

\n\n
import numpy as np\nX_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))\nX_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))\n
\n\n

This worked!

\n", "system": "" }, { "instruction": "How to change Keras backend (where's the json file)?", "input": "", "output": "

After looking at keras sources (this place):

\n\n

Start up your Python binary and do the following:

\n\n
import os\nprint(os.path.expanduser('~'))\n# >>> C:\\\\Users\\\\Sascha'  # will look different for different OS\n
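For reference, the config file lives under that home directory, at ~/.keras/keras.json; a sketch that prints the configured backend if the file exists (Keras creates it on first import if it is missing):

```python
import json
import os

# build the path to the Keras config file under the user's home directory
cfg_path = os.path.join(os.path.expanduser('~'), '.keras', 'keras.json')
if os.path.exists(cfg_path):
    with open(cfg_path) as f:
        print(json.load(f).get('backend'))  # e.g. 'tensorflow'
else:
    print('no keras.json yet')
```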
\n\n\n", "system": "" }, { "instruction": "Compute class weight function issue in 'sklearn' library when used in 'Keras' classification (Python 3.8, only in VS code)", "input": "", "output": "

After spending a lot of time, this is how I fixed it. I still don't know why but when the code is modified as follows, it works fine. I got the idea after seeing this solution for a similar but slightly different issue.

\n
class_weights = compute_class_weight(\n                                        class_weight = "balanced",\n                                        classes = np.unique(train_classes),\n                                        y = train_classes                                                    \n                                    )\nclass_weights = dict(zip(np.unique(train_classes), class_weights))\nclass_weights\n
\n", "system": "" }, { "instruction": "record the computation time for each epoch in Keras during model.fit()", "input": "", "output": "

Try the following callback:

\n\n
class TimeHistory(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self.times = []\n\n    def on_epoch_begin(self, batch, logs={}):\n        self.epoch_time_start = time.time()\n\n    def on_epoch_end(self, batch, logs={}):\n        self.times.append(time.time() - self.epoch_time_start)\n
\n\n

Then:

\n\n
time_callback = TimeHistory()\nmodel.fit(..., callbacks=[..., time_callback],...)\ntimes = time_callback.times\n
\n\n

In this case times should store the epoch computation times.

\n", "system": "" }, { "instruction": "Error "Keras requires TensorFlow 2.2 or higher"", "input": "", "output": "

I had the same issue, caused by the latest Keras release. Here is what I remember doing:

\n\n

1-Upgrade tensorflow:

\n\n
  pip install --user --upgrade tensorflow-gpu\n
\n\n

(there might be some missing packages, just pip install them)

\n\n

2-Upgrade Tensorboard

\n\n
pip install --user --upgrade tensorboard\n
\n\n

(there might be some missing packages, just pip install them)

\n\n

3-Downgrade Keras

\n\n
pip install keras==2.3.1\n
\n\n

(latest version working for me)

\n\n

4-Downgrade tensorflow-gpu

\n\n
pip install --user --upgrade tensorflow-gpu==1.14.0\n
\n\n

(latest version working for me)

\n\n

Let me know if it worked!

\n\n
\n\n

Anaconda 2020.02

\n\n

Python 3.7

\n\n

CPU i3 8100

\n\n

OS Windows 10 64

\n\n

Nvidia GPU GTX1050TI

\n\n

CUDA 10.1

\n", "system": "" }, { "instruction": "Early stopping with Keras and sklearn GridSearchCV cross-validation", "input": "", "output": "

[Answer after the question was edited & clarified:]

\n\n

Before rushing into implementation issues, it is always a good practice to take some time to think about the methodology and the task itself; arguably, intermingling early stopping with the cross validation procedure is not a good idea.

\n\n

Let's make up an example to highlight the argument.

\n\n

Suppose that you indeed use early stopping with 100 epochs, and 5-fold cross validation (CV) for hyperparameter selection. Suppose also that you end up with a hyperparameter set X giving best performance, say 89.3% binary classification accuracy.

\n\n

Now suppose that your second-best hyperparameter set, Y, gives 89.2% accuracy. Examining closely the individual CV folds, you see that, for your best case X, 3 out of the 5 CV folds exhausted the max 100 epochs, while in the other 2 early stopping kicked in, say in 95 and 93 epochs respectively.

\n\n

Now imagine that, examining your second-best set Y, you see that again 3 out of the 5 CV folds exhausted the 100 epochs, while the other 2 both stopped early enough at ~ 80 epochs.

\n\n

What would be your conclusion from such an experiment?

\n\n

Arguably, you would have found yourself in an inconclusive situation; further experiments might reveal which is actually the best hyperparameter set, provided of course that you would have thought to look into these details of the results in the first place. And needless to say, if all this was automated through a callback, you might have missed your best model despite the fact that you would have actually tried it.

\n\n
\n\n

The whole CV idea is implicitly based on the \"all other being equal\" argument (which of course is never true in practice, only approximated in the best possible way). If you feel that the number of epochs should be a hyperparameter, just include it explicitly in your CV as such, rather than inserting it through the back door of early stopping, thus possibly compromising the whole process (not to mention that early stopping has itself a hyperparameter, patience).

\n\n
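If you do decide to treat the number of epochs as a hyperparameter, a minimal sketch (hypothetical names and values) of making it an explicit member of the search space could look like this:

```python
from itertools import product

# Hypothetical search space; 'epochs' is an explicit hyperparameter,
# searched by CV like any other, instead of being set by early stopping.
grid = {
    'learning_rate': [1e-3, 1e-4],
    'units': [32, 64],
    'epochs': [50, 100],
}

# Cartesian product of all values -> one dict per candidate configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
# Each config would then be evaluated with the same k-fold CV procedure.
```

Every candidate is then compared under identical conditions, which is exactly the "all other things being equal" premise the answer argues for.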

Not intermingling these two techniques doesn't mean of course that you cannot use them sequentially: once you have obtained your best hyperparameters through CV, you can always employ early stopping when fitting the model in your whole training set (provided of course that you do have a separate validation set).

\n\n
\n\n

The field of deep neural nets is still (very) young, and it is true that it has yet to establish its \"best practice\" guidelines; add the fact that, thanks to an amazing community, there are all sorts of tools available in open source implementations, and you can easily find yourself in the (admittedly tempting) position of mixing things up just because they happen to be available. I am not necessarily saying that this is what you are attempting to do here - I am just urging for more caution when combining ideas that may not have been designed to work together...

\n", "system": "" }, { "instruction": "Keras flowFromDirectory get file names as they are being generated", "input": "", "output": "

Yes, it is possible, at least with version 2.0.4 (I don't know about earlier versions).

\n\n

The instance of ImageDataGenerator().flow_from_directory(...) has an attribute filenames, which is a list of all the files in the order the generator yields them, and also an attribute batch_index. So you can do it like this:

\n\n
datagen = ImageDataGenerator()\ngen = datagen.flow_from_directory(...)\n
\n\n

And on every iteration of the generator you can get the corresponding filenames like this:

\n\n
for i in gen:\n    idx = (gen.batch_index - 1) * gen.batch_size\n    print(gen.filenames[idx : idx + gen.batch_size])\n
\n\n

This will give you the filenames of the images in the current batch. Note that this assumes shuffle=False in flow_from_directory, so that the generator yields files in the order of the filenames list, and that all batches are full.
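To make the index arithmetic concrete, here is a pure-Python sketch (no Keras involved, toy filenames) of how batch_index maps back into the filenames list, assuming every batch is full:

```python
# Toy stand-ins for gen.filenames, gen.batch_size and gen.batch_index.
filenames = ["img_%02d.png" % i for i in range(9)]
batch_size = 3
batch_index = 2  # value right after the generator yields its second batch

# Same arithmetic as in the answer above: back up one batch and slice.
idx = (batch_index - 1) * batch_size
current_batch_files = filenames[idx: idx + batch_size]
```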

\n", "system": "" }, { "instruction": "How to use advanced activation layers in Keras?", "input": "", "output": "

The correct way to use advanced activations like PReLU is to add them as layers with the add() method, not to wrap them in an Activation class. Example:

\n\n
model = Sequential()\nact = keras.layers.advanced_activations.PReLU(init='zero', weights=None)\nmodel.add(Dense(64, input_dim=14, init='uniform'))\nmodel.add(act)\n
\n", "system": "" }, { "instruction": "How does binary cross entropy loss work on autoencoders?", "input": "", "output": "

In the context of autoencoders the input and output of the model are the same. So, if the input values are in the range [0,1] then it is acceptable to use sigmoid as the activation function of the last layer. Otherwise, you need to use an appropriate activation function for the last layer (e.g. linear, which is the default one).

\n\n

As for the loss function, it again comes back to the values of the input data. If the input data consists only of zeros and ones (and not the values in between), then binary_crossentropy is acceptable as the loss function. Otherwise, you need to use other loss functions such as 'mse' (i.e. mean squared error) or 'mae' (i.e. mean absolute error). Note that in the case of input values in the range [0,1] you can still use binary_crossentropy, as it is commonly done (e.g. the Keras autoencoder tutorial and this paper). However, don't expect the loss value to become zero, since binary_crossentropy does not return zero when prediction and label are not both either zero or one (regardless of whether or not they are equal). Here is a video from Hugo Larochelle where he explains the loss functions used in autoencoders (the part about using binary_crossentropy with inputs in the range [0,1] starts at 5:30).

\n\n

Concretely, in your example, you are using the MNIST dataset. So by default the values of MNIST are integers in the range [0, 255]. Usually you need to normalize them first:

\n\n
trainX = trainX.astype('float32')\ntrainX /= 255.\n
\n\n

Now the values would be in range [0,1]. So sigmoid can be used as the activation function and either of binary_crossentropy or mse as the loss function.

\n\n
\n\n

Why can binary_crossentropy be used even when the true label values (i.e. ground-truth) are in the range [0,1]?

\n\n

Note that we are trying to minimize the loss function in training. So if the loss function we have used reaches its minimum value (which may not necessarily be equal to zero) when the prediction is equal to the true label, then it is an acceptable choice. Let's verify that this is the case for binary cross-entropy, which is defined as follows:

\n\n
bce_loss = -y*log(p) - (1-y)*log(1-p)\n
\n\n

where y is the true label and p is the predicted value. Let's consider y as fixed and see what value of p minimizes this function: we need to take the derivative with respect to p (I have assumed the log is the natural logarithm function for simplicity of calculations):

\n\n
bce_loss_derivative = -y*(1/p) - (1-y)*(-1/(1-p)) = 0 =>\n                      -y/p + (1-y)/(1-p) = 0 =>\n                      -y*(1-p) + (1-y)*p = 0 =>\n                      -y + y*p + p - y*p = 0 =>\n                       p - y = 0 => y = p\n
\n\n

As you can see, binary cross-entropy has its minimum value when y=p, i.e. when the true label is equal to the predicted value, and this is exactly what we are looking for.
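A quick numerical check of this result (plain Python, natural log), showing both that the minimum is at p = y and that the minimum value is not zero:

```python
import math

def bce(y, p):
    # Binary cross-entropy for a single prediction p against label y.
    return -y * math.log(p) - (1 - y) * math.log(1 - p)

y = 0.3
candidates = [0.1, 0.3, 0.5, 0.9]
losses = {p: bce(y, p) for p in candidates}
best_p = min(losses, key=losses.get)   # minimised exactly at p == y
min_loss = losses[best_p]              # strictly positive, not zero
```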

\n", "system": "" }, { "instruction": "OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable", "input": "", "output": "

I had this problem running numpy on an ubuntu server. I got several variants of the same error, depending on whether I tried to import numpy in a shell or ran my django app.

\n\n\n\n

I'm posting this answer since it drove me crazy. What helped me was to add:

\n\n
import os\nos.environ['OPENBLAS_NUM_THREADS'] = '1'\n
\n\n

before

\n\n
import numpy as np\n
\n\n

I guess the server had some limit on the number of threads it allows. Hope it helps someone!
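If you'd rather not touch the Python code, the same cap can be set in the shell before the process starts (variable name from the OpenBLAS docs), so it also applies to any worker processes launched from that shell:

```shell
# Cap OpenBLAS at a single thread for everything launched from this shell.
export OPENBLAS_NUM_THREADS=1
printenv OPENBLAS_NUM_THREADS
```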

\n", "system": "" }, { "instruction": "How to get the output shape of a layer in Keras?", "input": "", "output": "

You can get the output shape of a layer by layer.output_shape.

\n\n
for layer in model.layers:\n    print(layer.output_shape)\n
\n\n

Gives you:

\n\n
(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 1)\n
\n\n

Alternatively you can pretty print the model using model.summary:

\n\n
model.summary()\n
\n\n

Gives you the details about the number of parameters and output shapes of each layer and an overall model structure in a pretty format:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nconv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200     \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081      \n=================================================================\nTotal params: 407,001\nTrainable params: 406,681\nNon-trainable params: 320\n_________________________________________________________________\n
\n\n

If you want to access information about a specific layer only, you can use the name argument when constructing that layer and then call it like this:

\n\n
...\nmodel.add(ConvLSTM2D(..., name='conv3d_0'))\n...\n\nmodel.get_layer('conv3d_0')\n
\n\n
\n\n

EDIT: For reference's sake, the printed shape will always be the same as layer.output_shape, so please don't actually use Lambda or custom layers for this in practice. But you can use a Lambda layer to echo the shape of a tensor passing through.

\n\n
...\ndef print_tensor_shape(x):\n    print(x.shape)\n    return x\nmodel.add(Lambda(print_tensor_shape))\n...\n
\n\n

Or write a custom layer and print the shape of the tensor on call().

\n\n
class echo_layer(Layer):\n...\n    def call(self, x):\n        print(x.shape)\n        return x\n...\n\nmodel.add(echo_layer())\n
\n", "system": "" }, { "instruction": "ImportError: cannot import name '_obtain_input_shape' from keras", "input": "", "output": "

You don't have to downgrade from Keras 2.2.2.

\n\n

In Keras 2.2.2 there is no _obtain_input_shape method in the keras.applications.imagenet_utils module. You can find it in the keras-applications package, under the module name keras_applications (with an underscore).

\n\n

So you don't have to downgrade your Keras to 2.2.0; just change:

\n\n
from keras.applications.imagenet_utils import _obtain_input_shape\n
\n\n

to

\n\n
from keras_applications.imagenet_utils import _obtain_input_shape\n
\n", "system": "" }, { "instruction": "Why does my training loss have regular spikes?", "input": "", "output": "

I've figured it out myself:

\n\n

TL;DR:

\n\n

Make sure your loss magnitude is independent of your mini-batch size.

\n\n

The long explanation:

\n\n

In my case the issue was Keras-specific after all.

\n\n

Maybe the solution to this problem will be useful for someone at some point.

\n\n

It turns out that Keras divides the loss by the mini-batch size. The important thing to understand here is that it's not the loss function itself that averages over the batch size, but rather the averaging happens somewhere else in the training process.

\n\n

Why does this matter?

\n\n

The model I am training, SSD, uses a rather complicated multi-task loss function that does its own averaging (not by the batch size, but by the number of ground truth bounding boxes in the batch). Now if the loss function already divides the loss by some number that is correlated with the batch size, and afterwards Keras divides by the batch size a second time, then all of a sudden the magnitude of the loss value starts to depend on the batch size (to be precise, it becomes inversely proportional to the batch size).

\n\n

Now usually the number of samples in your dataset is not an integer multiple of the batch size you choose, so the very last mini-batch of an epoch (here I implicitly define an epoch as one full pass over the dataset) will end up containing fewer samples than the batch size. This is what messes up the magnitude of the loss if it depends on the batch size, and in turn messes up the magnitude of the gradient. Since I'm using an optimizer with momentum, that messed-up gradient continues to influence the gradients of a few subsequent training steps, too.

\n\n
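A toy illustration of the effect (plain Python, constant per-sample loss of 1.0, not the actual SSD loss): if the loss function has already averaged internally and the framework then divides by the batch size again, the reported value becomes inversely proportional to the batch size, so a smaller final batch produces a spike:

```python
def double_averaged_loss(per_sample_losses):
    n = len(per_sample_losses)
    internal_mean = sum(per_sample_losses) / n  # the loss fn's own averaging
    return internal_mean / n                    # framework divides by n again

full_batch = double_averaged_loss([1.0] * 128)  # 1/128
last_batch = double_averaged_loss([1.0] * 40)   # 1/40 -> noticeably larger
```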

Once I adjusted the loss function by multiplying the loss by the batch size (thus reverting Keras' subsequent division by the batch size), everything was fine: No more spikes in the loss.

\n", "system": "" }, { "instruction": "Extremely slow model load with keras", "input": "", "output": "

I solved the problem by clearing the Keras session before each load:

\n\n
from keras import backend as K\nfor i in range(...):\n  K.clear_session()\n  model = load_model(...)\n
\n", "system": "" }, { "instruction": "How the number of parameters associated with BatchNormalization layer is 2048?", "input": "", "output": "

These 2048 parameters are in fact [gamma weights, beta weights, moving_mean(non-trainable), moving_variance(non-trainable)], each having 512 elements (the size of the input layer).
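Spelled out as arithmetic:

```python
channels = 512                 # size of the input to the BatchNormalization layer
stats_per_channel = 4          # gamma, beta, moving_mean, moving_variance

total_params = channels * stats_per_channel   # what model.summary() reports
trainable_params = channels * 2               # gamma and beta only
non_trainable_params = channels * 2           # the two moving statistics
```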

\n", "system": "" }, { "instruction": "Implementing skip connections in keras", "input": "", "output": "

The easy answer is: don't use a sequential model for this; use the functional API instead. Implementing skip connections (also called residual connections) is then very easy, as shown in this example from the functional API guide:

\n\n
from keras.layers import merge, Convolution2D, Input\n\n# input tensor for a 3-channel 256x256 image\nx = Input(shape=(3, 256, 256))\n# 3x3 conv with 3 output channels (same as input channels)\ny = Convolution2D(3, 3, 3, border_mode='same')(x)\n# this returns x + y.\nz = merge([x, y], mode='sum')\n
\n", "system": "" }, { "instruction": "Keras-tuner Hyperband runing only 2 epochs", "input": "", "output": "

You can change the factor parameter to change that.\nBy default it is set to 3, but you can increase this number to get more than 2 epochs per trial.

\n

see : docs

\n
\n

The Hyperband tuning algorithm uses adaptive resource allocation and early-stopping to quickly converge on a high-performing model. This is done using a sports championship style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log_factor(max_epochs) and rounding it up to the nearest integer.
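A toy sketch of the bracket idea (not keras-tuner's exact schedule): each round keeps roughly the top 1/factor of the surviving models, so factor controls how many elimination rounds there are, and hence how the epoch budget is split among survivors:

```python
factor = 3
survivors = list(range(27))   # 27 hypothetical models, ranked best-first
rounds = 0
while len(survivors) > 1:
    # keep only the top 1/factor of models each round
    survivors = survivors[:max(1, len(survivors) // factor)]
    rounds += 1
# 27 models with factor=3 -> three elimination rounds, one winner
```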

\n
\n", "system": "" }, { "instruction": "Why Bother With Recurrent Neural Networks For Structured Data?", "input": "", "output": "

In practice even in NLP you see that RNNs and CNNs are often competitive. Here's a 2017 review paper that shows this in more detail. In theory it might be the case that RNNs can handle the full complexity and sequential nature of language better but in practice the bigger obstacle is usually properly training the network and RNNs are finicky.

\n\n

Another problem that might have a chance of working would be to look at something like the balanced-parenthesis problem (either with just parentheses in the strings, or parentheses along with other distractor characters). This requires processing the inputs sequentially and tracking some state, and might be easier to learn with an LSTM than with a FFN.

\n\n

Update:\nSome data that looks sequential might not actually have to be treated sequentially. For example, even if you provide a sequence of numbers to add, since addition is commutative, a FFN will do just as well as an RNN. This could also be true of many health problems where the dominating information is not of a sequential nature. Suppose a patient's smoking habits are measured every year. From a behavioral standpoint the trajectory is important, but if you're predicting whether the patient will develop lung cancer, the prediction will be dominated by just the number of years the patient smoked (maybe restricted to the last 10 years for the FFN).

\n\n

So you want to make the toy problem more complex and to require taking into account the ordering of the data. Maybe some kind of simulated time series, where you want to predict whether there was a spike in the data, but you don't care about absolute values just about the relative nature of the spike.

\n\n

Update2

\n\n

I modified your code to show a case where RNNs perform better. The trick was to use more complex conditional logic that is more naturally modeled in LSTMs than in FFNs. The code is below. For 8 columns we see that the FFN trains in 1 minute and reaches a validation loss of 6.3. The LSTM takes 3x longer to train, but its final validation loss is 6x lower, at 1.06.

\n\n

As we increase the number of columns, the LSTM has a larger and larger advantage, especially if we add more complicated conditions. For 16 columns the FFN's validation loss is 19 (and you can more clearly see the training curve, as the model isn't able to instantly fit the data). In comparison, the LSTM takes 11 times longer to train but reaches a validation loss of 0.31, 30 times smaller than the FFN's! You can play around with even larger matrices to see how far this trend extends.

\n\n
from keras import models\nfrom keras import layers\n\nfrom keras.layers import Dense, LSTM\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport time\n\nmatplotlib.use('Agg')\n\nnp.random.seed(20180908)\n\nrows = 20500\ncols = 10\n\n# Randomly generate Z\nZ = 100*np.random.uniform(0.05, 1.0, size = (rows, cols))\n\nlarger = np.max(Z[:, :cols/2], axis=1).reshape((rows, 1))\nlarger2 = np.max(Z[:, cols/2:], axis=1).reshape((rows, 1))\nsmaller = np.min((larger, larger2), axis=0)\n# Z is now the max of the first half of the array.\nZ = np.append(Z, larger, axis=1)\n# Z is now the min of the max of each half of the array.\n# Z = np.append(Z, smaller, axis=1)\n\n# Combine and shuffle.\n\n#Z = np.concatenate((Z_sum, Z_avg), axis = 0)\n\nnp.random.shuffle(Z)\n\n## Training and validation data.\n\nsplit = 10000\n\nX_train = Z[:split, :-1]\nX_valid = Z[split:, :-1]\nY_train = Z[:split, -1:].reshape(split, 1)\nY_valid = Z[split:, -1:].reshape(rows - split, 1)\n\nprint(X_train.shape)\nprint(Y_train.shape)\nprint(X_valid.shape)\nprint(Y_valid.shape)\n\nprint(\"Now setting up the FNN\")\n\n## FNN model.\n\ntick = time.time()\n\n# Define model.\n\nnetwork_fnn = models.Sequential()\nnetwork_fnn.add(layers.Dense(32, activation = 'relu', input_shape = (X_train.shape[1],)))\nnetwork_fnn.add(Dense(1, activation = None))\n\n# Compile model.\n\nnetwork_fnn.compile(optimizer = 'adam', loss = 'mean_squared_error')\n\n# Fit model.\n\nhistory_fnn = network_fnn.fit(X_train, Y_train, epochs = 500, batch_size = 128, verbose = False,\n    validation_data = (X_valid, Y_valid))\n\ntock = time.time()\n\nprint()\nprint(str('%.2f' % ((tock - tick) / 60)) + ' minutes.')\n\nprint(\"Now evaluating the FNN\")\n\nloss_fnn = history_fnn.history['loss']\nval_loss_fnn = history_fnn.history['val_loss']\nepochs_fnn = range(1, len(loss_fnn) + 1)\nprint(\"train loss: \", loss_fnn[-1])\nprint(\"validation loss: \", val_loss_fnn[-1])\n\nplt.plot(epochs_fnn, loss_fnn, 'black', label = 
'Training Loss')\nplt.plot(epochs_fnn, val_loss_fnn, 'red', label = 'Validation Loss')\nplt.title('FNN: Training and Validation Loss')\nplt.legend()\nplt.show()\n\nplt.scatter(Y_train, network_fnn.predict(X_train), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('training points')\nplt.show()\n\nplt.scatter(Y_valid, network_fnn.predict(X_valid), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('valid points')\nplt.show()\n\nprint(\"LSTM\")\n\n## LSTM model.\n\nX_lstm_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)\nX_lstm_valid = X_valid.reshape(X_valid.shape[0], X_valid.shape[1], 1)\n\ntick = time.time()\n\n# Define model.\n\nnetwork_lstm = models.Sequential()\nnetwork_lstm.add(layers.LSTM(32, activation = 'relu', input_shape = (X_lstm_train.shape[1], 1)))\nnetwork_lstm.add(layers.Dense(1, activation = None))\n\n# Compile model.\n\nnetwork_lstm.compile(optimizer = 'adam', loss = 'mean_squared_error')\n\n# Fit model.\n\nhistory_lstm = network_lstm.fit(X_lstm_train, Y_train, epochs = 500, batch_size = 128, verbose = False,\n    validation_data = (X_lstm_valid, Y_valid))\n\ntock = time.time()\n\nprint()\nprint(str('%.2f' % ((tock - tick) / 60)) + ' minutes.')\n\nprint(\"now eval\")\n\nloss_lstm = history_lstm.history['loss']\nval_loss_lstm = history_lstm.history['val_loss']\nepochs_lstm = range(1, len(loss_lstm) + 1)\nprint(\"train loss: \", loss_lstm[-1])\nprint(\"validation loss: \", val_loss_lstm[-1])\n\nplt.plot(epochs_lstm, loss_lstm, 'black', label = 'Training Loss')\nplt.plot(epochs_lstm, val_loss_lstm, 'red', label = 'Validation Loss')\nplt.title('LSTM: Training and Validation Loss')\nplt.legend()\nplt.show()\n\nplt.scatter(Y_train, network_lstm.predict(X_lstm_train), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('training')\nplt.show()\n\nplt.scatter(Y_valid, network_lstm.predict(X_lstm_valid), alpha = 
0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title(\"validation\")\nplt.show()\n
\n", "system": "" }, { "instruction": "How To Determine the 'filter' Parameter in the Keras Conv2D Function", "input": "", "output": "

Actually, there is no good answer to your question. Most of the architectures are usually carefully designed and finetuned during many experiments. I can share with you some of the rules of thumb one should apply when designing one's own architecture:

\n\n
    \n
1. Avoid a dimension collapse in the first layer. Let's assume that your first-layer filters have an (n, n) spatial shape for an RGB image. In this case, it is good practice to set the number of filters to be greater than n * n * 3, as this is the dimensionality of the input to a single filter. If you set a smaller number, you could suffer from the fact that many useful pieces of information about the image are lost due to an initialization that dropped informative dimensions. Of course, this is not a general rule; e.g. for texture recognition, where image complexity is lower, a small number of filters might actually help.

2. Think more about volume than the number of filters - when setting the number of filters, it's important to think about the change of volume instead of the change in the number of filters between consecutive layers. E.g. in VGG, even though the number of filters doubles after a pooling layer, the actual feature-map volume is decreased by a factor of 2, because pooling shrinks the feature map by a factor of 4. Usually, decreasing the volume by a factor of more than 3 should be considered bad practice. Most modern architectures use a volume-drop factor in the range between 1 and 2. Still, this is not a general rule; e.g. in the case of a narrow hierarchy, a greater volume drop might actually help.

3. Avoid bottlenecking. As one may read in this milestone paper, bottlenecking might seriously harm your training process. It occurs when the volume drop is too severe. Of course, it can still be done, but then you should use intelligent downsampling, used e.g. in Inception v>2.

4. Check 1x1 convolutions - it's believed that filter activations are highly correlated. One may take advantage of this by using 1x1 convolutions, namely convolutions with a filter size of 1. They make it possible, e.g., to drop volume with them instead of pooling or intelligent downsampling (see example here). You could e.g. build twice as many filters and then cut 25% of them by using 1x1 convs as a consecutive layer.
\n\n
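A quick back-of-the-envelope check of the volume rule, using VGG-style numbers (sizes chosen for illustration):

```python
h = w = 56          # spatial size of a feature map before pooling
filters_in = 128
volume_in = h * w * filters_in

# 2x2 max pooling quarters the spatial area; VGG then doubles the filter
# count, so the feature-map volume drops only by a factor of 2 overall.
volume_out = (h // 2) * (w // 2) * (filters_in * 2)
drop_factor = volume_in / volume_out
```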

As you may see, there is no easy way to choose the number of filters. In addition to the hints above, I'd like to share one of my favorite sanity checks on the number of filters. It takes 2 easy steps:

\n\n
    \n
1. Try to overfit on 500 random images with regularization.

2. Try to overfit on the whole dataset without any regularization.
\n\n

Usually, if the number of filters is too low, these two tests will show you that. If, during your training process with regularization, your network severely overfits, this is a clear indicator that your network has way too many filters.

\n\n

Cheers.

\n", "system": "" }, { "instruction": "What's the difference between LSTM() and LSTMCell()?", "input": "", "output": "\n\n

A recurrent layer contains a cell object. The cell contains the core code for the calculations of each step, while the recurrent layer commands the cell and performs the actual recurrent calculations.

\n\n

Usually, people use LSTM layers in their code.
\nOr they use RNN layers containing LSTMCell.

\n\n

Both things are almost the same. An LSTM layer is an RNN layer using an LSTMCell, as you can check in the source code.

\n\n

About the number of cells:

\n\n

Although its name suggests that LSTMCell is a single cell, it is actually an object that manages all of the units/cells. Indeed, in the same code mentioned above, you can see that the units argument is used when creating an instance of LSTMCell.

\n", "system": "" }, { "instruction": "What is the difference between the predict and predict_on_batch methods of a Keras model?", "input": "", "output": "

The difference shows when the data you pass as x is larger than one batch.

\n\n

predict will go through all the data, batch by batch, predicting labels.\nIt thus internally splits the data into batches and feeds in one batch at a time.

\n\n

predict_on_batch, on the other hand, assumes that the data you pass in is exactly one batch and thus feeds it to the network. It won't try to split it (which, depending on your setup, might prove problematic for your GPU memory if the array is very big)
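Schematically (pure Python, hypothetical names), predict is roughly a loop over predict_on_batch:

```python
def predict(data, batch_size, predict_on_batch):
    """Rough sketch of the batching that model.predict does internally."""
    outputs = []
    for start in range(0, len(data), batch_size):
        # feed exactly one batch at a time, as predict_on_batch expects
        outputs.extend(predict_on_batch(data[start:start + batch_size]))
    return outputs

# Toy "model" that doubles its inputs, one batch at a time.
double_batch = lambda batch: [x * 2 for x in batch]
result = predict(list(range(10)), batch_size=4, predict_on_batch=double_batch)
```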

\n", "system": "" }, { "instruction": "AttributeError: 'Model' object has no attribute 'predict_classes'", "input": "", "output": "

The predict_classes method is only available for the Sequential class (which is the class of your first model) but not for the Model class (the class of your second model).

\n\n

With the Model class, you can use the predict method which will give you a vector of probabilities and then get the argmax of this vector (with np.argmax(y_pred1,axis=1)).
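For example, with a hypothetical 3-class output (assuming numpy is available):

```python
import numpy as np

# y_pred1 as returned by model.predict: one probability vector per sample.
y_pred1 = np.array([[0.1, 0.7, 0.2],
                    [0.8, 0.1, 0.1]])

# Equivalent of the missing predict_classes: argmax over the class axis.
classes = np.argmax(y_pred1, axis=1)
```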

\n", "system": "" }, { "instruction": "Difference between Dense and Activation layer in Keras", "input": "", "output": "

Using Dense(activation='softmax') is computationally equivalent to first adding Dense and then adding Activation('softmax'). However, there is one advantage to the second approach: you can retrieve the outputs of the last layer (before activation) from such a model. In the first approach, this is impossible.
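For instance (plain Python, hypothetical logits), separating the two layers is what lets you read the pre-activation values:

```python
import math

logits = [2.0, 1.0, 0.1]                 # output of Dense (pre-activation)

# What the separate Activation('softmax') layer then computes:
exps = [math.exp(v) for v in logits]
probs = [e / sum(exps) for e in exps]
```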

\n", "system": "" }, { "instruction": "Where to find a documentation about default weight initializer in Keras?", "input": "", "output": "

Each layer has its own default value for initializing the weights. For most of the layers, such as Dense, convolution and RNN layers, the default kernel initializer is 'glorot_uniform' and the default bias initializer is 'zeros' (you can find this by going to the related section for each layer in the documentation; for example here is the Dense layer doc). You can find the definition of the glorot_uniform initializer here in the Keras documentation.
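For reference, glorot_uniform samples weights from a uniform distribution whose limit depends on the layer's fan-in and fan-out; a plain-Python sketch:

```python
import math
import random

fan_in, fan_out = 14, 64                     # e.g. Dense(64, input_dim=14)

# Glorot/Xavier uniform: U(-limit, limit) with limit = sqrt(6/(fan_in+fan_out))
limit = math.sqrt(6.0 / (fan_in + fan_out))
weights = [random.uniform(-limit, limit) for _ in range(fan_in * fan_out)]
```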

\n

As for accessing the weights of each layer, it has already been answered here.

\n", "system": "" }, { "instruction": "How to standard scale a 3D matrix?", "input": "", "output": "

With only 3 lines of code...

\n\n
scaler = StandardScaler()\nX_train = scaler.fit_transform(X_train.reshape(-1, X_train.shape[-1])).reshape(X_train.shape)\nX_test = scaler.transform(X_test.reshape(-1, X_test.shape[-1])).reshape(X_test.shape)\n
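What the reshape trick computes, in numpy only (channel-last 3D data assumed): the statistics are taken per feature, pooling samples and timesteps together:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 5, 3))                  # (samples, timesteps, features)

flat = X.reshape(-1, X.shape[-1])          # (samples*timesteps, features)
mean, std = flat.mean(axis=0), flat.std(axis=0)
X_scaled = (X - mean) / std                # same result as the scaler above
```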
\n", "system": "" }, { "instruction": "Keras Tokenizer num_words doesn't seem to work", "input": "", "output": "

There is nothing wrong in what you are doing. word_index is computed the same way no matter how many most-frequent words you will use later (as you may see here). So when you call any transformative method, Tokenizer will use only the three most common words, while at the same time it keeps the counter of all words, even though it's obvious that it will not use it later.
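A rough pure-Python mimic of that behaviour (my own toy cut-off convention, not Keras's actual implementation): the index ranks every word, while the transform keeps only the top-ranked ones:

```python
from collections import Counter

texts = ["the cat sat on the mat the end"]
counts = Counter(" ".join(texts).split())

# Like Tokenizer.word_index: EVERY word gets a rank, most frequent first.
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

# Toy transform keeping only the `top` most frequent words.
top = 2
sequence = [word_index[w] for w in texts[0].split() if word_index[w] <= top]
```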

\n", "system": "" }, { "instruction": "How to Implement the Conv1DTranspose in keras?", "input": "", "output": "

Use the Keras backend to adapt the input tensor for a 2D transpose convolution. Avoid explicit transpose operations, as they consume a lot of time.

\n\n
import keras.backend as K\nfrom keras.layers import Conv2DTranspose, Lambda\n\n\ndef Conv1DTranspose(input_tensor, filters, kernel_size, strides=2, padding='same'):\n    \"\"\"\n        input_tensor: tensor, with the shape (batch_size, time_steps, dims)\n        filters: int, output dimension, i.e. the output tensor will have the shape of (batch_size, time_steps, filters)\n        kernel_size: int, size of the convolution kernel\n        strides: int, convolution step size\n        padding: 'same' | 'valid'\n    \"\"\"\n    x = Lambda(lambda x: K.expand_dims(x, axis=2))(input_tensor)\n    x = Conv2DTranspose(filters=filters, kernel_size=(kernel_size, 1), strides=(strides, 1), padding=padding)(x)\n    x = Lambda(lambda x: K.squeeze(x, axis=2))(x)\n    return x\n
\n", "system": "" }, { "instruction": "How to save Scikit-Learn-Keras Model into a Persistence File (pickle/hd5/json/yaml)", "input": "", "output": "

Edit 1 : Original answer about saving model

\n

With HDF5 :

\n
# saving model\njson_model = model_tt.model.to_json()\nopen('model_architecture.json', 'w').write(json_model)\n# saving weights\nmodel_tt.model.save_weights('model_weights.h5', overwrite=True)\n\n\n# loading model\nfrom keras.models import model_from_json\n\nmodel = model_from_json(open('model_architecture.json').read())\nmodel.load_weights('model_weights.h5')\n\n# dont forget to compile your model\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\n
\n

Edit 2 : full code example with iris dataset

\n
# Train model and make predictions\nimport numpy\nimport pandas\nfrom keras.models import Sequential, model_from_json\nfrom keras.layers import Dense\nfrom keras.utils import np_utils\nfrom sklearn import datasets\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder\n\n# fix random seed for reproducibility\nseed = 7\nnumpy.random.seed(seed)\n\n# load dataset\niris = datasets.load_iris()\nX, Y, labels = iris.data, iris.target, iris.target_names\nX = preprocessing.scale(X)\n\n# encode class values as integers\nencoder = LabelEncoder()\nencoder.fit(Y)\nencoded_Y = encoder.transform(Y)\n\n# convert integers to dummy variables (i.e. one hot encoded)\ny = np_utils.to_categorical(encoded_Y)\n\ndef build_model():\n    # create model\n    model = Sequential()\n    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))\n    model.add(Dense(3, init='normal', activation='sigmoid'))\n    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model\n\ndef save_model(model):\n    # saving model\n    json_model = model.to_json()\n    open('model_architecture.json', 'w').write(json_model)\n    # saving weights\n    model.save_weights('model_weights.h5', overwrite=True)\n\ndef load_model():\n    # loading model\n    model = model_from_json(open('model_architecture.json').read())\n    model.load_weights('model_weights.h5')\n    model.compile(loss='categorical_crossentropy', optimizer='adam')\n    return model\n\n\nX_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=seed)\n\n# build\nmodel = build_model()\nmodel.fit(X_train, Y_train, nb_epoch=200, batch_size=5, verbose=0)\n\n# save\nsave_model(model)\n\n# load\nmodel = load_model()\n\n# predictions\npredictions = model.predict_classes(X_test, verbose=0)\nprint(predictions)\n# reverse encoding\nfor pred in predictions:\n    print(labels[pred])\n
\n

Please note that I used Keras only, not the wrapper. The wrapper only adds complexity to something simple. Also, the code is deliberately not refactored, so you can see the whole picture.

\n

Also, you said you want to output 1 or 0. That is not possible with this dataset because you have 3 output dims and classes (Iris-setosa, Iris-versicolor, Iris-virginica). If you had only 2 classes, your output dim would be 1 and the classes would be 0 or 1, using a sigmoid output function.

\n", "system": "" }, { "instruction": "What does initial_epoch in Keras mean?", "input": "", "output": "

Since some optimizers set some of their internal values (e.g. learning rate) using the current epoch value, and you may also have (custom) callbacks that depend on the current epoch, the initial_epoch argument lets you specify the epoch value to start from when training.

\n\n

As stated in the documentation, this is mostly useful when you have trained your model for some epochs, say 10, and then saved it, and now you want to load it and resume training for another 10 epochs without disrupting the state of epoch-dependent objects (e.g. optimizer). So you would set initial_epoch=10 (i.e. we have trained the model for 10 epochs) and epochs=20 (not 10, since the total number of epochs to reach is 20), and then everything resumes as if you had initially trained the model for 20 epochs in one single training session.
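As a concrete (hypothetical) example of an epoch-dependent object: suppose a schedule halves the learning rate every 10 epochs. Resuming with initial_epoch=10 makes the resumed session see epoch numbers 10, 11, ..., so the schedule continues where it left off instead of restarting from the initial rate:

```python
def lr_schedule(epoch, base=1e-3, decay=0.5, step=10):
    # Hypothetical step decay: halve the learning rate every `step` epochs.
    return base * decay ** (epoch // step)

first_session = [lr_schedule(e) for e in range(10)]   # epochs 0..9
resumed = [lr_schedule(e) for e in range(10, 20)]     # initial_epoch=10
```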

\n\n
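The arithmetic is easy to get wrong, so here is a minimal sketch (plain Python, with hypothetical numbers) of which epoch indices a resumed fit with <code>initial_epoch=10, epochs=20</code> actually runs:

```python
# Hypothetical scenario: the model was already trained for 10 epochs.
initial_epoch = 10  # epochs already completed before resuming
epochs = 20         # total epoch count to reach, NOT the number of extra epochs

# Keras effectively iterates `for epoch in range(initial_epoch, epochs)`.
epochs_run = list(range(initial_epoch, epochs))
print(epochs_run[0], len(epochs_run))  # -> 10 10: resumes at epoch 10, runs 10 more
```

In the real call this corresponds to something like <code>model.fit(x, y, initial_epoch=10, epochs=20)</code>.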

However, note that when using built-in optimizers of Keras you don't need to use initial_epoch, since they store and update their state internally (without considering the value of current epoch) and also when saving a model the state of the optimizer will be stored as well.

\n", "system": "" }, { "instruction": "Accessing validation data within a custom callback", "input": "", "output": "

You can iterate directly over self.validation_data to aggregate all the validation data at the end of each epoch. If you want to calculate precision, recall and F1 across the complete validation dataset:

\n\n
# Validation metrics callback: validation precision, recall and F1\n# Some of the code was adapted from https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2\nclass Metrics(callbacks.Callback):\n\n    def on_train_begin(self, logs={}):\n        self.val_f1s = []\n        self.val_recalls = []\n        self.val_precisions = []\n\n    def on_epoch_end(self, epoch, logs):\n        # 5.4.1 For each validation batch\n        for batch_index in range(0, len(self.validation_data)):\n            # 5.4.1.1 Get the batch target values\n            temp_targ = self.validation_data[batch_index][1]\n            # 5.4.1.2 Get the batch prediction values\n            temp_predict = (np.asarray(self.model.predict(\n                                self.validation_data[batch_index][0]))).round()\n            # 5.4.1.3 Append them to the corresponding output objects\n            if(batch_index == 0):\n                val_targ = temp_targ\n                val_predict = temp_predict\n            else:\n                val_targ = np.vstack((val_targ, temp_targ))\n                val_predict = np.vstack((val_predict, temp_predict))\n\n        val_f1 = round(f1_score(val_targ, val_predict), 4)\n        val_recall = round(recall_score(val_targ, val_predict), 4)\n        val_precis = round(precision_score(val_targ, val_predict), 4)\n\n        self.val_f1s.append(val_f1)\n        self.val_recalls.append(val_recall)\n        self.val_precisions.append(val_precis)\n\n        # Add custom metrics to the logs, so that we can use them with\n        # EarlyStop and csvLogger callbacks\n        logs[\"val_f1\"] = val_f1\n        logs[\"val_recall\"] = val_recall\n        logs[\"val_precis\"] = val_precis\n\n        print(\"\u2014 val_f1: {} \u2014 val_precis: {} \u2014 val_recall {}\".format(\n                 val_f1, val_precis, val_recall))\n        return\n\nvalid_metrics = Metrics()\n
\n\n

Then you can add valid_metrics to the callback argument:

\n\n
your_model.fit_generator(..., callbacks = [valid_metrics])\n
\n\n

Be sure to put it at the beginning of the callbacks list if you want other callbacks to use these measures.

\n", "system": "" }, { "instruction": "Defining model in keras (include_top = True)", "input": "", "output": "

Most of these models are a series of convolutional layers followed by one or a few dense (or fully connected) layers.

\n\n

<code>include_top</code> lets you select whether you want the final dense layers or not.

\n\n\n\n

About the weights:

\n\n\n\n

Because of this, removing the final dense layers allows you to define the input size (see in documentation). (And the output size will increase/decrease accordingly).

\n\n

But you lose the interpretation/classification layers. (You can add your own, depending on your task)

\n\n
\n\n

Extra info on Poolings and Flatten

\n\n

Global poolings:

\n\n

After the last convolutional layers, your outputs are still like images. They have shape (images, X, Y, channels), where X and Y are spatial dimensions of a 2D image.

\n\n

When your model has GlobalMaxPooling2D or GlobalAveragePooling2D, it will eliminate the spatial dimensions. With Max it will take only the highest value pixel for each channel. With Average it will take the mean value of each channel. The result will be just (images, channels), without spatial dimensions anymore.

\n\n\n\n

Flatten

\n\n

With flatten, the spatial dimensions are not lost, but they are transformed into features: from (images, X, Y, channels) to (images, X*Y*channels).

\n\n

This will require fixed input shapes, because X and Y must be defined, and if you add Dense layers after the flatten, the Dense layer will need a fixed number of features.

\n", "system": "" }, { "instruction": "Keras: model.evaluate vs model.predict accuracy difference in multi-class NLP task", "input": "", "output": "

I have found the problem. <code>metrics=['accuracy']</code> calculates accuracy automatically from the cost function, so using <code>binary_crossentropy</code> reports binary accuracy, not categorical accuracy. Using <code>categorical_crossentropy</code> automatically switches to categorical accuracy, and now it is the same as the accuracy calculated manually using <code>model1.predict()</code>. Yu-Yang was right to point out the cost function and activation function for a multi-class problem.

\n\n
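The difference is easy to reproduce with plain NumPy (a sketch of the two metric definitions, not the exact Keras code): binary accuracy rounds every entry and compares element-wise, while categorical accuracy compares only the argmax of each row:

```python
import numpy as np

# One-hot targets for a 3-class problem and some softmax-like predictions
y_true = np.array([[1., 0., 0.],
                   [0., 1., 0.]])
y_pred = np.array([[0.4, 0.3, 0.3],
                   [0.2, 0.7, 0.1]])

# binary_accuracy: round every entry and compare element-wise
binary_acc = np.mean(np.equal(y_true, np.round(y_pred)))

# categorical_accuracy: compare the argmax of each row
categorical_acc = np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))

print(binary_acc, categorical_acc)  # both rows are classified correctly,
                                    # yet binary accuracy is only 5/6
```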

P.S: One can get both categorical and binary accuracy by using metrics=['binary_accuracy', 'categorical_accuracy']

\n", "system": "" }, { "instruction": "Keras: "RuntimeError: Failed to import pydot." after installing graphviz and pydot", "input": "", "output": "

The error message is a bit misleading, as you can see here. The problem is that graphviz is not installed.

\n\n

But you mention that graphviz was installed using pip. This is also misleading, since that graphviz package is just a python wrapper, and the graphviz binaries have to be installed separately for the python wrapper to work.

\n", "system": "" }, { "instruction": "keras: what is the difference between model.predict and model.predict_proba", "input": "", "output": "

predict

\n\n
predict(self, x, batch_size=32, verbose=0)\n
\n\n

Generates output predictions for the input samples, processing the samples in a batched way.

\n\n

Arguments

\n\n
x: the input data, as a Numpy array.\nbatch_size: integer.\nverbose: verbosity mode, 0 or 1.\n
\n\n

Returns

\n\n
A Numpy array of predictions.\n
\n\n

predict_proba

\n\n
predict_proba(self, x, batch_size=32, verbose=1)\n
\n\n

Generates class probability predictions for the input samples batch by batch.

\n\n

Arguments

\n\n
x: input data, as a Numpy array or list of Numpy arrays (if the model has multiple inputs).\nbatch_size: integer.\nverbose: verbosity mode, 0 or 1.\n
\n\n

Returns

\n\n
A Numpy array of probability predictions.\n
\n\n
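As a plain NumPy sketch of the relationship (assuming a softmax output): the probabilities returned by <code>predict</code> can be reduced to hard class labels with an argmax, which is essentially what <code>predict_classes</code> does:

```python
import numpy as np

# Probabilities as would be returned by predict / predict_proba on a 3-class model
proba = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

# predict_classes is essentially an argmax over the class axis
classes = proba.argmax(axis=-1)
print(classes)  # -> [1 0]
```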

Edit: In recent versions of Keras, <code>predict</code> and <code>predict_proba</code> are the same, i.e. both return probabilities. To get the class labels use <code>predict_classes</code>. The documentation is not updated. (adapted from Avijit Dasgupta's comment)

\n", "system": "" }, { "instruction": "I am having trouble with this error (-215:Assertion failed) !ssize.empty() in function 'resize' in OpenCV", "input": "", "output": "

I was supposed to add a try/except block so my code could bypass "ugly" (unreadable) images:

\n
try:\n    path=os.path.join(mypath, n)\n    img=cv2.imread(path, cv2.IMREAD_GRAYSCALE)\n    img=cv2.resize(img, (img_rows, img_cols))\n\nexcept Exception as e:\n    print(str(e))\n
\n

cv2.resize() is where the error is raised: cv2.imread() returns None for an image it cannot read, and resize() cannot handle None, hence the !ssize.empty() assertion failure.

\n", "system": "" }, { "instruction": "What is the difference between the terms accuracy and validation accuracy", "input": "", "output": "

When training a machine learning model, one of the main things you want to avoid is overfitting. This is when your model fits the training data well but is unable to generalize and make accurate predictions on data it hasn't seen before.

\n

To find out if their model is overfitting, data scientists use a technique called cross-validation, where they split their data into two parts - the training set, and the validation set. The training set is used to train the model, while the validation set is only used to evaluate the model's performance.

\n

Metrics on the training set let you see how your model is progressing in terms of its training, but it's metrics on the validation set that let you get a measure of the quality of your model - how well it's able to make new predictions based on data it hasn't seen before.

\n

With this in mind, loss and acc are measures of loss and accuracy on the training set, while val_loss and val_acc are measures of loss and accuracy on the validation set.

\n

At the moment your model has an accuracy of ~86% on the training set and ~84% on the validation set. This means that you can expect your model to perform with ~84% accuracy on new data.

\n

I notice that as your epochs go from 23 to 25, your <code>acc</code> metric increases, while your <code>val_acc</code> metric decreases. This means that your model is fitting the training set better, but is losing its ability to predict on new data, indicating that it is starting to fit on noise and beginning to overfit.

\n
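This decision can be sketched with plain Python on hypothetical history values (in Keras you would read <code>acc</code> and <code>val_acc</code> from the History object returned by <code>fit</code>): keep the epoch where validation accuracy peaks, since training accuracy keeps rising afterwards.

```python
# Hypothetical per-epoch metrics; real values come from model.fit(...).history
acc     = [0.80, 0.83, 0.85, 0.86, 0.88]  # keeps improving on the training set
val_acc = [0.79, 0.82, 0.84, 0.83, 0.81]  # peaks, then degrades: overfitting

# The epoch you would want to keep (what EarlyStopping/ModelCheckpoint automate)
best_epoch = max(range(len(val_acc)), key=lambda i: val_acc[i])
print(best_epoch)  # -> 2
```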

So that is a quick explanation on validation metrics and how to interpret them.

\n", "system": "" }, { "instruction": "How to setup 1D-Convolution and LSTM in Keras", "input": "", "output": "

If you want to predict one value for each timestep, two slightly different solutions come to my mind:

\n\n

1) Remove the <code>MaxPooling1D</code> layer, add the <code>padding='same'</code> argument to the <code>Conv1D</code> layer, and add <code>return_sequences=True</code> to the <code>LSTM</code> layer so that the LSTM returns the output of each timestep:

\n\n
from keras.layers import Input, Dense, LSTM, MaxPooling1D, Conv1D\nfrom keras.models import Model\n\ninput_layer = Input(shape=(400, 16))\nconv1 = Conv1D(filters=32,\n               kernel_size=8,\n               strides=1,\n               activation='relu',\n               padding='same')(input_layer)\nlstm1 = LSTM(32, return_sequences=True)(conv1)\noutput_layer = Dense(1, activation='sigmoid')(lstm1)\nmodel = Model(inputs=input_layer, outputs=output_layer)\n\nmodel.summary()\n
\n\n

The model summary would be:

\n\n
Layer (type)                 Output Shape              Param #   \n=================================================================\ninput_4 (InputLayer)         (None, 400, 16)           0         \n_________________________________________________________________\nconv1d_4 (Conv1D)            (None, 400, 32)           4128      \n_________________________________________________________________\nlstm_4 (LSTM)                (None, 400, 32)           8320      \n_________________________________________________________________\ndense_4 (Dense)              (None, 400, 1)            33        \n=================================================================\nTotal params: 12,481\nTrainable params: 12,481\nNon-trainable params: 0\n_________________________________________________________________\n
\n\n

2) Just change the number of units in the Dense layer to 400 and reshape y to (n_samples, n_timesteps):

\n\n
from keras.layers import Input, Dense, LSTM, MaxPooling1D, Conv1D\nfrom keras.models import Model\n\ninput_layer = Input(shape=(400, 16))\nconv1 = Conv1D(filters=32,\n               kernel_size=8,\n               strides=1,\n               activation='relu')(input_layer)\npool1 = MaxPooling1D(pool_size=4)(conv1)\nlstm1 = LSTM(32)(pool1)\noutput_layer = Dense(400, activation='sigmoid')(lstm1)\nmodel = Model(inputs=input_layer, outputs=output_layer)\n\nmodel.summary()\n
\n\n

The model summary would be:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ninput_6 (InputLayer)         (None, 400, 16)           0         \n_________________________________________________________________\nconv1d_6 (Conv1D)            (None, 393, 32)           4128      \n_________________________________________________________________\nmax_pooling1d_5 (MaxPooling1 (None, 98, 32)            0         \n_________________________________________________________________\nlstm_6 (LSTM)                (None, 32)                8320      \n_________________________________________________________________\ndense_6 (Dense)              (None, 400)               13200     \n=================================================================\nTotal params: 25,648\nTrainable params: 25,648\nNon-trainable params: 0\n_________________________________________________________________\n
\n\n

Don't forget that in both cases you must use 'binary_crossentropy' (not 'categorical_crossentropy') as the loss function. I expect this solution to have a lower accuracy than the solution #1; but you must experiment with both and try to change the parameters since it entirely depends on the specific problem you are trying to solve and the nature of the data you have.

\n\n
\n\n

Update:

\n\n

You asked for a convolution layer that only covers one timestep and k adjacent features. Yes, you can do it using a Conv2D layer:

\n\n
# first add an axis to your data\nX = np.expand_dims(X, axis=-1)   # now X has a shape of (n_samples, n_timesteps, n_feats, 1)\n\n# adjust input layer shape ...\nconv2 = Conv2D(n_filters, (1, k), ...)   # covers one timestep and k features\n# adjust other layers according to the output of convolution layer...\n
\n\n

Although I have no idea why you are doing this, to use the output of the convolution layer (which has shape <code>(?, n_timesteps, n_features, n_filters)</code>), one solution is to use an LSTM layer wrapped inside a <code>TimeDistributed</code> layer. Another solution is to flatten the last two axes.

\n", "system": "" }, { "instruction": "How to use fit_generator with multiple inputs", "input": "", "output": "

Try this generator:

\n\n
def generator_two_img(X1, X2, y, batch_size):\n    genX1 = gen.flow(X1, y,  batch_size=batch_size, seed=1)\n    genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        yield [X1i[0], X2i[0]], X1i[1]\n
\n

Generator for 3 inputs:

\n
def generator_three_img(X1, X2, X3, y, batch_size):\n    genX1 = gen.flow(X1, y,  batch_size=batch_size, seed=1)\n    genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)\n    genX3 = gen.flow(X3, y, batch_size=batch_size, seed=1)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        X3i = genX3.next()\n        yield [X1i[0], X2i[0], X3i[0]], X1i[1]\n
\n

EDIT (add generator, output image and numpy array, and target)

\n
#X1 is an image, y is the target, X2 is a numpy array - other data input        \ndef gen_flow_for_two_inputs(X1, X2, y):\n    genX1 = gen.flow(X1,y,  batch_size=batch_size,seed=666)\n    genX2 = gen.flow(X1,X2, batch_size=batch_size,seed=666)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        #Assert arrays are equal - this was for peace of mind, but slows down training\n        #np.testing.assert_array_equal(X1i[0],X2i[0])\n        yield [X1i[0], X2i[1]], X1i[1]\n
\n", "system": "" }, { "instruction": "How to output per-class accuracy in Keras\uff1f", "input": "", "output": "

Precision & recall are more useful measures for multi-class classification (see definitions). Following the Keras MNIST CNN example (10-class classification), you can get the per-class measures using classification_report from sklearn.metrics:

\n\n\n\n
from sklearn.metrics import classification_report\nimport numpy as np\n\nY_test = np.argmax(y_test, axis=1) # Convert one-hot to index\ny_pred = model.predict_classes(x_test)\nprint(classification_report(Y_test, y_pred))\n
\n\n

Here is the result:

\n\n
         precision    recall  f1-score   support\n\n      0       0.99      1.00      1.00       980\n      1       0.99      0.99      0.99      1135\n      2       1.00      0.99      0.99      1032\n      3       0.99      0.99      0.99      1010\n      4       0.98      1.00      0.99       982\n      5       0.99      0.99      0.99       892\n      6       1.00      0.99      0.99       958\n      7       0.97      1.00      0.99      1028\n      8       0.99      0.99      0.99       974\n      9       0.99      0.98      0.99      1009\n\navg / total   0.99      0.99      0.99     10000\n
\n", "system": "" }, { "instruction": "How to train the network only on one output when there are multiple outputs?", "input": "", "output": "
\n

I only want to train the network on output y2.

\n
\n\n

Based on Keras functional API guide you can achieve that with

\n\n
model1 = Model(input=x, output=[y2,y3])   \nmodel1.compile(optimizer='sgd', loss=custom_loss_function,\n                  loss_weights=[1., 0.0])\n
\n\n
\n

What is the shape/structure of the y_pred and y_true argument in loss\n function when multiple outputs are used? Can I access them as above?\n Is it y_pred[0] or y_pred[:,0]

\n
\n\n

In Keras multi-output models, the loss function is applied to each output separately. In pseudo-code:

\n\n
loss = sum( [ loss_function( output_true, output_pred ) for ( output_true, output_pred ) in zip( outputs_data, outputs_model ) ] )\n
\n\n
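A minimal NumPy sketch of that pseudo-code, using MSE as a stand-in loss and the <code>loss_weights</code> from above (1.0 for y2, 0.0 for y3), shows that the second output contributes nothing to the total:

```python
import numpy as np

def mse(y_true, y_pred):
    # stand-in per-output loss function
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

# Weight 1.0 for y2 and 0.0 for y3, as in the compile() call
loss_weights = [1.0, 0.0]
outputs_true = [np.array([1.0, 2.0]), np.array([5.0, 5.0])]
outputs_pred = [np.array([1.0, 4.0]), np.array([0.0, 0.0])]

# Weighted sum over the per-output losses
total_loss = sum(w * mse(t, p)
                 for w, t, p in zip(loss_weights, outputs_true, outputs_pred))
print(total_loss)  # -> 2.0: only the first output's MSE counts
```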

Applying a single loss function jointly across multiple outputs does not seem to be available. One could probably achieve that by incorporating the loss computation as a layer of the network.

\n", "system": "" }, { "instruction": "Keras error : Expected to see 1 array", "input": "", "output": "

Your error comes from the fact that your <code>X</code> for some reason wasn't transformed to a <code>numpy.array</code>. In this case your <code>X</code> is treated as a list of rows, which is the reason behind your error message (Keras expected one input array instead of a list with as many elements as rows). Transformation:

\n\n
X = numpy.array(X)\nY = numpy.array(Y)\n
\n\n
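A quick way to see the difference (the 200 features match the <code>input_shape=(200,)</code> discussed in this question; the sample values are made up): a list of rows is a Python object of length n, while the converted array is a single input of shape (n_samples, n_features):

```python
import numpy as np

# A plain Python list of rows (hypothetical values, 200 features per row)
X_list = [[0.1] * 200, [0.2] * 200, [0.3] * 200]

# Keras sees a list as multiple separate inputs; a single 2-D array fixes that
X = np.array(X_list)
print(type(X_list), len(X_list))  # a list of 3 row objects
print(X.shape)                    # -> (3, 200): one input array
```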

I would check a data loading process because something might go wrong there.

\n\n

UPDATE:

\n\n

As it was mentioned in a comment - input_shape need to be changed to input_dim.

\n\n

UPDATE 2:

\n\n

In order to keep input_shape one should change to it to input_shape=(200,).

\n", "system": "" }, { "instruction": "TensorFlow/Keras multi-threaded model fitting", "input": "", "output": "

Tensorflow Graphs are not threadsafe (see https://www.tensorflow.org/api_docs/python/tf/Graph) and when you create a new Tensorflow Session, it by default uses the default graph.

\n\n

You can get around this by creating a new session with a new graph in your parallelized function and constructing your keras model there.

\n\n

Here is some code that creates and fits a model on each available gpu in parallel:

\n\n
import concurrent.futures\nimport numpy as np\n\nimport keras.backend as K\nfrom keras.layers import Dense\nfrom keras.models import Sequential\n\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\n\ndef get_available_gpus():\n    local_device_protos = device_lib.list_local_devices()\n    return [x.name for x in local_device_protos if x.device_type == 'GPU']\n\nxdata = np.random.randn(100, 8)\nytrue = np.random.randint(0, 2, 100)\n\ndef fit(gpu):\n    with tf.Session(graph=tf.Graph()) as sess:\n        K.set_session(sess)\n        with tf.device(gpu):\n            model = Sequential()\n            model.add(Dense(12, input_dim=8, activation='relu'))\n            model.add(Dense(8, activation='relu'))\n            model.add(Dense(1, activation='sigmoid'))\n\n            model.compile(loss='binary_crossentropy', optimizer='adam')\n            model.fit(xdata, ytrue, verbose=0)\n\n            return model.evaluate(xdata, ytrue, verbose=0)\n\ngpus = get_available_gpus()\nwith concurrent.futures.ThreadPoolExecutor(len(gpus)) as executor:\n    results = [x for x in executor.map(fit, gpus)]\nprint('results: ', results)\n
\n", "system": "" }, { "instruction": "TypeError: Unable to convert function return value to a Python type! The signature was () -> handle", "input": "", "output": "

Running pip3 install numpy --upgrade solved this issue for me.

\n", "system": "" }, { "instruction": "Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?", "input": "", "output": "

Categorical Cross-Entropy loss or Softmax Loss is a Softmax activation plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for multi-class classification.

\n

What you want is multi-label classification, so you will use Binary Cross-Entropy Loss or Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That\u2019s why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class.

\n
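A NumPy sketch of this loss (not the exact TensorFlow implementation) makes both points concrete: the loss is computed element-wise per class, and an optional <code>pos_weight</code> (an assumption here, mirroring weighted sigmoid cross-entropy) scales only the positive terms:

```python
import numpy as np

def sigmoid_bce(y_true, p, pos_weight=1.0):
    # element-wise binary cross-entropy; pos_weight > 1 penalizes
    # missed positives more, which helps with rare positive labels
    return -(pos_weight * y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1., 0., 0., 1.])      # multi-label target over 4 classes
p      = np.array([0.9, 0.1, 0.2, 0.6])  # independent per-class sigmoid outputs

per_class = sigmoid_bce(y_true, p)             # one loss value per class
weighted  = sigmoid_bce(y_true, p, pos_weight=3.0)
```

Each entry of <code>per_class</code> depends only on its own class, so changing one class's prediction leaves the other losses untouched.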

Now for handling class imbalance, you can use weighted Sigmoid Cross-Entropy loss. So you will penalize for wrong prediction based on the number/ratio of positive examples.

\n", "system": "" }, { "instruction": "Save model every 10 epochs tensorflow.keras v2", "input": "", "output": "

When using <code>tf.keras.callbacks.ModelCheckpoint</code>, set <code>save_freq='epoch'</code> and pass the extra argument <code>period=10</code>.

\n\n

Although this behaviour is not documented in the official docs, that is the way to do it (the docs do note that you can pass <code>period</code>; they just don't explain what it does).

\n", "system": "" }, { "instruction": "Can't import tensorflow.keras in VS Code", "input": "", "output": "

The imports that were causing the issue for me:

\n
from tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Dense\n
\n

The way I resolved it:

\n
from tensorflow import keras\nfrom keras.models import Model\nfrom keras.layers import Dense\n
\n", "system": "" }, { "instruction": "The added layer must be an instance of class Layer. Found: <tensorflow.python.keras.engine.input_layer.InputLayer>", "input": "", "output": "

This won't work because a tensorflow.keras layer is getting added to a keras Model.

\n\n
vgg_model = tensorflow.keras.applications.vgg16.VGG16()\nmodel = keras.Sequential()\nmodel.add(vgg_model.layers[0])\n
\n\n

Instantiate tensorflow.keras.Sequential(). This will work.

\n\n
model = tensorflow.keras.Sequential()\nmodel.add(vgg_model.layers[0])\n
\n", "system": "" }, { "instruction": "Keras Sequential model with multiple inputs", "input": "", "output": "

To solve this problem you have two options.

\n\n

1. Using a sequential model

\n\n

You can concatenate both arrays into one before feeding them to the network. Let's assume the two arrays have a shape of <code>(Number_data_points, )</code>; now the arrays can be merged using the <code>numpy.stack</code> method.

\n\n
merged_array = np.stack([array_1, array_2], axis=1)\n\n
\n\n
model0 = keras.Sequential([\nkeras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True),\nkeras.layers.Dense(1, activation=keras.activations.relu, use_bias=True),\n])\n\nmodel0.fit(merged_array,output, batch_size=16, epochs=100)\n\n
\n\n

2. Using Functional API.

\n\n

This is the recommended way when there are multiple inputs to the model.

\n\n
input1 = keras.layers.Input(shape=(1, ))\ninput2 = keras.layers.Input(shape=(1,))\nmerged = keras.layers.Concatenate(axis=1)([input1, input2])\ndense1 = keras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True)(merged)\noutput = keras.layers.Dense(1, activation=keras.activations.relu, use_bias=True)(dense1)\nmodel10 = keras.models.Model(inputs=[input1, input2], outputs=output)\n
\n\n

Now you can fit the model using the two arrays as separate inputs:

\n\n
model10.fit([array_1, array_2], output, batch_size=16, epochs=100)\n\n
\n", "system": "" }, { "instruction": "Tensor is not an element of this graph; deploying Keras model", "input": "", "output": "

Flask uses multiple threads. The problem you are running into is because the tensorflow model is not loaded and used in the same thread. One workaround is to force tensorflow to use the global default graph.

\n\n

Add this after you load your model

\n\n
global graph\ngraph = tf.get_default_graph() \n
\n\n

And inside your predict

\n\n
with graph.as_default():\n    y_hat = keras_model_loaded.predict(predict_request, batch_size=1, verbose=1)\n
\n", "system": "" }, { "instruction": "What function defines accuracy in Keras when the loss is mean squared error (MSE)?", "input": "", "output": "

There are at least two separate issues with your question.

\n

The first one should be clear by now from the comments by Dr. Snoopy and the other answer: accuracy is meaningless in a regression problem, such as yours; see also the comment by patyork in this Keras thread. For good or bad, the fact is that Keras will not "protect" you or any other user from putting not-meaningful requests in your code, i.e. you will not get any error, or even a warning, that you are attempting something that does not make sense, such as requesting the accuracy in a regression setting.

\n

Having clarified that, the other issue is:

\n

Since Keras does indeed return an "accuracy", even in a regression setting, what exactly is it and how is it calculated?

\n

To shed some light here, let's turn to a public dataset (since you do not provide any details about your data), namely the Boston house price dataset (saved locally as housing.csv), and run a simple experiment as follows:

\n\n
import numpy as np\nimport pandas\nimport keras\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\n# load dataset\ndataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)\ndataset = dataframe.values\n# split into input (X) and output (Y) variables\nX = dataset[:,0:13]\nY = dataset[:,13]\n\nmodel = Sequential()\nmodel.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(1, kernel_initializer='normal'))\n# Compile model asking for accuracy, too:\nmodel.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])\n\nmodel.fit(X, Y,\n     batch_size=5,\n     epochs=100,\n     verbose=1)\n
\n

As in your case, the model fitting history (not shown here) shows a decreasing loss, and an accuracy roughly increasing. Let's evaluate now the model performance in the same training set, using the appropriate Keras built-in function:

\n
score = model.evaluate(X, Y, verbose=0)\nscore\n# [16.863721372581754, 0.013833992168483997]\n
\n

The exact contents of the score array depend on what exactly we have requested during model compilation; in our case here, the first element is the loss (MSE), and the second one is the "accuracy".

\n

At this point, let us have a look at the definition of Keras binary_accuracy in the metrics.py file:

\n
def binary_accuracy(y_true, y_pred):\n    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)\n
\n

So, after Keras has generated the predictions y_pred, it first rounds them, and then checks to see how many of them are equal to the true labels y_true, before getting the mean.

\n

Let's replicate this operation using plain Python & Numpy code in our case, where the true labels are Y:

\n
y_pred = model.predict(X)\nl = len(Y)\nacc = sum([np.round(y_pred[i])==Y[i] for i in range(l)])/l\nacc\n# array([0.01383399])\n
\n

Well, bingo! This is actually the same value returned by score[1] above...

\n

To make a long story short: since you (erroneously) request metrics=['accuracy'] in your model compilation, Keras will do its best to satisfy you, and will return some "accuracy" indeed, calculated as shown above, despite this being completely meaningless in your setting.

\n
\n

There are quite a few settings where Keras, under the hood, performs rather meaningless operations without giving any hint or warning to the user; two of them I have happened to encounter are:

\n\n", "system": "" }, { "instruction": "Why should we normalize data for deep learning in Keras?", "input": "", "output": "

Normalization is a generic concept not limited only to deep learning or to Keras.

\n\n

Why to normalize?

\n\n

Let me take a simple logistic regression example, which is easy to understand and makes normalization straightforward to explain. Assume we are trying to predict whether a customer should be given a loan or not. Among the many available independent variables, let's just consider Age and Income. Let the equation be of the form:

\n\n
Y = weight_1 * (Age) + weight_2 * (Income) + some_constant\n
\n\n

Just for the sake of explanation, let Age usually be in the range [0, 120] and assume Income is in the range [10000, 100000]. The scales of Age and Income are very different. If you use them as-is, the weights <code>weight_1</code> and <code>weight_2</code> may end up biased: <code>weight_2</code> might give far more importance to Income as a feature than <code>weight_1</code> gives to Age, merely because of Income's larger scale. To bring them to a common scale, we can normalize them. For example, we can map all ages into the range [0, 1] and all incomes into the range [0, 1]. Now we can say that Age and Income are given equal importance as features.

\n\n
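For instance, a simple min-max scaling (one common choice; scikit-learn's MinMaxScaler does the same thing) maps both features into [0, 1]:

```python
import numpy as np

# Made-up samples on very different scales
age    = np.array([20., 35., 50., 120.])
income = np.array([10000., 25000., 60000., 100000.])

def min_max(x):
    # map values linearly into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

age_n, income_n = min_max(age), min_max(income)
print(age_n.min(), age_n.max())  # -> 0.0 1.0, same for income_n
```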

Does Normalization always increase the accuracy?

\n\n

Apparently, no. It is not necessary that normalization always increases accuracy. It may or may not; you never really know until you implement it. Again, it depends on at which stage of your training you apply normalization, on whether you apply normalization after every activation, etc.

\n\n

As the range of the feature values gets narrowed down to a particular interval because of normalization, it's easier to perform computations over that smaller range of values. So, usually, the model gets trained a bit faster.

\n\n

Regarding the number of epochs, accuracy usually increases with number of epochs provided that your model doesn't start over-fitting.

\n\n
\n\n

A very good explanation for Normalization/Standardization and related terms is here.

\n", "system": "" }, { "instruction": "What is the difference between performing upsampling together with strided transpose convolution and transpose convolution with stride 1 only?", "input": "", "output": "

Here and here you can find a really nice explanation of how transposed convolutions work. To sum up both of these approaches:

\n\n
    \n
  1. In your first approach, you are first upsampling your feature map:

    \n\n
    [[1, 2], [3, 4]] -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]\n
    \n\n

    and then you apply a classical convolution (as Conv2DTranspose with stride=1 and padding='same' is equivalent to Conv2D).

  2. \n
  3. In your second approach you are first un(max)pooling your feature map:

    \n\n
    [[1, 2], [3, 4]] -> [[1, 0, 2, 0], [0, 0, 0, 0], [3, 0, 4, 0], [0, 0, 0, 0]]\n
    \n\n

and then apply a classical convolution with your original <code>filter_size</code>, <code>filters</code>, etc.

    \n\n

    \"enter

  4. \n
\n\n
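Both operations are easy to reproduce with NumPy on the [[1, 2], [3, 4]] example above:

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# Approach 1: nearest-neighbour upsampling (each pixel repeated over a 2x2 block)
upsampled = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# Approach 2: un(max)pooling (each pixel placed top-left, zeros elsewhere)
unpooled = np.zeros((4, 4), dtype=x.dtype)
unpooled[::2, ::2] = x

print(upsampled)
print(unpooled)
```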

A fun fact: although these approaches are different, they share something in common. Transposed convolution is meant to approximate the gradient of convolution, so the first approach approximates the gradient of sum pooling whereas the second approximates the gradient of max pooling. This makes the first one produce slightly smoother results.

\n\n

Other reasons why you might see the first approach are:

\n\n\n", "system": "" }, { "instruction": "How can I assign a class_weight in Keras in a simple way?", "input": "", "output": "

The class_weight parameter of the fit() function is a dictionary mapping classes to a weight value.

\n\n

Let's say you have 500 samples of class 0 and 1500 samples of class 1; then you feed in <code>class_weight = {0: 3, 1: 1}</code>. That gives class 0 three times the weight of class 1.

\n\n

train_generator.classes gives you the proper class names for your weighting.

\n\n

If you want to calculate this programmatically you can use scikit-learn's <code>sklearn.utils.compute_class_weight()</code>.

\n\n

The function looks at the distribution of labels and produces weights to equally penalize under or over-represented classes in the training set.

\n\n
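The 'balanced' heuristic that <code>compute_class_weight</code> implements can be sketched in a few lines of NumPy; with the hypothetical 500/1500 split from above it yields the same 3:1 ratio:

```python
import numpy as np

# Hypothetical labels: 500 samples of class 0, 1500 samples of class 1
y = np.array([0] * 500 + [1] * 1500)

classes, counts = np.unique(y, return_counts=True)
# 'balanced' heuristic: n_samples / (n_classes * count_per_class)
weights = len(y) / (len(classes) * counts)
class_weight = dict(zip(classes.tolist(), weights.tolist()))
print(class_weight)  # class 0 gets three times the weight of class 1
```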

See also this useful thread here: https://github.com/fchollet/keras/issues/1875

\n\n

And this thread might also be of help: Is it possible to automatically infer the class_weight from flow_from_directory in Keras?

\n", "system": "" }, { "instruction": "How can I convert a trained Tensorflow model to Keras?", "input": "", "output": "

I think a Keras callback is also a solution.

\n\n

The ckpt file can be saved by TF with:

\n\n
saver = tf.train.Saver()\nsaver.save(sess, checkpoint_name)\n
\n\n

and to load the checkpoint in Keras, you need a callback class as follows:

\n\n
class RestoreCkptCallback(keras.callbacks.Callback):\n    def __init__(self, pretrained_file):\n        self.pretrained_file = pretrained_file\n        self.sess = keras.backend.get_session()\n        self.saver = tf.train.Saver()\n    def on_train_begin(self, logs=None):\n        if self.pretrained_file:\n            self.saver.restore(self.sess, self.pretrained_file)\n            print('load weights: OK.')\n
\n\n

Then in your keras script:

\n\n
 model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n restore_ckpt_callback = RestoreCkptCallback(pretrained_file='./XXXX.ckpt')\n model.fit(x_train, y_train, batch_size=128, epochs=20, callbacks=[restore_ckpt_callback])\n
\n\n

That should work fine.\nIt is easy to implement, and I hope it helps.

\n", "system": "" }, { "instruction": "Make predictions using a tensorflow graph from a keras model", "input": "", "output": "

@frankyjuang linked me to here

\n\n

https://github.com/amir-abdi/keras_to_tensorflow

\n\n

and combining this with code from

\n\n

https://github.com/metaflow-ai/blog/blob/master/tf-freeze/load.py

\n\n

and

\n\n

https://github.com/tensorflow/tensorflow/issues/675

\n\n

I have found a solution to both predicting using a tf graph and creating the jacobian function:

\n\n
import tensorflow as tf\nimport numpy as np\n\n# Create function to convert saved keras model to tensorflow graph\ndef convert_to_pb(weight_file,input_fld='',output_fld=''):\n\n    import os\n    import os.path as osp\n    from tensorflow.python.framework import graph_util\n    from tensorflow.python.framework import graph_io\n    from keras.models import load_model\n    from keras import backend as K\n\n\n    # weight_file is a .h5 keras model file\n    output_node_names_of_input_network = [\"pred0\"] \n    output_node_names_of_final_network = 'output_node'\n\n    # change filename to a .pb tensorflow file\n    output_graph_name = weight_file[:-2]+'pb'\n    weight_file_path = osp.join(input_fld, weight_file)\n\n    net_model = load_model(weight_file_path)\n\n    num_output = len(output_node_names_of_input_network)\n    pred = [None]*num_output\n    pred_node_names = [None]*num_output\n\n    for i in range(num_output):\n        pred_node_names[i] = output_node_names_of_final_network+str(i)\n        pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])\n\n    sess = K.get_session()\n\n    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)\n    graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)\n    print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))\n\n    return output_fld+output_graph_name\n
\n\n

Call:

\n\n
tf_model_path = convert_to_pb('model_file.h5','/model_dir/','/model_dir/')\n
\n\n

Create function to load the tf model as a graph:

\n\n
def load_graph(frozen_graph_filename):\n    # We load the protobuf file from the disk and parse it to retrieve the \n    # unserialized graph_def\n    with tf.gfile.GFile(frozen_graph_filename, \"rb\") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n\n    # Then, we can use again a convenient built-in function to import a graph_def into the \n    # current default Graph\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(\n            graph_def, \n            input_map=None, \n            return_elements=None, \n            name=\"prefix\", \n            op_dict=None, \n            producer_op_list=None\n        )\n\n    input_name = graph.get_operations()[0].name+':0'\n    output_name = graph.get_operations()[-1].name+':0'\n\n    return graph, input_name, output_name\n
\n\n

Create a function to make model predictions using the tf graph

\n\n
def predict(model_path, input_data):\n    # load tf graph\n    tf_model,tf_input,tf_output = load_graph(model_path)\n\n    # Create tensors for model input and output\n    x = tf_model.get_tensor_by_name(tf_input)\n    y = tf_model.get_tensor_by_name(tf_output) \n\n    # Number of model outputs\n    num_outputs = y.shape.as_list()[0]\n    predictions = np.zeros((input_data.shape[0],num_outputs))\n    for i in range(input_data.shape[0]):        \n        with tf.Session(graph=tf_model) as sess:\n            y_out = sess.run(y, feed_dict={x: input_data[i:i+1]})\n            predictions[i] = y_out\n\n    return predictions\n
\n\n

Make predictions:

\n\n
tf_predictions = predict(tf_model_path,test_data)\n
\n\n

Jacobian function:

\n\n
def compute_jacobian(model_path,input_data):\n\n    tf_model,tf_input,tf_output = load_graph(model_path)    \n\n    x = tf_model.get_tensor_by_name(tf_input)\n    y = tf_model.get_tensor_by_name(tf_output)\n    y_list = tf.unstack(y)\n    num_outputs = y.shape.as_list()[0]\n    jacobian = np.zeros((num_outputs,input_data.shape[0],input_data.shape[1]))\n    for i in range(input_data.shape[0]):\n        with tf.Session(graph=tf_model) as sess:\n            y_out = sess.run([tf.gradients(y_, x)[0] for y_ in y_list], feed_dict={x: input_data[i:i+1]})\n            jac_temp = np.asarray(y_out)\n        jacobian[:,i:i+1,:]=jac_temp[:,:,:,0]\n    return jacobian\n
\n\n

Compute Jacobian Matrix:

\n\n
jacobians = compute_jacobian(tf_model_path,test_data)\n
\n", "system": "" }, { "instruction": "How to display custom images in TensorBoard using Keras?", "input": "", "output": "

So, the following solution works well for me:

\n\n
import tensorflow as tf\nimport keras\nimport skimage.util\nfrom skimage import data\n\ndef make_image(tensor):\n    \"\"\"\n    Convert a numpy representation of an image to an Image protobuf.\n    Copied from https://github.com/lanpa/tensorboard-pytorch/\n    \"\"\"\n    from PIL import Image\n    height, width, channel = tensor.shape\n    image = Image.fromarray(tensor)\n    import io\n    output = io.BytesIO()\n    image.save(output, format='PNG')\n    image_string = output.getvalue()\n    output.close()\n    return tf.Summary.Image(height=height,\n                         width=width,\n                         colorspace=channel,\n                         encoded_image_string=image_string)\n\nclass TensorBoardImage(keras.callbacks.Callback):\n    def __init__(self, tag):\n        super().__init__()\n        self.tag = tag\n\n    def on_epoch_end(self, epoch, logs={}):\n        # Load image\n        img = data.astronaut()\n        # Do something to the image\n        img = (255 * skimage.util.random_noise(img)).astype('uint8')\n\n        image = make_image(img)\n        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])\n        writer = tf.summary.FileWriter('./logs')\n        writer.add_summary(summary, epoch)\n        writer.close()\n\n        return\n\ntbi_callback = TensorBoardImage('Image Example')\n
\n\n

Just pass the callback to fit or fit_generator.

\n\n

Note that you can also run some operations using the model inside the callback. For example, you may run the model on some images to check its performance.

\n\n

\"screen\"

\n", "system": "" }, { "instruction": "Theano with Keras on Raspberry Pi", "input": "", "output": "

It would have been useful if you had provided the Python version. If you are using Python 3.7, try reverting to Python 3.6, because Keras has not yet caught up with it and there are a lot of problems installing TensorFlow with Keras on Python 3.7. I am putting emphasis on the version here because I recently faced the same problem installing with conda, and I realised the issue was the Python version.

\n\n

I also had problems getting TensorFlow to work on the Pi, but direct installation using pip (from Ubuntu, not Miniconda) worked for me. The approach the Google TensorFlow team itself recommends is to build TensorFlow from source by following the instructions at this link:\nhttps://www.tensorflow.org/install/source_rpi

\n\n

So, if you can, downgrade Python to 3.6 or lower and install using pip, or build from source using Python 3.6 or 3.7.

\n", "system": "" }, { "instruction": "Data Augmentation Image Data Generator Keras Semantic Segmentation", "input": "", "output": "

Yes you can. Here's an example from Keras's docs. You zip together two generators seeded with the same seed, and then call fit_generator on the combined generator.\nhttps://keras.io/preprocessing/image/

\n\n
# we create two instances with the same arguments\ndata_gen_args = dict(featurewise_center=True,\n                     featurewise_std_normalization=True,\n                     rotation_range=90.,\n                     width_shift_range=0.1,\n                     height_shift_range=0.1,\n                     zoom_range=0.2)\nimage_datagen = ImageDataGenerator(**data_gen_args)\nmask_datagen = ImageDataGenerator(**data_gen_args)\n\n# Provide the same seed and keyword arguments to the fit and flow methods\nseed = 1\nimage_datagen.fit(images, augment=True, seed=seed)\nmask_datagen.fit(masks, augment=True, seed=seed)\n\nimage_generator = image_datagen.flow_from_directory(\n    'data/images',\n    class_mode=None,\n    seed=seed)\n\nmask_generator = mask_datagen.flow_from_directory(\n    'data/masks',\n    class_mode=None,\n    seed=seed)\n\n# combine generators into one which yields image and masks\ntrain_generator = zip(image_generator, mask_generator)\n\nmodel.fit_generator(\n    train_generator,\n    samples_per_epoch=2000,\n    nb_epoch=50)\n
\n", "system": "" }, { "instruction": "Keras, sparse matrix issue", "input": "", "output": "

Here is my solution.

\n\n
def batch_generator(X, y, batch_size):\n    number_of_batches = np.ceil(np.shape(y)[0] / batch_size)\n    counter = 0\n    shuffle_index = np.arange(np.shape(y)[0])\n    np.random.shuffle(shuffle_index)\n    X = X[shuffle_index, :]\n    y = y[shuffle_index]\n    while 1:\n        index_batch = shuffle_index[batch_size*counter:batch_size*(counter+1)]\n        X_batch = X[index_batch, :].todense()\n        y_batch = y[index_batch]\n        counter += 1\n        yield (np.array(X_batch), y_batch)\n        # once all batches have been yielded, reshuffle and start a new epoch\n        if (counter >= number_of_batches):\n            np.random.shuffle(shuffle_index)\n            counter = 0\n
\n\n

In my case, X is a sparse matrix and y is an array.

\n", "system": "" }, { "instruction": "module 'keras.engine' has no attribute 'Layer'", "input": "", "output": "

For lines where you are using Layers like ProposalLayer(KE.Layer)

\n

Instead of using KE.Layer do

\n
import keras.layers as KL\n
\n

and replace all instances of KE by KL

\n", "system": "" }, { "instruction": "Should the custom loss function in Keras return a single loss value for the batch or an arrary of losses for every sample in the training batch?", "input": "", "output": "

Actually, as far as I know, the shape of the return value of the loss function is not important: it could be a scalar tensor or a tensor with one or multiple values per sample. The important thing is how it is reduced to a scalar value so that it can be used in the optimization process or shown to the user. For that, you can check the reduction types in the Reduction documentation.

\n

Further, here is what the compile method documentation says about the loss argument, partially addressing this point:

\n
\n

loss: String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true,y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] ie. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

\n
\n

In addition, it's worth noting that most of the built-in loss functions in TF/Keras are usually reduced over the last dimension (i.e. axis=-1).

\n
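For instance, a hand-rolled MSE reduced over the last axis (mirroring what the built-in losses do, sketched here in NumPy with made-up values) looks like this:

```python
import numpy as np

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [2.0, 4.0]])

# Reduce over the last axis -> one loss value per sample
per_sample = np.mean((y_true - y_pred) ** 2, axis=-1)

# The framework then applies its reduction (here: the mean) to get a scalar
scalar_loss = per_sample.mean()
```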
\n

For those who doubt that a custom loss function which returns a scalar value would work: you can run the following snippet and you will see that the model would train and converge properly.

\n
import tensorflow as tf\nimport numpy as np\n\ndef custom_loss(y_true, y_pred):\n    return tf.reduce_sum(tf.square(y_true - y_pred))\n\ninp = tf.keras.layers.Input(shape=(3,))\nout = tf.keras.layers.Dense(3)(inp)\n\nmodel = tf.keras.Model(inp, out)\nmodel.compile(loss=custom_loss, optimizer=tf.keras.optimizers.Adam(lr=0.1))\n\nx = np.random.rand(1000, 3)\ny = x * 10 + 2.5\nmodel.fit(x, y, epochs=20)\n
\n", "system": "" }, { "instruction": "Why neural network predicts wrong on its own training data?", "input": "", "output": "

The OP postulates an interesting finding. Let me simplify the original question as follows.

\n\n

If the model is trained on a particular time series, why can't the model reconstruct previous time series data, which it was already trained on?

\n\n

Well, the answer is embedded in the training progress itself. Since EarlyStopping is used here to avoid overfitting, the best model is saved at epoch=5, where val_loss=0.0030 as mentioned by the OP. At this point, the training loss is equal to 0.0343, that is, the training RMSE is 0.185. Since the dataset is scaled using MinMaxScaler, we need to undo the scaling of the RMSE to understand what's going on.

\n\n

The minimum and maximum values of the time sequence are found to be 2290 and 3380. Therefore, having 0.185 as the RMSE of training means that, even for the training set, the predicted values may differ from the ground truth values by approximately 0.185*(3380-2290), that is ~200 units on average.

\n\n
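The back-of-the-envelope calculation, using the values quoted above, reads:

```python
# Values quoted in the answer: data range [2290, 3380], scaled training RMSE 0.185
data_min, data_max = 2290.0, 3380.0
scaled_rmse = 0.185

# Undo the MinMaxScaler: multiply the scaled RMSE by the original data range
rmse_original_units = scaled_rmse * (data_max - data_min)  # ~200 units
```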

This explains why there is a big difference when predicting the training data itself at a previous time step.

\n\n

What should I do to perfectly emulate training data?

\n\n

I asked myself this question. The simple answer is: make the training loss approach 0, that is, overfit the model.

\n\n

After some training, I realized that a model with only 1 LSTM layer that has 32 cells is not complex enough to reconstruct the training data. Therefore, I have added another LSTM layer as follows.

\n\n
model = Sequential()\nmodel.add(LSTM(32, return_sequences=True, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2])))\n# model.add(Dropout(0.2))\n# model.add(BatchNormalization())\nmodel.add(LSTM(units = 64, return_sequences=False,))\nmodel.add(Dense(y_train.shape[1]))\nmodel.compile(optimizer = 'adam', loss = 'mse')\n
\n\n

And the model is trained for 1000 epochs without considering EarlyStopping.

\n\n
model.fit(x_train, y_train, batch_size = 64, epochs = 1000, shuffle = True, validation_data = (x_test, y_test))\n
\n\n

At the end of the 1000th epoch we have a training loss of 0.00047, which is much lower than the training loss in your case. So we would expect the model to reconstruct the training data better. The following is the prediction plot for Apr 2-8.

\n\n

\"prediction\"

\n\n

A Final Note:

\n\n

Training on a particular dataset does not necessarily mean that the model should be able to perfectly reconstruct the training data. Especially when methods such as early stopping, regularization and dropout are introduced to avoid overfitting, the model tends to be more generalizable rather than memorizing the training data.

\n", "system": "" }, { "instruction": "confusion matrix error "Classification metrics can't handle a mix of multilabel-indicator and multiclass targets"", "input": "", "output": "

The confusion matrix needs both labels and predictions as single digits (class indices), not as one-hot encoded vectors; you have already done this with your predictions using model.predict_classes(), i.e.

\n\n\n\n
rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0)\nrounded_predictions[1]\n# 2\n
\n\n

your test_labels are still one-hot encoded:

\n\n
test_labels[1]\n# array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)\n
\n\n

So, you should convert them too to single-digit ones, as follows:

\n\n
import numpy as np\nrounded_labels=np.argmax(test_labels, axis=1)\nrounded_labels[1]\n# 2\n
\n\n

After which, the confusion matrix should come up OK:

\n\n
from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(rounded_labels, rounded_predictions)\ncm\n# result:\narray([[ 971,    0,    0,    2,    1,    0,    2,    1,    3,    0],\n       [   0, 1121,    2,    1,    0,    1,    3,    0,    7,    0],\n       [   5,    4,  990,    7,    5,    3,    2,    7,    9,    0],\n       [   0,    0,    0,  992,    0,    2,    0,    7,    7,    2],\n       [   2,    0,    2,    0,  956,    0,    3,    3,    2,   14],\n       [   3,    0,    0,   10,    1,  872,    3,    0,    1,    2],\n       [   5,    3,    1,    1,    9,   10,  926,    0,    3,    0],\n       [   0,    7,   10,    1,    0,    2,    0,  997,    1,   10],\n       [   5,    0,    3,    7,    5,    7,    3,    4,  937,    3],\n       [   5,    5,    0,    9,   10,    3,    0,    8,    3,  966]])\n
\n", "system": "" }, { "instruction": "What is the difference between "predict" and "predict_class" functions in keras?", "input": "", "output": "

predict will return the scores of the regression and predict_class will return the class of your prediction. Although they seem similar, there are some differences:

\n

Imagine you are trying to predict if the picture is a dog or a cat (you have a classifier):

\n\n

Now, imagine you are trying to predict house prices (you have a regressor):

\n\n

TL;DR: use predict_class for classifiers (outputs are labels) and use predict for regressions (outputs are non-discrete).

\n
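A minimal NumPy sketch of the relationship between the two (with made-up scores; predict_class is essentially an argmax over the scores that predict returns):

```python
import numpy as np

# What predict() might return for two samples and three classes
scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1]])

# What predict_classes() computes from those scores
classes = scores.argmax(axis=-1)
```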

Hope it helps!

\n

For your second question, the answer is here

\n", "system": "" }, { "instruction": "How training and test data is split - Keras on Tensorflow", "input": "", "output": "
    \n
  1. The keras documentation says: "The validation data is selected from the last samples in the x and y data provided, before shuffling." This means that the shuffle occurs after the split. There is also a boolean parameter called shuffle, which is set to True by default, so if you don't want your data to be shuffled you can just set it to False.

    \n
  2. \n
  3. Getting good results on your training data and then getting bad or not-so-good results on your evaluation data usually means that your model is overfitting. Overfitting happens when your model learns very specific scenarios and can't achieve good results on new data.

    \n
  4. \n
  5. Evaluation means testing your model on new data that it has "never seen before". Usually you divide your data into training and test sets, but sometimes you might also want to create a third group of data: if you keep adjusting your model just to obtain better and better results on your test data, that is in some way like cheating, because you are telling your model what the evaluation data looks like, and this can cause overfitting.

    \n
  6. \n
\n

Also, if you want to split your data without using Keras, I recommend using sklearn's train_test_split() function.

\n

it's easy to use and it looks like this:

\n
from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)\n
\n", "system": "" }, { "instruction": "Keras AttributeError: 'list' object has no attribute 'ndim'", "input": "", "output": "

model.fit expects x and y to be numpy arrays. It seems you passed a list: Keras tried to get the shape of the input by reading the ndim attribute of a numpy array and failed.

\n\n

You can simply transform it using np.array:

\n\n
import numpy as np\n...\nmodel.fit(np.array(train_X),np.array(train_Y), epochs=20, batch_size=10)\n
\n", "system": "" }, { "instruction": "How can I get a Keras models' history after loading it from a file in Python?", "input": "", "output": "

Unfortunately it seems that Keras hasn't implemented the possibility of loading the history directly from a loaded model. Instead you have to set it up in advance. This is how I solved it using CSVLogger (it's actually very convenient to store the entire training history in a separate file: this way you can always come back later and plot whatever history you want, instead of depending on a variable stored in RAM that you can easily lose):

\n

First we have to set up the logger before initiating the training.

\n
from keras.callbacks import CSVLogger\n\ncsv_logger = CSVLogger('training.log', separator=',', append=False)\nmodel.fit(X_train, Y_train, callbacks=[csv_logger])\n
\n

The entire log history will now be stored in the file 'training.log' (the same information you would get, by in your case, calling H.history). When the training is finished, the next step would simply be to load the data stored in this file. You can do that with pandas read_csv:

\n
import pandas as pd\nlog_data = pd.read_csv('training.log', sep=',', engine='python')\n
\n

From here on you can treat the data stored in log_data just as you would by loading it from K.history.

\n
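For example, to find the best epoch from the logged history (using a tiny in-memory stand-in for training.log; the column names assume the default accuracy/loss metrics):

```python
import io
import pandas as pd

# Stand-in for the CSVLogger output file 'training.log'
log_text = "epoch,accuracy,loss\n0,0.50,1.20\n1,0.62,0.90\n2,0.61,0.95\n"
log_data = pd.read_csv(io.StringIO(log_text))

# Epoch with the lowest recorded loss
best_epoch = int(log_data['loss'].idxmin())
```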

More information in Keras callbacks docs.

\n", "system": "" }, { "instruction": "How can I get the number of trainable parameters of a model in Keras?", "input": "", "output": "
import numpy as np\nfrom keras import backend as K\n\ntrainable_count = int(\n    np.sum([K.count_params(p) for p in set(model.trainable_weights)]))\nnon_trainable_count = int(\n    np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]))\n\nprint('Total params: {:,}'.format(trainable_count + non_trainable_count))\nprint('Trainable params: {:,}'.format(trainable_count))\nprint('Non-trainable params: {:,}'.format(non_trainable_count))\n
\n\n

The above snippet can be found at the end of the layer_utils.print_summary() definition, which summary() calls.

\n\n
\n\n

Edit: more recent versions of Keras have a helper function count_params() for this purpose:

\n\n
from keras.utils.layer_utils import count_params\n\ntrainable_count = count_params(model.trainable_weights)\nnon_trainable_count = count_params(model.non_trainable_weights)\n
\n", "system": "" }, { "instruction": "Keras Masking for RNN with Varying Time Steps", "input": "", "output": "

The way you implemented masking should be correct. If you have data with the shape (samples, timesteps, features), and you want to mask timesteps lacking data with a zero mask of the same size as the features argument, then you add Masking(mask_value=0., input_shape=(timesteps, features)). See here: keras.io/layers/core/#masking

\n\n
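What the mask looks like can be sketched in NumPy: Masking(mask_value=0.) marks a timestep as padding when its entire feature vector equals the mask value (toy batch, illustrative only):

```python
import numpy as np

# A padded batch of shape (samples, timesteps, features)
batch = np.array([
    [[1., 2.], [3., 4.], [0., 0.]],   # last timestep is padding
    [[5., 6.], [0., 0.], [0., 0.]],   # last two timesteps are padding
])

# True where at least one feature differs from the mask value:
# these are the timesteps the RNN actually processes
mask = np.any(batch != 0., axis=-1)
```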

Your model could potentially be too simple, and/or your number of epochs could be insufficient for the model to differentiate between all of your classes. Try this model:

\n\n
model = Sequential()\nmodel.add(Masking(mask_value=0., input_shape=(max_time, 24)))\nmodel.add(LSTM(256, input_dim=24))\nmodel.add(Dense(1024))\nmodel.add(Dense(2))\nmodel.add(Activation(activate))\nmodel.compile(loss=weibull_loglik_discrete, optimizer=RMSprop(lr=.01))\nmodel.fit(train_x, train_y, nb_epoch=100, batch_size=1000, verbose=2, validation_data=(test_x, test_y)) \n
\n\n

If that does not work, try doubling the epochs a few times (e.g. 200, 400) and see if that improves the results.

\n", "system": "" }, { "instruction": "Using pre-trained word2vec with LSTM for word generation", "input": "", "output": "\n

I've created a gist with a simple generator that builds on top of your initial idea: it's an LSTM network wired to the pre-trained word2vec embeddings, trained to predict the next word in a sentence. The data is the list of abstracts from arXiv website.

\n

I'll highlight the most important parts here.

\n

Gensim Word2Vec

\n

Your code is fine, except for the number of iterations used to train it. The default iter=5 (renamed to epochs in gensim 4.x) seems rather low. Besides, it's definitely not the bottleneck -- LSTM training takes much longer. 100 iterations looks better.

\n
word_model = gensim.models.Word2Vec(sentences, vector_size=100, min_count=1, \n                                    window=5, epochs=100)\npretrained_weights = word_model.wv.vectors\nvocab_size, emdedding_size = pretrained_weights.shape\nprint('Result embedding shape:', pretrained_weights.shape)\nprint('Checking similar words:')\nfor word in ['model', 'network', 'train', 'learn']:\n  most_similar = ', '.join('%s (%.2f)' % (similar, dist) \n                           for similar, dist in word_model.wv.most_similar(word)[:8])\n  print('  %s -> %s' % (word, most_similar))\n\ndef word2idx(word):\n  return word_model.wv.key_to_index[word]\ndef idx2word(idx):\n  return word_model.wv.index_to_key[idx]\n
\n

The result embedding matrix is saved into pretrained_weights array which has a shape (vocab_size, emdedding_size).

\n

Keras model

\n

Your code is almost correct, except for the loss function. Since the model predicts the next word, it's a classification task, hence the loss should be categorical_crossentropy or sparse_categorical_crossentropy. I've chosen the latter for efficiency reasons: this way it avoids one-hot encoding, which is pretty expensive for a big vocabulary.

\n
model = Sequential()\nmodel.add(Embedding(input_dim=vocab_size, output_dim=emdedding_size, \n                    weights=[pretrained_weights]))\nmodel.add(LSTM(units=emdedding_size))\nmodel.add(Dense(units=vocab_size))\nmodel.add(Activation('softmax'))\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\n
\n

Note passing the pre-trained weights to weights.

\n

Data preparation

\n

In order to work with sparse_categorical_crossentropy loss, both sentences and labels must be word indices. Short sentences must be padded with zeros to the common length.

\n
train_x = np.zeros([len(sentences), max_sentence_len], dtype=np.int32)\ntrain_y = np.zeros([len(sentences)], dtype=np.int32)\nfor i, sentence in enumerate(sentences):\n  for t, word in enumerate(sentence[:-1]):\n    train_x[i, t] = word2idx(word)\n  train_y[i] = word2idx(sentence[-1])\n
\n

Sample generation

\n

This is pretty straight-forward: the model outputs the vector of probabilities, of which the next word is sampled and appended to the input. Note that the generated text would be better and more diverse if the next word is sampled, rather than picked as argmax. The temperature based random sampling I've used is described here.

\n
def sample(preds, temperature=1.0):\n  if temperature <= 0:\n    return np.argmax(preds)\n  preds = np.asarray(preds).astype('float64')\n  preds = np.log(preds) / temperature\n  exp_preds = np.exp(preds)\n  preds = exp_preds / np.sum(exp_preds)\n  probas = np.random.multinomial(1, preds, 1)\n  return np.argmax(probas)\n\ndef generate_next(text, num_generated=10):\n  word_idxs = [word2idx(word) for word in text.lower().split()]\n  for i in range(num_generated):\n    prediction = model.predict(x=np.array(word_idxs))\n    idx = sample(prediction[-1], temperature=0.7)\n    word_idxs.append(idx)\n  return ' '.join(idx2word(idx) for idx in word_idxs)\n
\n

Examples of generated text

\n
deep convolutional... -> deep convolutional arithmetic initialization step unbiased effectiveness\nsimple and effective... -> simple and effective family of variables preventing compute automatically\na nonconvex... -> a nonconvex technique compared layer converges so independent onehidden markov\na... -> a function parameterization necessary both both intuitions with technique valpola utilizes\n
\n

Doesn't make too much sense, but is able to produce sentences that look at least grammatically sound (sometimes).

\n

The link to the complete runnable script.

\n", "system": "" }, { "instruction": "How to convert predicted sequence back to text in keras?", "input": "", "output": "

You can directly use the inverse function tokenizer.sequences_to_texts.

\n
    text = tokenizer.sequences_to_texts(<list_of_integer_equivalent_encodings>)\n
\n

I have tested the above and it works as expected.

\n

P.S.: Take extra care that the argument is the list of integer encodings, not the one-hot ones.

\n", "system": "" }, { "instruction": "Resizing images in Keras ImageDataGenerator flow methods", "input": "", "output": "

flow_from_directory(directory) generates augmented images from a directory with an arbitrary collection of images, so the target_size parameter is needed to make all images the same shape.

\n\n

While flow(X, y) augments images which are already stored in a sequence in X, which is simply a numpy array, they can easily be preprocessed/resized before being passed to flow, so there is no need for a target_size parameter. As for resizing, I prefer scipy.misc.imresize over PIL.Image.resize or cv2.resize, as it can operate on numpy image data (note that scipy.misc.imresize was removed in SciPy 1.3; on newer versions use skimage.transform.resize or cv2.resize instead).

\n\n
import numpy as np\nimport scipy\n\nnew_shape = (28,28,3)\nX_train_new = np.empty(shape=(X_train.shape[0],)+new_shape)\nfor idx in range(X_train.shape[0]):\n    X_train_new[idx] = scipy.misc.imresize(X_train[idx], new_shape)\n
\n", "system": "" }, { "instruction": "HOW TO FIX IT? AttributeError: module 'keras.preprocessing.image' has no attribute 'load_img'", "input": "", "output": "

Replace:

\n
from keras.preprocessing import image\n
\n

for:

\n
import keras.utils as image\n
\n", "system": "" }, { "instruction": "Understanding `width_shift_range` and `height_shift_range` arguments in Keras's ImageDataGenerator class", "input": "", "output": "

These two arguments are used by the ImageDataGenerator class to preprocess images before feeding them into the network. If you want to make your model more robust, a small amount of data may not be enough; that is where data augmentation comes in handy. These arguments are used to generate randomly shifted data.

\n

width_shift_range: It shifts the image to the left or right (horizontal shifts). If the value is a float and <= 1, it is taken as a fraction of the total width. Suppose the image width is 100px: with width_shift_range = 1.0 the range is -100% to +100%, i.e. -100px to +100px, and the image is shifted randomly within this range. A randomly selected positive value shifts the image to the right, and a negative value shifts it to the left. We can also specify the range in pixels: setting width_shift_range = 100 has the same effect. In short, an integer value >= 1 is interpreted as a pixel range, while a float value <= 1 is interpreted as a fraction of the total width. The images below are for width_shift_range = 1.0.

\n

\"For

\n

height_shift_range: It works the same as width_shift_range but shifts vertically (up or down). The images below are for height_shift_range=0.2, fill_mode="constant".

\n

\"enter

\n

fill_mode: It sets the rule for filling the pixels that are newly exposed after shifting.

\n
## fill_mode: One of {"constant", "nearest", "reflect" or "wrap"}. \n## Points outside the boundaries of the input are filled according to the given mode:\n## "constant": kkkkkkkk|abcd|kkkkkkkk (cval=k)\n## "nearest":  aaaaaaaa|abcd|dddddddd\n## "reflect":  abcddcba|abcd|dcbaabcd\n## "wrap":  abcdabcd|abcd|abcdabcd\n
\n
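The "constant" fill rule can be illustrated in plain NumPy (a simplified sketch: the real generator samples the shift amount randomly within the configured range):

```python
import numpy as np

img = np.arange(16).reshape(4, 4)

# Shift the image one pixel to the right; newly exposed pixels on the
# left edge are filled with the constant cval=0
shift = 1
shifted = np.zeros_like(img)
shifted[:, shift:] = img[:, :-shift]
```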

For more you can check this blog

\n", "system": "" }, { "instruction": "Keras - Validation Loss and Accuracy stuck at 0", "input": "", "output": "\n

Here is a demonstration:

\n
model.fit(X_train, y_train, validation_data=[X_train.to_numpy(), y_train.to_numpy()], \nepochs=10, batch_size=64)\n\nEpoch 1/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.7898 - accuracy: 0.6087 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 2/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6710 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 3/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6748 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 4/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6716 - accuracy: 0.6370 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 5/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6085 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 6/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6744 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 7/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6102 - accuracy: 0.6522 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 8/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.7032 - accuracy: 0.6109 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 9/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6283 - accuracy: 0.6717 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 10/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6120 - accuracy: 0.6652 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\n
\n

So there is definitely some issue with TensorFlow's implementation of fit.

\n

I dug into the source, and this seems to be the part responsible for handling validation_data:

\n
...\n...\n        # Run validation.\n        if validation_data and self._should_eval(epoch, validation_freq):\n          val_x, val_y, val_sample_weight = (\n              data_adapter.unpack_x_y_sample_weight(validation_data))\n          val_logs = self.evaluate(\n              x=val_x,\n              y=val_y,\n              sample_weight=val_sample_weight,\n              batch_size=validation_batch_size or batch_size,\n              steps=validation_steps,\n              callbacks=callbacks,\n              max_queue_size=max_queue_size,\n              workers=workers,\n              use_multiprocessing=use_multiprocessing,\n              return_dict=True)\n          val_logs = {'val_' + name: val for name, val in val_logs.items()}\n          epoch_logs.update(val_logs)\n
\n

It internally calls model.evaluate. Since we have already established that evaluate works fine, I realized the only possible culprit was unpack_x_y_sample_weight.

\n

So, I looked into the implementation:

\n
def unpack_x_y_sample_weight(data):\n  """Unpacks user-provided data tuple."""\n  if not isinstance(data, tuple):\n    return (data, None, None)\n  elif len(data) == 1:\n    return (data[0], None, None)\n  elif len(data) == 2:\n    return (data[0], data[1], None)\n  elif len(data) == 3:\n    return (data[0], data[1], data[2])\n\n  raise ValueError("Data not understood.")\n\n
\n

It's crazy, but if you just pass a tuple instead of a list, everything works fine thanks to the isinstance check inside unpack_x_y_sample_weight. With a list, your labels go missing after this step, and the data somehow gets patched up inside evaluate, so you end up validating with no meaningful labels. This looks like a bug, although the documentation does clearly state to pass a tuple.

\n
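<p>To see the failure concretely without TensorFlow, the unpacking logic quoted above can be exercised as a standalone copy:</p>

```python
# Standalone copy of the unpack_x_y_sample_weight logic shown above,
# demonstrating why a list silently drops the labels.
def unpack_x_y_sample_weight(data):
    """Unpacks a user-provided data tuple into (x, y, sample_weight)."""
    if not isinstance(data, tuple):
        return (data, None, None)   # a list falls through to here!
    elif len(data) == 1:
        return (data[0], None, None)
    elif len(data) == 2:
        return (data[0], data[1], None)
    elif len(data) == 3:
        return (data[0], data[1], data[2])
    raise ValueError("Data not understood.")

X_val, y_val = [[0.1], [0.2]], [0, 1]

# Tuple: x and y are separated correctly.
x, y, sw = unpack_x_y_sample_weight((X_val, y_val))
assert x is X_val and y is y_val

# List: the whole list becomes x, and the labels are lost (y is None).
x, y, sw = unpack_x_y_sample_weight([X_val, y_val])
assert x == [X_val, y_val] and y is None
```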

The following code gives correct validation accuracy and loss:

\n
model.fit(X_train, y_train, validation_data=(X_train.to_numpy(), y_train.to_numpy()), \nepochs=10, batch_size=64)\n\nEpoch 1/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5832 - accuracy: 0.6696 - val_loss: 0.6892 - val_accuracy: 0.6674\nEpoch 2/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6385 - accuracy: 0.6804 - val_loss: 0.8984 - val_accuracy: 0.5565\nEpoch 3/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6822 - accuracy: 0.6391 - val_loss: 0.6556 - val_accuracy: 0.6739\nEpoch 4/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6276 - accuracy: 0.6609 - val_loss: 1.0691 - val_accuracy: 0.5630\nEpoch 5/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.7048 - accuracy: 0.6239 - val_loss: 0.6474 - val_accuracy: 0.6326\nEpoch 6/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6545 - accuracy: 0.6500 - val_loss: 0.6659 - val_accuracy: 0.6043\nEpoch 7/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5796 - accuracy: 0.6913 - val_loss: 0.6891 - val_accuracy: 0.6435\nEpoch 8/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5915 - accuracy: 0.6891 - val_loss: 0.5307 - val_accuracy: 0.7152\nEpoch 9/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5571 - accuracy: 0.7000 - val_loss: 0.5465 - val_accuracy: 0.6957\nEpoch 10/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.7133 - accuracy: 0.6283 - val_loss: 0.7046 - val_accuracy: 0.6413\n
\n

So, as this seems to be a bug, I have just opened a relevant issue at Tensorflow Github repo:

\n

https://github.com/tensorflow/tensorflow/issues/39370

\n", "system": "" }, { "instruction": "Applying callbacks in a custom training loop in Tensorflow 2.0", "input": "", "output": "

I've had this problem myself: (1) I want to use a custom training loop; (2) I don't want to lose the bells and whistles Keras gives me in terms of callbacks; (3) I don't want to re-implement them all myself. Tensorflow has a design philosophy of allowing a developer to gradually opt-in to its more low-level APIs. As @HyeonPhilYoun notes in his comment below, the official documentation for tf.keras.callbacks.Callback gives an example of what we're looking for.

\n

The following has worked for me, but can be improved by reverse engineering tf.keras.Model.

\n

The trick is to use tf.keras.callbacks.CallbackList and then manually trigger its lifecycle events from within your custom training loop. This example uses tqdm to give attractive progress bars, but CallbackList has a progress_bar initialization argument that can let you use the defaults. training_model is a typical instance of tf.keras.Model.

\n
from tqdm.notebook import tqdm, trange\n\n# Populate with typical keras callbacks\n_callbacks = []\n\ncallbacks = tf.keras.callbacks.CallbackList(\n    _callbacks, add_history=True, model=training_model)\n\nlogs = {}\ncallbacks.on_train_begin(logs=logs)\n\n# Presentation\nepochs = trange(\n    max_epochs,\n    desc="Epoch",\n    unit="Epoch",\n    postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}")\nepochs.set_postfix(loss=0, accuracy=0)\n\n# Get a stable test set so epoch results are comparable\ntest_batches = batches(test_x, test_Y)\n\nfor epoch in epochs:\n    callbacks.on_epoch_begin(epoch, logs=logs)\n\n    # I like to formulate new batches each epoch\n    # if there are data augmentation methods in play\n    training_batches = batches(x, Y)\n\n    # Presentation\n    enumerated_batches = tqdm(\n        enumerate(training_batches),\n        desc="Batch",\n        unit="batch",\n        postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}",\n        position=1,\n        leave=False)\n\n    for (batch, (x, y)) in enumerated_batches:\n        training_model.reset_states()\n        \n        callbacks.on_batch_begin(batch, logs=logs)\n        callbacks.on_train_batch_begin(batch, logs=logs)\n        \n        logs = training_model.train_on_batch(x=x, y=y, return_dict=True)\n\n        callbacks.on_train_batch_end(batch, logs=logs)\n        callbacks.on_batch_end(batch, logs=logs)\n\n        # Presentation\n        enumerated_batches.set_postfix(\n            loss=float(logs["loss"]),\n            accuracy=float(logs["accuracy"]))\n\n    for (batch, (x, y)) in enumerate(test_batches):\n        training_model.reset_states()\n\n        callbacks.on_batch_begin(batch, logs=logs)\n        callbacks.on_test_batch_begin(batch, logs=logs)\n\n        logs = training_model.test_on_batch(x=x, y=y, return_dict=True)\n\n        callbacks.on_test_batch_end(batch, logs=logs)\n        callbacks.on_batch_end(batch, logs=logs)\n\n    # Presentation\n    epochs.set_postfix(\n    
    loss=float(logs["loss"]),\n        accuracy=float(logs["accuracy"]))\n\n    callbacks.on_epoch_end(epoch, logs=logs)\n\n    # NOTE: This is a decent place to check on your early stopping\n    # callback.\n    # Example: use training_model.stop_training to check for early stopping\n\n\ncallbacks.on_train_end(logs=logs)\n\n# Fetch the history object we normally get from keras.fit\nhistory_object = None\nfor cb in callbacks:\n    if isinstance(cb, tf.keras.callbacks.History):\n        history_object = cb\nassert history_object is not None\n
\n", "system": "" }, { "instruction": "What is meant by sequential model in Keras", "input": "", "output": "

There are two ways to build Keras models: sequential and functional.

\n\n

The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.

\n\n

Alternatively, the functional API allows you to create models with much more flexibility, as you can easily define models where layers connect to more than just the previous and next layers. In fact, you can connect layers to (literally) any other layer. As a result, creating complex networks such as Siamese networks and residual networks becomes possible.

\n\n

For more details, visit: https://machinelearningmastery.com/keras-functional-api-deep-learning/

\n", "system": "" }, { "instruction": "Keras layer with int inputs cannot be built", "input": "", "output": "

The exception is thrown when building a model with model.build.

\n

The model.build function builds a model based on the given input shape.

\n

The error is raised because, when we try to build the model, it first calls the model with an x argument whose type depends on the input shape, in the following code:

\n
if (isinstance(input_shape, list) and\n    all(d is None or isinstance(d, int) for d in input_shape)):\n  input_shape = tuple(input_shape)\nif isinstance(input_shape, list):\n  x = [base_layer_utils.generate_placeholders_from_shape(shape)\n        for shape in input_shape]\nelif isinstance(input_shape, dict):\n  x = {\n      k: base_layer_utils.generate_placeholders_from_shape(shape)\n      for k, shape in input_shape.items()\n  }\nelse:\n  x = base_layer_utils.generate_placeholders_from_shape(input_shape)\n
\n

x is a TensorFlow placeholder here. So when the model is called with x as an input, it raises a TypeError, and as a result the except block runs and gives the error.

\n

I assume your input shape is 16x16. Instead of using self.build([(16, 16)]), call the model on a real tensor:

\n
inputs = tf.keras.Input(shape=(16,))\nself.call(inputs)\n\n
\n", "system": "" }, { "instruction": "Save and load model optimizer state", "input": "", "output": "

You can extract the important lines from the load_model and save_model functions.

\n\n

For saving optimizer states, in save_model:

\n\n\n\n
# Save optimizer weights.\nsymbolic_weights = getattr(model.optimizer, 'weights')\nif symbolic_weights:\n    optimizer_weights_group = f.create_group('optimizer_weights')\n    weight_values = K.batch_get_value(symbolic_weights)\n
\n\n

For loading optimizer states, in load_model:

\n\n
# Set optimizer weights.\nif 'optimizer_weights' in f:\n    # Build train function (to get weight updates).\n    if isinstance(model, Sequential):\n        model.model._make_train_function()\n    else:\n        model._make_train_function()\n\n    # ...\n\n    try:\n        model.optimizer.set_weights(optimizer_weight_values)\n
\n\n

Combining the lines above, here's an example:

\n\n
    \n
  1. First fit the model for 5 epochs.
  2. \n
\n\n\n\n
import pickle\n\nimport numpy as np\nfrom keras import backend as K\nfrom keras.layers import Input, Dense\nfrom keras.models import Model\n\nX, y = np.random.rand(100, 50), np.random.randint(2, size=100)\nx = Input((50,))\nout = Dense(1, activation='sigmoid')(x)\nmodel = Model(x, out)\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\nmodel.fit(X, y, epochs=5)\n\nEpoch 1/5\n100/100 [==============================] - 0s 4ms/step - loss: 0.7716\nEpoch 2/5\n100/100 [==============================] - 0s 64us/step - loss: 0.7678\nEpoch 3/5\n100/100 [==============================] - 0s 82us/step - loss: 0.7665\nEpoch 4/5\n100/100 [==============================] - 0s 56us/step - loss: 0.7647\nEpoch 5/5\n100/100 [==============================] - 0s 76us/step - loss: 0.7638\n
\n\n
    \n
  1. Now save the weights and optimizer states.
  2. \n
\n\n\n\n
model.save_weights('weights.h5')\nsymbolic_weights = getattr(model.optimizer, 'weights')\nweight_values = K.batch_get_value(symbolic_weights)\nwith open('optimizer.pkl', 'wb') as f:\n    pickle.dump(weight_values, f)\n
\n\n
    \n
  1. Rebuild the model in another python session, and load weights.
  2. \n
\n\n\n\n
x = Input((50,))\nout = Dense(1, activation='sigmoid')(x)\nmodel = Model(x, out)\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\n\nmodel.load_weights('weights.h5')\nmodel._make_train_function()\nwith open('optimizer.pkl', 'rb') as f:\n    weight_values = pickle.load(f)\nmodel.optimizer.set_weights(weight_values)\n
\n\n
    \n
  1. Continue model training.
  2. \n
\n\n\n\n
model.fit(X, y, epochs=5)\n\nEpoch 1/5\n100/100 [==============================] - 0s 674us/step - loss: 0.7629\nEpoch 2/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7617\nEpoch 3/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7611\nEpoch 4/5\n100/100 [==============================] - 0s 55us/step - loss: 0.7601\nEpoch 5/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7594\n
\n", "system": "" }, { "instruction": "Keras: find out the number of layers", "input": "", "output": "

model.layers will give you the list of all layers. The number is consequently len(model.layers)

\n", "system": "" }, { "instruction": "Using multiple validation sets with keras", "input": "", "output": "

I ended up writing my own Callback based on the History callback to solve the problem. I'm not sure if this is the best approach but the following Callback records losses and metrics for the training and validation set like the History callback as well as losses and metrics for additional validation sets passed to the constructor.

\n
class AdditionalValidationSets(Callback):\n    def __init__(self, validation_sets, verbose=0, batch_size=None):\n        """\n        :param validation_sets:\n        a list of 3-tuples (validation_data, validation_targets, validation_set_name)\n        or 4-tuples (validation_data, validation_targets, sample_weights, validation_set_name)\n        :param verbose:\n        verbosity mode, 1 or 0\n        :param batch_size:\n        batch size to be used when evaluating on the additional datasets\n        """\n        super(AdditionalValidationSets, self).__init__()\n        self.validation_sets = validation_sets\n        for validation_set in self.validation_sets:\n            if len(validation_set) not in [3, 4]:\n                raise ValueError()\n        self.epoch = []\n        self.history = {}\n        self.verbose = verbose\n        self.batch_size = batch_size\n\n    def on_train_begin(self, logs=None):\n        self.epoch = []\n        self.history = {}\n\n    def on_epoch_end(self, epoch, logs=None):\n        logs = logs or {}\n        self.epoch.append(epoch)\n\n        # record the same values as History() as well\n        for k, v in logs.items():\n            self.history.setdefault(k, []).append(v)\n\n        # evaluate on the additional validation sets\n        for validation_set in self.validation_sets:\n            if len(validation_set) == 3:\n                validation_data, validation_targets, validation_set_name = validation_set\n                sample_weights = None\n            elif len(validation_set) == 4:\n                validation_data, validation_targets, sample_weights, validation_set_name = validation_set\n            else:\n                raise ValueError()\n\n            results = self.model.evaluate(x=validation_data,\n                                          y=validation_targets,\n                                          verbose=self.verbose,\n                                          sample_weight=sample_weights,\n            
                              batch_size=self.batch_size)\n\n            for metric, result in zip(self.model.metrics_names,results):\n                valuename = validation_set_name + '_' + metric\n                self.history.setdefault(valuename, []).append(result)\n
\n

which I am then using like this:

\n
history = AdditionalValidationSets([(validation_data2, validation_targets2, 'val2')])\nmodel.fit(train_data, train_targets,\n          epochs=epochs,\n          batch_size=batch_size,\n          validation_data=(validation_data1, validation_targets1),\n          callbacks=[history],\n          shuffle=True)\n
\n", "system": "" }, { "instruction": "Limit number of cores used in Keras", "input": "", "output": "

As @Yu-Yang suggested, I used these lines before each fit:

\n
from keras import backend as K\nK.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=32,\n                                                   inter_op_parallelism_threads=32)))\n
\n

Check the CPU usage (htop):

\n", "system": "" }, { "instruction": "How to interpret Keras model.fit output?", "input": "", "output": "

ETA = Estimated Time of Arrival.

\n\n

80 is the size of your training set, 32/80 and 64/80 mean that your batch size is 32 and currently the first batch (or the second batch respectively) is being processed.

\n\n

loss and acc refer to the current loss and accuracy of the training set.\nAt the end of each epoch your trained NN is evaluated against your validation set. This is what val_loss and val_acc refer to.

\n\n

The history object returned by model.fit() is a simple class with some fields, e.g. a reference to the model, a params dict and, most importantly, a history dict. It stores the values of loss and acc (or any other used metric) at the end of each epoch. For 2 epochs it will look like this:

\n\n
{\n    'val_loss': [16.11809539794922, 14.12947562917035],\n    'val_acc': [0.0, 0.0],\n    'loss': [14.890108108520508, 12.088571548461914],\n    'acc': [0.0, 0.25]\n}\n
\n\n

This comes in very handy if you want to visualize your training progress.

\n\n

Note: if your validation loss/accuracy starts increasing while your training loss/accuracy is still decreasing, this is an indicator of overfitting.

\n\n

Note 2: at the very end you should test your NN against some test set that is different from your training set and validation set and thus has never been touched during the training process.

\n", "system": "" }, { "instruction": "Where can I find the API documentation of the class Input?", "input": "", "output": "

That documentation is really hard to go through when you're not used to Keras.

\n

But there are two approaches for building keras models:

\n\n

The Input layer is not used with the Sequential model, only with Model.

\n

Probably, there is no clear documentation because the Input layer does absolutely nothing except defining the shape of the input data to your model. (In fact it creates a "tensor" that you can use as input to other layers).

\n

Imagine you are creating a model taking batches with MNIST data, which has 28x28 pixel images. Your input shape is then (28,28) (see *).

\n

When creating your model, you use Input just to define that:

\n
#inp will be a tensor with shape (?, 28, 28)\ninp = Input((28,28))\n
\n

The following layers will then use this input:

\n
x = SomeKerasLayer(blablabla)(inp)     \nx = SomeOtherLayer(blablabla)(x)    \noutput = TheLastLayer(balblabla)(x)\n
\n

And when you create the model, you define the path that the data will follow, which in this case is from the input to the output:

\n
model = Model(inp,output)\n
\n
\n

With the Model api, it's also possible to create ramifications, multiple inputs and multiple outputs, branches, etc.

\n

In case of having multiple inputs, you'd create several Input layers.

\n

See here for more advanced examples with actual layers: https://keras.io/getting-started/functional-api-guide/

\n
\n

* - This is not a rule. Depending on how you format your input data, this shape can change. There are models that prefer not to care about the 2D information and use a flattened image of shape (784,). Models that will use convolutional layers will often shape the input data as (28,28,1), an image with one channel. (Usually, images have 3 channels, RGB).

\n
\n

Arguments to the Input

\n

The code for the Input method is defined here (December, 22 - 2017)

\n

Possible arguments:

\n\n", "system": "" }, { "instruction": "Test score vs test accuracy when evaluating model using Keras", "input": "", "output": "

For reference, the two relevant parts of the code:

\n
model.compile(loss='binary_crossentropy',\n                  optimizer='adam',\n                  metrics=['accuracy'])\n\nscore, acc = model.evaluate(x_test, y_test,\n                                batch_size=batch_size)\nprint('Test score:', score)\nprint('Test accuracy:', acc)\n
\n

Score is the evaluation of the loss function for a given input.

\n

Training a network is finding parameters that minimize a loss function (or cost function).

\n

The cost function here is the binary_crossentropy.

\n

For a target T and a network output O, the binary crossentropy can be defined as: 1

\n
f(T,O) = -(T * log(O) + (1-T) * log(1-O))\n
\n

So the score you see is the evaluation of that.

\n

If you feed it a batch of inputs it will most likely return the mean loss. 2

\n
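<p>Assuming the formula above, the score computation can be sketched in plain Python (hypothetical helper names):</p>

```python
import math

def binary_crossentropy(t, o):
    """f(T, O) = -(T*log(O) + (1-T)*log(1-O)) for a single prediction."""
    return -(t * math.log(o) + (1 - t) * math.log(1 - o))

def mean_bce(targets, outputs):
    """The batch score: the mean of the per-sample losses."""
    return sum(binary_crossentropy(t, o)
               for t, o in zip(targets, outputs)) / len(targets)

# a confident correct prediction scores much lower than a confident wrong one
assert binary_crossentropy(1, 0.9) < binary_crossentropy(1, 0.1)
# maximally uncertain outputs give the well-known log(2) ≈ 0.693 loss
assert abs(mean_bce([1, 0], [0.5, 0.5]) - math.log(2)) < 1e-12
```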

So yeah, if your model has lower loss (at test time), it should often have lower prediction error.

\n

1 See similar formula at BCELoss on PyTorch (Binary Cross Entropy = BCE)

\n

2 Note how the PyTorch BCE is reduced to a scalar using the default value of reduction ='mean' (i.e. average) by default

\n", "system": "" }, { "instruction": "facenet triplet loss with keras", "input": "", "output": "

What could have happened, other than the learning rate simply being too high, is that an unstable triplet selection strategy was effectively being used. If, for example, you only use 'hard triplets' (triplets where the a-n distance is smaller than the a-p distance), your network weights might collapse all embeddings to a single point (making the loss always equal to margin (your _alpha), because all embedding distances are zero).

\n\n

This can be fixed by using other kinds of triplets as well (like 'semi-hard triplets' where a-p is smaller than a-n, but the distance between a-p and a-n is still smaller than margin). So maybe if you always checked for this... It is explained in more detail in this blog post: https://omoindrot.github.io/triplet-loss

\n", "system": "" }, { "instruction": "How to calculate the number of parameters of an LSTM network?", "input": "", "output": "
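<p>Using these definitions, the triplet categories can be sketched with scalar distances (a hypothetical helper; margin plays the role of your _alpha):</p>

```python
def triplet_category(d_ap, d_an, margin):
    """Classify a triplet by its anchor-positive / anchor-negative distances."""
    if d_an < d_ap:
        return "hard"          # negative is closer to the anchor than the positive
    if d_an < d_ap + margin:
        return "semi-hard"     # positive is closer, but by less than the margin
    return "easy"              # loss is already zero; nothing to learn

assert triplet_category(d_ap=0.8, d_an=0.5, margin=0.2) == "hard"
assert triplet_category(d_ap=0.5, d_an=0.6, margin=0.2) == "semi-hard"
assert triplet_category(d_ap=0.2, d_an=0.9, margin=0.2) == "easy"
```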

No - the number of parameters of an LSTM layer in Keras equals:

\n\n
params = 4 * ((size_of_input + 1) * size_of_output + size_of_output^2)\n
\n\n

The additional 1 comes from the bias term: size_of_input is the size of the input (increased by 1 for the bias) and size_of_output is the output size of the LSTM layer.

\n\n

So finally:

\n\n
4 * (4097 * 256 + 256^2) = 4457472\n
\n", "system": "" }, { "instruction": "Can't import plot_model from keras.utils?", "input": "", "output": "
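<p>The formula is easy to verify numerically (a sketch; the helper name is hypothetical):</p>

```python
def lstm_param_count(size_of_input: int, size_of_output: int) -> int:
    """4 gates, each with an input kernel, a recurrent kernel, and a bias."""
    return 4 * ((size_of_input + 1) * size_of_output + size_of_output ** 2)

# the example above: 4096 input features, 256 units
assert lstm_param_count(4096, 256) == 4457472
```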

Try to import in the below format

\n
from keras.utils.vis_utils import plot_model\n
\n

This week I've had the same problem, with this import it works.

\n", "system": "" }, { "instruction": "'Dense' object has no attribute 'op'", "input": "", "output": "

You are missing (x) after your output layer. Try

\n
output = Dense(10 , activation = 'softmax')(x)\n
\n", "system": "" }, { "instruction": "Proper way to feed time-series data to stateful LSTM?", "input": "", "output": "

The answer is: depends on problem at hand. For your case of one-step prediction - yes, you can, but you don't have to. But whether you do or not will significantly impact learning.

\n\n
\n\n

Batch vs. sample mechanism (\"see AI\" = see \"additional info\" section)

\n\n

All models treat samples as independent examples; a batch of 32 samples is like feeding 1 sample at a time, 32 times (with differences - see AI). From model's perspective, data is split into the batch dimension, batch_shape[0], and the features dimensions, batch_shape[1:] - the two \"don't talk.\" The only relation between the two is via the gradient (see AI).

\n\n
\n\n

Overlap vs no-overlap batch

\n\n

Perhaps the best approach to understand it is information-based. I'll begin with timeseries binary classification, then tie it to prediction: suppose you have 10-minute EEG recordings, 240000 timesteps each. Task: seizure or non-seizure?

\n\n\n\n

Take 10 samples, shape (240000, 1). How to feed?

\n\n
    \n
  1. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[54000:108000] ...
  2. \n
  3. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[1:54001] ...
  4. \n
\n\n

Which of the two above do you take? If (2), your neural net will never confuse a seizure for a non-seizure for those 10 samples. But it'll also be clueless about any other sample. I.e., it will massively overfit, because the information it sees per iteration barely differs (1/54000 = 0.0019%) - so you're basically feeding it the same batch several times in a row. Now suppose (3):

\n\n
    \n
  1. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[27000:81000] ...
  2. \n
\n\n

A lot more reasonable; now our windows have a 50% overlap, rather than 99.998%.

\n\n
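<p>The slicing schemes above can be sketched as index windows, where the stride controls the overlap (a hypothetical helper, plain Python):</p>

```python
def windows(seq_len, win, stride):
    """(start, end) index pairs of sliding windows over a sequence."""
    return [(s, s + win) for s in range(0, seq_len - win + 1, stride)]

# 240000-timestep sample, 54000-step windows:
assert windows(240000, 54000, 54000)[:2] == [(0, 54000), (54000, 108000)]  # no overlap
assert windows(240000, 54000, 1)[:2] == [(0, 54000), (1, 54001)]           # ~99.998% overlap
assert windows(240000, 54000, 27000)[:2] == [(0, 54000), (27000, 81000)]   # 50% overlap
```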
\n\n

Prediction: overlap bad?

\n\n

If you are doing a one-step prediction, the information landscape is now changed:

\n\n\n\n

This dramatically changes your loss function, and what is 'good practice' for minimizing it:

\n\n\n\n
\n\n

What should I do?

\n\n

First, make sure you understand this entire post, as nothing here's really \"optional.\" Then, here's the key about overlap vs no-overlap, per batch:

\n\n
    \n
  1. One sample shifted: model learns to better predict one step ahead for each starting step - meaning: (1) LSTM's robust against initial cell state; (2) LSTM predicts well for any step ahead given X steps behind
  2. \n
  3. Many samples, shifted in later batch: model less likely to 'memorize' train set and overfit
  4. \n
\n\n

Your goal: balance the two; 1's main edge over 2 is:

\n\n\n\n

Should I ever use (2) in prediction?

\n\n\n\n
\n\n

LSTM stateful: may actually be entirely useless for your problem.

\n\n

Stateful is used when LSTM can't process the entire sequence at once, so it's \"split up\" - or when different gradients are desired from backpropagation. With former, the idea is - LSTM considers former sequence in its assessment of latter:

\n\n\n\n

In other words: do not overlap in stateful in separate batches. Same batch is OK, as again, independence - no \"state\" between the samples.

\n\n

When to use stateful: when LSTM benefits from considering previous batch in its assessment of the next. This can include one-step predictions, but only if you can't feed the entire seq at once:

\n\n\n\n
\n\n

When and how does LSTM \"pass states\" in stateful?

\n\n\n\n

Per above, you cannot do this:

\n\n
# sampleNM = sample N at timestep(s) M\nbatch1 = [sample10, sample20, sample30, sample40]\nbatch2 = [sample21, sample41, sample11, sample31]\n
\n\n

This implies 21 causally follows 10 - and will wreck training. Instead do:

\n\n
batch1 = [sample10, sample20, sample30, sample40]\nbatch2 = [sample11, sample21, sample31, sample41]\n
\n\n
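<p>A sketch of building such batches, where sample i keeps slot i in every batch (a hypothetical helper; plain lists standing in for arrays):</p>

```python
def stateful_batches(samples, num_windows):
    """Split each sample into consecutive windows; batch k holds window k
    of every sample, so sample i keeps index i across batches."""
    win = len(samples[0]) // num_windows
    return [[s[k * win:(k + 1) * win] for s in samples]
            for k in range(num_windows)]

sample1 = list(range(0, 8))     # timesteps 0..7 of sample 1
sample2 = list(range(10, 18))   # timesteps 0..7 of sample 2

b1, b2 = stateful_batches([sample1, sample2], num_windows=2)
# batch 1 holds the first half of both samples, batch 2 the second half,
# with each sample staying at the same index in both batches
assert b1 == [[0, 1, 2, 3], [10, 11, 12, 13]]
assert b2 == [[4, 5, 6, 7], [14, 15, 16, 17]]
```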
\n\n

Batch vs. sample: additional info

\n\n

A \"batch\" is a set of samples - 1 or greater (assume always the latter for this answer). Three approaches to iterate over data: Batch Gradient Descent (entire dataset at once), Stochastic GD (one sample at a time), and Minibatch GD (in-between). (In practice, however, we call the last SGD also and only distinguish vs BGD - assume it so for this answer.) Differences:

\n\n\n\n
\n\n

BONUS DIAGRAMS:

\n\n

\n\n
\n\n

\n", "system": "" }, { "instruction": "Buffered data was truncated after reaching the output size limit", "input": "", "output": "

Even if RAM, GPU, and disk on Colab are free, this error still occurs because Colab has a limited buffer for displaying a cell's output. Assuming the memory limit is around 2 MB to 5 MB, running many epochs (148+) during training tends to fill it, and the output is truncated because no buffer space is left to display the remaining epochs. However, the machine keeps running in the background and the output is still processed; it is simply not displayed once the buffer limit is reached. You will still get your desired output.

\n\n

One solution is not to use verbose=1 (use 0 instead).

\n", "system": "" }, { "instruction": "How to get word vectors from Keras Embedding Layer", "input": "", "output": "

You can get the word embeddings by using the get_weights() method of the embedding layer (i.e. essentially the weights of an embedding layer are the embedding vectors):

\n\n
# if you have access to the embedding layer explicitly\nembeddings = embedding_layer.get_weights()[0]\n\n# or access the embedding layer through the constructed model \n# first `0` refers to the position of embedding layer in the `model`\nembeddings = model.layers[0].get_weights()[0]\n\n# `embeddings` has a shape of (num_vocab, embedding_dim) \n\n# `word_to_index` is a mapping (i.e. dict) from words to their index, e.g. `love`: 69\nwords_embeddings = {w:embeddings[idx] for w, idx in word_to_index.items()}\n\n# now you can use it like this for example\nprint(words_embeddings['love'])  # possible output: [0.21, 0.56, ..., 0.65, 0.10]\n
\n", "system": "" }, { "instruction": "Keras: release memory after finish training process", "input": "", "output": "

Releasing RAM memory

\n\n

For releasing the RAM memory, just do del Variables as suggested by @nuric in the comment.

\n\n

Releasing GPU memory

\n\n

This is a little bit trickier than releasing the RAM memory. Some people will suggest the following code (assuming you are using Keras):

\n\n
from keras import backend as K\nK.clear_session()\n
\n\n

However, the above code doesn't work for all people. (Even when you try del Models, it is still not going to work)

\n\n

If the above method doesn't work for you, then try the following (You need to install the numba library first):

\n\n
from numba import cuda\ncuda.select_device(0)\ncuda.close()\n
\n\n

The reason behind it is: Tensorflow is just allocating memory to the GPU, while CUDA is responsible for managing the GPU memory.

\n\n

If CUDA somehow refuses to release the GPU memory after you have cleared all the graph with K.clear_session(), then you can use the cuda library to have a direct control on CUDA to clear up GPU memory.

\n", "system": "" }, { "instruction": "Convolution2D + LSTM versus ConvLSTM2D", "input": "", "output": "\n\n

They are not exactly the same, here is why:

\n\n

1. Use Convolution2D layers and LSTM layers

\n\n

As it is known, Convolution2D serves well for capturing image or spatial features, whilst LSTM are used to detect correlations over time. However, by stacking these kind of layers, the correlation between space and time features may not be captured properly.

\n\n

2. Use ConvLSTM2D

\n\n

To solve this, Xingjian Shi et al. proposed a network structure able to capture spatiotemporal correlations, namely ConvLSTM. In Keras, this is reflected in the ConvLSTM2D class, which computes convolutional operations in both the input and the recurrent transformations.

\n\n

Keras code

\n\n

To illustrate this, you can see the LSTM code here; if you go to the call method of LSTMCell, you'd only see:

\n\n
    x_i = K.dot(inputs_i, self.kernel_i)\n    x_f = K.dot(inputs_f, self.kernel_f)\n    x_c = K.dot(inputs_c, self.kernel_c)\n    x_o = K.dot(inputs_o, self.kernel_o)\n
\n\n

Instead, the ConvLSTM2DCell class calls:

\n\n
    x_i = self.input_conv(inputs_i, self.kernel_i, self.bias_i, padding=self.padding)\n    x_f = self.input_conv(inputs_f, self.kernel_f, self.bias_f, padding=self.padding)\n    x_c = self.input_conv(inputs_c, self.kernel_c, self.bias_c, padding=self.padding)\n    x_o = self.input_conv(inputs_o, self.kernel_o, self.bias_o, padding=self.padding)\n    h_i = self.recurrent_conv(h_tm1_i, self.recurrent_kernel_i)\n    h_f = self.recurrent_conv(h_tm1_f, self.recurrent_kernel_f)\n    h_c = self.recurrent_conv(h_tm1_c, self.recurrent_kernel_c)\n    h_o = self.recurrent_conv(h_tm1_o, self.recurrent_kernel_o)\n
\n\n

Where:

\n\n
def input_conv(self, x, w, b=None, padding='valid'):\n    conv_out = K.conv2d(x, w, strides=self.strides,\n                        padding=padding,\n                        data_format=self.data_format,\n                        dilation_rate=self.dilation_rate)\n    if b is not None:\n        conv_out = K.bias_add(conv_out, b,\n                              data_format=self.data_format)\n    return conv_out\n\ndef recurrent_conv(self, x, w):\n    conv_out = K.conv2d(x, w, strides=(1, 1),\n                        padding='same',\n                        data_format=self.data_format)\n    return conv_out\n
\n\n

In LSTM, the equivalent for h_x (recurrent transformations) would be:

\n\n
K.dot(h_tm1_x, self.recurrent_kernel_x)\n
\n\n

Instead of ConvLSTM2D's:

\n\n
self.recurrent_conv(h_tm1_x, self.recurrent_kernel_x)\n
\n\n

These kinds of transformations cannot be computed with stacked Conv2D and LSTM layers.
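To make the contrast concrete, here is a minimal numpy sketch (illustrative only: single channel, no gates or biases) of the two kinds of recurrent transformation: a plain matrix multiply on a flat hidden vector, versus a 'same'-padded 2D convolution on a hidden feature map that preserves the spatial layout:

```python
import numpy as np

def dense_transform(h, w):
    # LSTM-style recurrent transform: K.dot on a flat hidden vector
    return h @ w

def conv2d_same(x, k):
    # ConvLSTM-style recurrent transform: a naive 'same'-padded 2D
    # convolution on a hidden feature map (single channel for brevity)
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

h_vec = np.ones(8)                 # LSTM hidden state: spatial layout lost
print(dense_transform(h_vec, np.eye(8)).shape)  # (8,)

h_map = np.ones((5, 5))            # ConvLSTM hidden state: a 2D feature map
kernel = np.ones((3, 3)) / 9.0     # local averaging kernel
print(conv2d_same(h_map, kernel).shape)         # (5, 5)
```

The dense transform mixes every hidden unit with every other; the convolution only mixes spatially neighboring positions, which is what lets ConvLSTM2D keep spatial structure through time.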

\n", "system": "" }, { "instruction": "Backward propagation in Keras?", "input": "", "output": "

You simply don't. (Late edit: except when you are creating custom training loops, only for advanced uses)

\n

Keras does backpropagation automatically. There's absolutely nothing you need to do for that except for training the model with one of the fit methods.

\n

You just need to take care of a few things:

\n\n

This is all you need to have the automatic backpropagation working properly.

\n

If your layers don't have trainable weights, you don't need custom layers, create Lambda layers instead (only calculations, no trainable weights).
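For intuition about what fit does behind the scenes, here is a minimal numpy sketch of the forward pass / gradient / update loop for a linear model with MSE loss (all names are illustrative; Keras performs the equivalent of this automatically for every trainable weight):

```python
import numpy as np

# toy data: 64 samples, 3 features, noiseless linear target
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)   # trainable weights, start at zero
lr = 0.1
for _ in range(500):
    y_pred = X @ w                              # forward pass
    grad = (2 / len(X)) * X.T @ (y_pred - y)    # gradient of MSE w.r.t. w
    w -= lr * grad                              # optimizer update

print(np.round(w, 3))  # ~ true_w
```

The point is that none of this appears in user code: connecting the layers, compiling with a loss, and calling fit is enough for Keras to run this loop for you.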

\n", "system": "" }, { "instruction": "float16 vs float32 for convolutional neural networks", "input": "", "output": "

Surprisingly, it's totally OK to use 16 bits, and not just for fun, but in production as well. For example, in this video Jeff Dean talks about 16-bit calculations at Google, around 52:00. A quote from the slides:

\n\n
\n

Neural net training very tolerant of reduced precision

\n
\n\n

Since GPU memory is the main bottleneck in ML computation, there has been a lot of research on precision reduction. E.g.

\n\n\n\n

Of course, I can imagine some networks may require high precision for training, but I would recommend at least trying 16 bits for training a big network and switching to 32 bits if it proves to work worse.
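A quick numpy demonstration of what you give up: float16 has a machine epsilon of roughly 1e-3, so small weight updates that float32 keeps can vanish entirely, and its maximum representable value is only 65504:

```python
import numpy as np

# float16 spacing near 1.0 is ~1e-3, so tiny updates are lost
x32 = np.float32(1.0) + np.float32(1e-4)
x16 = np.float16(1.0) + np.float16(1e-4)
print(x32)  # > 1.0: the update survives in float32
print(x16)  # == 1.0: the update vanished in float16

# float16 also has a much smaller dynamic range
print(np.finfo(np.float16).max)  # 65504.0
```

This is exactly why mixed-precision schemes keep a higher-precision copy of the weights while doing the bulk of the arithmetic in 16 bits.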

\n", "system": "" }, { "instruction": "Keras: difference of InputLayer and Input", "input": "", "output": "\n\n

You can only call layers passing tensors to them.

\n\n

The idea is:

\n\n
outputTensor = SomeLayer(inputTensor)\n
\n\n

So, only Input can be passed because it's a tensor.

\n\n

Honestly, I have no idea about the reason for the existence of InputLayer. Maybe it's supposed to be used internally. I never used it, and it seems I'll never need it.

\n", "system": "" }, { "instruction": "Keras retrieve value of node before activation function", "input": "", "output": "

Since you're using get_value(), I'll assume that you're using Theano backend. To get the value of the node before the sigmoid activation, you can traverse the computation graph.

\n\n
\n

The graph can be traversed starting from outputs (the result of some computation) down to its inputs using the owner field.

\n
\n\n

In your case, what you want is the input x of the sigmoid activation op. The output of the sigmoid op is model.output. Putting these together, the variable x is model.output.owner.inputs[0].

\n\n

If you print out this value, you'll see Elemwise{add,no_inplace}.0, which is an element-wise addition op. It can be verified from the source code of Dense.call():

\n\n
def call(self, inputs):\n    output = K.dot(inputs, self.kernel)\n    if self.use_bias:\n        output = K.bias_add(output, self.bias)\n    if self.activation is not None:\n        output = self.activation(output)\n    return output\n
\n\n

The input to the activation function is the output of K.bias_add().
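The computation above is easy to reproduce with plain numpy (toy weights, purely illustrative): the value retrieved from the graph corresponds to z below, the pre-activation output of the bias add:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[1.0, 2.0]])       # one input sample
W = np.array([[0.5], [-0.25]])   # Dense kernel (toy values)
b = np.array([0.1])              # Dense bias

z = x @ W + b      # the K.bias_add output: the node before activation
out = sigmoid(z)   # the model output after the sigmoid

print(z)    # [[0.1]]
print(out)  # sigmoid(0.1), about 0.525
```

Traversing the graph with owner.inputs (Theano) or op.inputs (TensorFlow) is simply how you reach z instead of out.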

\n\n

With a small modification of your code, you can get the value of the node before activation:

\n\n
x = model.output.owner.inputs[0]\nfunc = K.function([model.input] + [K.learning_phase()], [x])\nprint(func([test_input, 0.]))\n
\n\n

For anyone using TensorFlow backend: use x = model.output.op.inputs[0] instead.

\n", "system": "" }, { "instruction": "Keras: how to get tensor dimensions inside custom loss?", "input": "", "output": "

Two things here:

\n\n
    \n
  1. If you want to get a tensor shape you should use int_shape function from keras.backend.
  2. \n
  3. The first dimension is the batch dimension, so int_shape(y_true)[0] will return the batch size. You should use int_shape(y_true)[1].
  4. \n
\n", "system": "" }, { "instruction": "What is the relation between validation_data and validation_split in Keras' fit function?", "input": "", "output": "

No, everything is correct. One potential reason behind this separation is that sometimes the training and validation data come pre-separated (as in many academic datasets), while at other times you have all the data and can split it any way you want.

\n", "system": "" }, { "instruction": "How to have parallel convolutional layers in keras?", "input": "", "output": "

Here is an example of designing a network of parallel convolution and sub-sampling layers in Keras version 2. I hope this resolves your problem.

\n\n
rows, cols = 100, 15\ndef create_convnet(img_path='network_image.png'):\n    input_shape = Input(shape=(rows, cols, 1))\n\n    tower_1 = Conv2D(20, (100, 5), padding='same', activation='relu')(input_shape)\n    tower_1 = MaxPooling2D((1, 11), strides=(1, 1), padding='same')(tower_1)\n\n    tower_2 = Conv2D(20, (100, 7), padding='same', activation='relu')(input_shape)\n    tower_2 = MaxPooling2D((1, 9), strides=(1, 1), padding='same')(tower_2)\n\n    tower_3 = Conv2D(20, (100, 10), padding='same', activation='relu')(input_shape)\n    tower_3 = MaxPooling2D((1, 6), strides=(1, 1), padding='same')(tower_3)\n\n    merged = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)\n    merged = Flatten()(merged)\n\n    out = Dense(200, activation='relu')(merged)\n    out = Dense(num_classes, activation='softmax')(out)\n\n    model = Model(input_shape, out)\n    plot_model(model, to_file=img_path)\n    return model\n
\n\n

The image of this network, as saved by plot_model, shows the three convolution/pooling towers in parallel, merged before the dense layers.

\n", "system": "" }, { "instruction": "Is Keras thread safe?", "input": "", "output": "

Yes, Keras is thread safe, if you pay a little attention to it.

\n\n

In fact, in reinforcement learning there is an algorithm called Asynchronous Advantage Actor Critics (A3C) where each agent relies on the same neural network to tell them what they should do in a given state. In other words, each thread calls model.predict concurrently as in your problem. An example implementation with Keras of it is here.

\n\n

You should, however, pay extra attention to this line if you look into the code:\nmodel._make_predict_function() # have to initialize before threading

\n\n

This is never mentioned in the Keras docs, but it's necessary to make it work concurrently. In short, _make_predict_function compiles the predict function. In a multi-threaded setting, you have to call this function manually to compile predict in advance; otherwise the predict function will not be compiled until its first call, which is problematic when many threads call it at once. You can see a detailed explanation here.
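The situation is analogous to any lazily-initialized resource shared across threads. Here is a minimal stdlib sketch (a toy stand-in, not Keras internals): "compile" the function once before the threads start, then call it concurrently:

```python
import threading

class Predictor:
    """Toy stand-in for a model whose predict function is compiled lazily."""
    def __init__(self):
        self._fn = None

    def _make_predict_function(self):
        # one-time "compilation"; racy if first triggered from many threads
        if self._fn is None:
            self._fn = lambda x: 2 * x

    def predict(self, x):
        if self._fn is None:
            self._make_predict_function()
        return self._fn(x)

model = Predictor()
model._make_predict_function()   # initialize BEFORE starting any threads

results, lock = [], threading.Lock()

def worker(x):
    y = model.predict(x)         # concurrent predict calls are now safe
    with lock:
        results.append(y)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Calling the initializer up front removes the race on first use, which is the same role _make_predict_function plays for Keras.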

\n\n

I have not run into any other issues with multithreading in Keras so far.

\n", "system": "" }, { "instruction": "How to pass a parameter to Scikit-Learn Keras model function", "input": "", "output": "

You can add an input_dim keyword argument to the KerasClassifier constructor:

\n\n
model = KerasClassifier(build_fn=create_model, input_dim=5, nb_epoch=150, batch_size=10, verbose=0)\n
\n", "system": "" }, { "instruction": "input dimensions to a one dimensional convolutional network in keras", "input": "", "output": "

The reason it looks like this is that the Keras designers intended the 1-dimensional convolutional framework to be interpreted as a framework for dealing with sequences. To fully understand the difference, try to imagine that you have a sequence of multiple feature vectors. Then your output will be at least two-dimensional, where the first dimension is connected with time and the other dimensions are connected with features. The 1-dimensional convolutional framework was designed to, in some way, emphasize this time dimension and find recurring patterns in the data, rather than performing a classical multidimensional convolutional transformation.

\n\n

In your case you must simply reshape your data to have shape (dataset_size, 101, 1), because you have only one feature. This can easily be done using the numpy.reshape function. To understand what a new timestep means, you must understand that you are doing the convolution over time, so you change the temporal structure of your data, which leads to a new time-connected structure. In order to get your data to a format which is suitable for dense / static layers, use the keras.layers.Flatten layer, the same as in the classic convolutional case.

\n\n

UPDATE: As I mentioned before, the first dimension of the input is connected with time. So the difference between (1, 101) and (101, 1) is that in the first case you have one timestep with 101 features, and in the second, 101 timesteps with 1 feature. The problem you mentioned after your first change has its origin in doing pooling with size 2 on such input: having only one timestep, you cannot pool any values over a time window of size 2, simply because there are not enough timesteps to do that.
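A small numpy sketch of the two reshapes (dataset_size is an illustrative value):

```python
import numpy as np

dataset_size = 8
data = np.random.rand(dataset_size, 101)  # 101 measurements of ONE feature

# correct: 101 timesteps x 1 feature -- what Conv1D expects here
seq = data.reshape(dataset_size, 101, 1)

# wrong: 1 timestep x 101 features -- pooling with size 2 over time fails,
# because there is only a single timestep to pool over
flat = data.reshape(dataset_size, 1, 101)

print(seq.shape)   # (8, 101, 1)
print(flat.shape)  # (8, 1, 101)
```

Both arrays hold the same numbers; only the interpretation (time versus features) changes, and Conv1D/MaxPooling1D operate along the time axis.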

\n", "system": "" }, { "instruction": "What is the parameter "max_q_size" used for in "model.fit_generator"?", "input": "", "output": "

This simply defines the maximum size of the internal training queue which is used to \"precache\" your samples from the generator. It is used during generation of the queue:

\n\n
def generator_queue(generator, max_q_size=10,\n                    wait_time=0.05, nb_worker=1):\n    '''Builds a threading queue out of a data generator.\n    Used in `fit_generator`, `evaluate_generator`, `predict_generator`.\n    '''\n    q = queue.Queue()\n    _stop = threading.Event()\n\n    def data_generator_task():\n        while not _stop.is_set():\n            try:\n                if q.qsize() < max_q_size:\n                    try:\n                        generator_output = next(generator)\n                    except ValueError:\n                        continue\n                    q.put(generator_output)\n                else:\n                    time.sleep(wait_time)\n            except Exception:\n                _stop.set()\n                raise\n\n    generator_threads = [threading.Thread(target=data_generator_task)\n                         for _ in range(nb_worker)]\n\n    for thread in generator_threads:\n        thread.daemon = True\n        thread.start()\n\n    return q, _stop\n
\n\n

In other words, you have a thread filling the queue up to a given maximum capacity directly from your generator, while (for example) the training routine consumes its elements (and sometimes waits for completion):

\n\n
 while samples_seen < samples_per_epoch:\n     generator_output = None\n     while not _stop.is_set():\n         if not data_gen_queue.empty():\n             generator_output = data_gen_queue.get()\n             break\n         else:\n             time.sleep(wait_time)\n
\n\n

And why is the default 10? No particular reason, like most of the defaults: it simply makes sense, but you could use different values too.

\n\n

A construction like this suggests that the authors thought about expensive data generators which might take time to execute. For example, consider downloading data over a network in a generator call: then it makes sense to precache some of the next batches, and download the following ones in parallel, for the sake of efficiency and robustness to network errors, etc.
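The precaching idea can be sketched with the stdlib alone (a simplified toy version of the queue-filling logic above, not the actual Keras code):

```python
import queue
import threading
import time

max_q_size = 10
q = queue.Queue()
stop = threading.Event()

def batch_generator():
    # toy generator; imagine each batch is expensive to produce
    i = 0
    while True:
        yield i
        i += 1

def producer(gen):
    while not stop.is_set():
        if q.qsize() < max_q_size:   # only fill up to the maximum size
            q.put(next(gen))
        else:
            time.sleep(0.01)         # queue full: wait for the consumer

threading.Thread(target=producer, args=(batch_generator(),), daemon=True).start()

# the "training loop" consumes batches in FIFO order
batches = [q.get() for _ in range(5)]
stop.set()
print(batches)  # [0, 1, 2, 3, 4]
```

A larger max_q_size buys more tolerance to slow generator calls at the cost of memory holding the precached batches.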

\n", "system": "" }, { "instruction": "How can I stop Keras from printing after calling model.predict", "input": "", "output": "

As mentioned by Gerry P, to prevent Keras from printing the output of model.predict(), set the verbose argument to 0 as follows:

\n
agent.model.predict(np.array([0,0,0,0]).reshape(1,4),verbose = 0)\n
\n

Reference: Keras documentation.

\n", "system": "" }, { "instruction": "ValueError: Unknown layer: Functional", "input": "", "output": "

The cause of this error is a version mismatch. For example, you trained the model on TensorFlow 2.3.0 and Keras 2.4.3 (on Colab or locally), and you are now loading the saved model (.h5) with a different version of Keras and TensorFlow; this raises the error. The solution is either to re-train the model with the upgraded versions, or to downgrade your TensorFlow and Keras to the same versions the model was trained on.

\n", "system": "" }, { "instruction": "Saving Keras models with Custom Layers", "input": "", "output": "

Correction number 1 is to use Custom_Objects while loading the Saved Model i.e., replace the code,

\n\n
new_model = tf.keras.models.load_model('model.h5') \n
\n\n

with

\n\n
new_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\n
\n\n

Since we are using Custom Layers to build the Model and before Saving it, we should use Custom Objects while Loading it.

\n\n

Correction number 2 is to add **kwargs in the __init__ function of the Custom Layer like

\n\n
def __init__(self, k, name=None, **kwargs):\n        super(CustomLayer, self).__init__(name=name)\n        self.k = k\n        super(CustomLayer, self).__init__(**kwargs)\n
\n\n

Complete working code is shown below:

\n\n
import tensorflow as tf\n\nclass CustomLayer(tf.keras.layers.Layer):\n    def __init__(self, k, name=None, **kwargs):\n        super(CustomLayer, self).__init__(name=name)\n        self.k = k\n        super(CustomLayer, self).__init__(**kwargs)\n\n\n    def get_config(self):\n        config = super(CustomLayer, self).get_config()\n        config.update({\"k\": self.k})\n        return config\n\n    def call(self, input):\n        return tf.multiply(input, 2)\n\nmodel = tf.keras.models.Sequential([\n    tf.keras.Input(name='input_layer', shape=(10,)),\n    CustomLayer(10, name='custom_layer'),\n    tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer')\n])\ntf.keras.models.save_model(model, 'model.h5')\nnew_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\n\nprint(new_model.summary())\n
\n\n

Output of the above code is shown below:

\n\n
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ncustom_layer_1 (CustomLayer) (None, 10)                0         \n_________________________________________________________________\noutput_layer (Dense)         (None, 1)                 11        \n=================================================================\nTotal params: 11\nTrainable params: 11\nNon-trainable params: 0\n
\n\n

Hope this helps. Happy Learning!

\n", "system": "" }, { "instruction": "tensorflow:Can save best model only with val_acc available, skipping", "input": "", "output": "

I know how frustrating these things can be sometimes... but TensorFlow requires that you explicitly write out the name of the metric you want to monitor.

\n

You will need to actually say 'val_accuracy'

\n
metric = 'val_accuracy'\nModelCheckpoint(filepath=r"C:\\Users\\reda.elhail\\Desktop\\checkpoints\\{}".format(Name), monitor=metric,\n                    verbose=2, save_best_only=True, mode='max')]\n
\n

Hope this helps =)

\n

*** As later noted by BlueTurtle (please give their answer a thumbs up; it is likely still below this one), you also need to use the full metric name consistently in model.compile, ModelCheckpoint, and EarlyStopping.

\n", "system": "" }, { "instruction": "Custom loss function in Keras based on the input data", "input": "", "output": "

I have come across 2 solutions to the question you asked.

\n
    \n
  1. You can pass your input (scalar only) as an argument to the custom loss wrapper function.
  2. \n
\n
    def custom_loss(i):\n\n        def loss(y_true, y_pred):\n            return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...\n        return loss\n\n    def baseline_model():\n        # create model\n        i = Input(shape=(5,))\n        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)\n        o = Dense(1, kernel_initializer='normal', activation='linear')(x)\n        model = Model(i, o)\n        model.compile(loss=custom_loss(i), optimizer=Adam(lr=0.0005))\n        return model\n
\n

This solution is also mentioned in the accepted answer here

\n
    \n
  1. You can pad your label with extra data columns from input and write a custom loss. This is helpful if you just want one/few feature column(s) from your input.
  2. \n
\n
    def custom_loss(data, y_pred):\n\n        y_true = data[:, 0]\n        i = data[:, 1]\n        return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...\n\n\n    def baseline_model():\n        # create model\n        i = Input(shape=(5,))\n        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)\n        o = Dense(1, kernel_initializer='normal', activation='linear')(x)\n        model = Model(i, o)\n        model.compile(loss=custom_loss, optimizer=Adam(lr=0.0005))\n        return model\n\n\n    model.fit(X, np.append(Y_true, X[:, 0:1], axis=1), batch_size = batch_size, epochs=90, shuffle=True, verbose=1)\n
\n

This solution can be found also here in this thread.

\n

I have only used the 2nd method when I had to use input feature columns in the loss. The first method can only be used with scalar arguments, as mentioned in the comments.
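A small numpy sketch of the label-padding trick from method 2 (note the column must be sliced as 2-D, X[:, 0:1], so that np.append with axis=1 works):

```python
import numpy as np

X = np.random.rand(6, 5)       # inputs
Y_true = np.random.rand(6, 1)  # labels

# pad the labels with the first input column before calling model.fit;
# slicing X[:, 0:1] keeps it 2-D so the axis=1 append is valid
data = np.append(Y_true, X[:, 0:1], axis=1)

# inside custom_loss(data, y_pred), the columns are split back apart
y_true = data[:, 0]
i = data[:, 1]

print(data.shape)  # (6, 2)
```

Keras only sees a wider "label" array; the custom loss is responsible for separating the real targets from the smuggled-in feature column.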

\n", "system": "" }, { "instruction": "Read only mode in keras", "input": "", "output": "

I had a similar issue and solved it this way:

\n\n

Store the graph/architecture in JSON format and the weights in h5 format:

\n\n
import json\n\n# lets assume `model` is main model \nmodel_json = model.to_json()\nwith open(\"model_in_json.json\", \"w\") as json_file:\n    json.dump(model_json, json_file)\n\nmodel.save_weights(\"model_weights.h5\")\n
\n\n

Then you need to load the model first to create the graph/architecture, and then load the weights into the model:

\n\n
from keras.models import load_model\nfrom keras.models import model_from_json\nimport json\n\nwith open('model_in_json.json','r') as f:\n    model_json = json.load(f)\n\nmodel = model_from_json(model_json)\nmodel.load_weights('model_weights.h5')\n
\n", "system": "" }, { "instruction": "How to iterate through tensors in custom loss function?", "input": "", "output": "

As usual, don't loop. Looping has severe performance drawbacks and also causes bugs. Use only backend functions unless it's totally unavoidable (usually it isn't).

\n\n
\n\n

Solution for example 3:

\n\n

So, there is a very weird thing there...

\n\n
\n

Do you really want to simply ignore half of your model's predictions? (Example 3)

\n
\n\n

Assuming this is true, just duplicate your tensor in the last dimension, flatten and discard half of it. You have the exact effect you want.

\n\n
def custom_loss(true, pred):\n    n = K.shape(pred)[0:1]\n\n    pred = K.concatenate([pred]*2, axis=-1) #duplicate in the last axis\n    pred = K.flatten(pred)                  #flatten \n    pred = K.slice(pred,                    #take only half (= n samples)\n                   K.constant([0], dtype=\"int32\"), \n                   n) \n\n    return K.abs(true - pred)\n
\n\n

Solution for your loss function:

\n\n

If you have sorted times from greater to lower, just do a cumulative sum.
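As a sanity check, here is a small numpy sketch showing that, once the times are sorted from greater to lower, a single cumulative sum reproduces the per-sample inner loop over j with t_j >= t_i (toy values; the status weighting here follows the usual partial-likelihood form and the sign convention is illustrative):

```python
import numpy as np

pred   = np.array([0.2, -1.0, 0.7, 0.3])  # model outputs, one per sample
time   = np.array([5.0,  9.0, 1.0, 4.0])  # event times
status = np.array([1,    0,   1,   1])    # event indicators

order = np.argsort(-time)          # sort by time, greatest first
p, s = pred[order], status[order]

# loop version: for each i, sum exp(p_j) over j with time_j >= time_i
loop_sums = np.array([np.exp(p[:i + 1]).sum() for i in range(len(p))])

# vectorized version: one cumulative sum, no loop
cum_sums = np.cumsum(np.exp(p))
print(np.allclose(loop_sums, cum_sums))  # True

log_lik = (s * (p - np.log(cum_sums))).sum()
```

The Keras version below does exactly this with K.cumsum after sorting with tf.math.top_k and K.gather.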

\n\n
\n

Warning: If you have one time per sample, you cannot train with mini-batches!!!
\n batch_size = len(labels)

\n
\n\n

It makes sense to have time in an additional dimension (many times per sample), as is done in recurrent and 1D conv networks. Anyway, taking your example as expressed, that is shape (samples_equal_times,) for yTime:

\n\n
def neg_log_likelihood(yTrue,yPred):\n    yStatus = yTrue[:,0]\n    yTime = yTrue[:,1]    \n    n = K.shape(yTrue)[0]    \n\n\n    #sort the times and everything else from greater to lower:\n    #obs, you can have the data sorted already and avoid doing it here for performance\n\n    #important, yTime will be sorted in the last dimension, make sure its (None,) in this case\n    # or that it's (None, time_length) in the case of many times per sample\n    sortedTime, sortedIndices = tf.math.top_k(yTime, n, True)    \n    sortedStatus = K.gather(yStatus, sortedIndices)\n    sortedPreds = K.gather(yPred, sortedIndices)\n\n    #do the calculations\n    exp = K.exp(sortedPreds)\n    sums = K.cumsum(exp)  #this will have the sum for j >= i in the loop\n    logsums = K.log(sums)\n\n    return K.sum(sortedStatus * sortedPreds - logsums)\n
\n", "system": "" }, { "instruction": "How to pickle Keras model?", "input": "", "output": "

As of now, Keras models are pickle-able. But we still recommend using model.save() to save model to disk.

\n", "system": "" }, { "instruction": "How to fix AttributeError: module 'numpy' has no attribute 'square'", "input": "", "output": "

I removed numpy.py then updated my numpy and it worked!

\n", "system": "" }, { "instruction": "Cross Validation in Keras", "input": "", "output": "
\n

If my goal is to fine-tune the network for the entire dataset

\n
\n\n

It is not clear what you mean by \"fine-tune\", or even what exactly is your purpose for performing cross-validation (CV); in general, CV serves one of the following purposes:

\n\n\n\n

Since you don't define any search grid for hyperparameter selection in your code, it would seem that you are using CV in order to get the expected performance of your model (error, accuracy etc).

\n\n

Anyway, for whatever reason you are using CV, the first snippet is the correct one; your second snippet

\n\n
model = None\nmodel = create_model()\nfor train, test in kFold.split(X, Y):\n    train_evaluate(model, X[train], Y[train], X[test], Y[test])\n
\n\n

will train your model sequentially over the different partitions (i.e. train on partition #1, then continue training on partition #2 etc), which essentially is just training on your whole data set, and it is certainly not cross-validation...
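By contrast, a correct CV loop rebuilds the model inside the loop, so each partition is scored by a freshly trained model. A minimal numpy sketch of the fold bookkeeping (the actual model creation and training are stubbed out as a comment):

```python
import numpy as np

n_samples, k = 12, 3
indices = np.arange(n_samples)
folds = np.array_split(indices, k)   # k disjoint test partitions

test_counts = np.zeros(n_samples, dtype=int)
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # correct CV: create_model() and train_evaluate() would go HERE,
    # so every fold is scored by a freshly initialized model
    assert len(set(train_idx) & set(test_idx)) == 0  # partitions are disjoint
    test_counts[test_idx] += 1

print(test_counts)  # each sample lands in a test fold exactly once
```

The crucial point is that the model re-creation happens inside the loop; moving it outside (as in the second snippet) makes every "fold" continue training the same weights.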

\n\n

That said, a final step after the CV, which is often only implied (and frequently missed by beginners), is that after you are satisfied with your chosen hyperparameters and/or model performance as given by your CV procedure, you go back and train your model again, this time with the entire available data.

\n", "system": "" }, { "instruction": "Difference between Conv2D and Convolution2D in Keras", "input": "", "output": "

From the keras source code, they're the same:
\n(The source code changes from time to time and the line number in the link above might eventually be wrong)

\n\n
# Aliases\n\nConvolution1D = Conv1D\nConvolution2D = Conv2D\nConvolution3D = Conv3D\nSeparableConvolution2D = SeparableConv2D\nConvolution2DTranspose = Conv2DTranspose\nDeconvolution2D = Deconv2D = Conv2DTranspose\nDeconvolution3D = Deconv3D = Conv3DTranspose\n
\n", "system": "" }, { "instruction": "Preload whole dataset on gpu for training Keras model", "input": "", "output": "

You don't have to load the whole dataset. You can ingest the data piece by piece using the Dataset class.

\n

Tensorflow can take care of loading more data while your gpu is crunching your numbers. You can follow the below steps.

\n
    \n
  1. Convert your dataset into a TFRecord dataset and save it to your disk.
  2. \n
  3. Load this dataset using the TFRecordDataset class
  4. \n
  5. Ingest it into your Keras model.
  6. \n
\n

You can check the example listed here.

\n

Hope this is helpful.

\n", "system": "" }, { "instruction": "Keras custom loss function: Accessing current input pattern", "input": "", "output": "

You can wrap the loss function as an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to the loss function).

\n\n
def custom_loss_wrapper(input_tensor):\n    def custom_loss(y_true, y_pred):\n        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)\n    return custom_loss\n\ninput_tensor = Input(shape=(10,))\nhidden = Dense(100, activation='relu')(input_tensor)\nout = Dense(1, activation='sigmoid')(hidden)\nmodel = Model(input_tensor, out)\nmodel.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')\n
\n\n

You can verify that input_tensor and the loss value (mostly, the K.mean(input_tensor) part) will change as different X is passed to the model.

\n\n
X = np.random.rand(1000, 10)\ny = np.random.randint(2, size=1000)\nmodel.test_on_batch(X, y)  # => 1.1974642\n\nX *= 1000\nmodel.test_on_batch(X, y)  # => 511.15466\n
\n", "system": "" }, { "instruction": "How to use predict_generator with ImageDataGenerator?", "input": "", "output": "

You can change the value of batch_size in flow_from_directory from the default value (batch_size=32) to batch_size=1. Then set the steps of predict_generator to the total number of your test images. Something like this:

\n\n
test_datagen = ImageDataGenerator(rescale=1./255)\n\ntest_generator = test_datagen.flow_from_directory(\n        test_dir,\n        target_size=(200, 200),\n        color_mode=\"rgb\",\n        shuffle = False,\n        class_mode='categorical',\n        batch_size=1)\n\nfilenames = test_generator.filenames\nnb_samples = len(filenames)\n\npredict = model.predict_generator(test_generator,steps = nb_samples)\n
\n", "system": "" }, { "instruction": "How to use ModelCheckpoint with custom metrics in Keras?", "input": "", "output": "

Yes, it is possible.

\n\n

Define the custom metrics as described in the documentation:

\n\n
import keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n    return K.mean(y_pred)\n\nmodel.compile(optimizer='rmsprop',\n              loss='binary_crossentropy',\n              metrics=['accuracy', mean_pred])\n
\n\n

To check all available metrics:

\n\n
print(model.metrics_names)\n> ['loss', 'acc', 'mean_pred']\n
\n\n

Pass the metric name to ModelCheckpoint through monitor. If you want the metric calculated in the validation, use the val_ prefix.

\n\n
ModelCheckpoint('weights.{epoch:02d}-{val_mean_pred:.2f}.hdf5',\n                monitor='val_mean_pred',\n                save_best_only=True,\n                save_weights_only=True,\n                mode='max',\n                period=1)\n
\n\n

Don't use mode='auto' for custom metrics. Understand why here.

\n\n
\n\n

Why am I answering my own question? Check this.

\n", "system": "" }, { "instruction": "Keras RNN with LSTM cells for predicting multiple output time series based on multiple intput time series", "input": "", "output": "

Initial note. If the time series were short (for example T = 30), we wouldn't need a stateful LSTM, and a classic LSTM would work well.\nIn the OP's question, the time series length is T=3000, so learning can be very slow with a classic LSTM. Learning is improved by cutting the time series into pieces and using a stateful LSTM.

\n\n

Stateful mode with N=batch_size.\nStateful models are tricky with Keras, because you need to be careful about how you cut the time series and select the batch size. In the OP's question, the sample size is N=100. Since we can accept training the model with batches of one hundred (it is not a large number), we will select batch_size=100.

\n\n

Selecting batch_size=N simplifies training because you do not need to reset states inside epochs (so there is no need to write an on_batch_begin callback).

\n\n

There remains the question of cutting the time series. Cutting is a little technical, so I wrote a wrapper function that works in all cases.

\n\n
def stateful_cut(arr, batch_size, T_after_cut):\n    if len(arr.shape) != 3:\n        # N: Independent sample size,\n        # T: Time length,\n        # m: Dimension\n        print(\"ERROR: please format arr as a (N, T, m) array.\")\n\n    N = arr.shape[0]\n    T = arr.shape[1]\n\n    # We need T_after_cut * nb_cuts = T\n    nb_cuts = int(T / T_after_cut)\n    if nb_cuts * T_after_cut != T:\n        print(\"ERROR: T_after_cut must divide T\")\n\n    # We need batch_size * nb_reset = N\n    # If nb_reset = 1, we only reset after the whole epoch, so no need to reset\n    nb_reset = int(N / batch_size)\n    if nb_reset * batch_size != N:\n        print(\"ERROR: batch_size must divide N\")\n\n    # Cutting (technical)\n    cut1 = np.split(arr, nb_reset, axis=0)\n    cut2 = [np.split(x, nb_cuts, axis=1) for x in cut1]\n    cut3 = [np.concatenate(x) for x in cut2]\n    cut4 = np.concatenate(cut3)\n    return(cut4)\n
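A small toy run of this wrapper (with the error checks elided for brevity) shows the resulting shape, (N * nb_cuts, T_after_cut, m), and why consecutive batches continue the same series, which is exactly what stateful training requires:

```python
import numpy as np

def stateful_cut(arr, batch_size, T_after_cut):
    # same logic as above, error checks elided for brevity
    N, T = arr.shape[0], arr.shape[1]
    nb_cuts = T // T_after_cut
    nb_reset = N // batch_size
    cut1 = np.split(arr, nb_reset, axis=0)
    cut2 = [np.split(x, nb_cuts, axis=1) for x in cut1]
    cut3 = [np.concatenate(x) for x in cut2]
    return np.concatenate(cut3)

# toy data: N=4 series, T=6 timesteps, m=2 features
arr = np.arange(4 * 6 * 2).reshape(4, 6, 2)
out = stateful_cut(arr, batch_size=2, T_after_cut=3)
print(out.shape)  # (8, 3, 2) = (N * nb_cuts, T_after_cut, m)

# batch 0 holds rows 0-1 (series 0 and 1, first half); batch 1 holds rows
# 2-3 (the SAME series, second half), so LSTM states carry over correctly
print(np.array_equal(out[0], arr[0, :3]))  # True
print(np.array_equal(out[2], arr[0, 3:]))  # True
```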
\n\n

From now on, it becomes easy to train the model. Since the OP's example is very simple, we do not need additional preprocessing or regularization. I describe how to proceed step by step (for the impatient, the whole self-contained code is available at the very end of this post).

\n\n

First, we load the data and reshape it with the wrapper function.

\n\n
import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, TimeDistributed\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n##\n# Data\n##\nN = X_train.shape[0] # size of samples\nT = X_train.shape[1] # length of each time series\nbatch_size = N # number of time series considered together: batch_size | N\nT_after_cut = 100 # length of each cut part of the time series: T_after_cut | T\ndim_in = X_train.shape[2] # dimension of input time series\ndim_out = y_train.shape[2] # dimension of output time series\n\ninputs, outputs, inputs_test, outputs_test = \\\n  [stateful_cut(arr, batch_size, T_after_cut) for arr in \\\n  [X_train, y_train, X_test, y_test]]\n
\n\n

Then we compile a model with 4 inputs, 3 outputs, and 1 hidden layer containing 10 nodes.

\n\n
##\n# Model\n##\nnb_units = 10\n\nmodel = Sequential()\nmodel.add(LSTM(batch_input_shape=(batch_size, None, dim_in),\n               return_sequences=True, units=nb_units, stateful=True))\nmodel.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel.compile(loss = 'mse', optimizer = 'rmsprop')\n
\n\n

We train the model without resetting states. We can do it only because we have selected batch_size = N.

\n\n
##\n# Training\n##\nepochs = 100\n\nnb_reset = int(N / batch_size)\nif nb_reset > 1:\n    print(\"ERROR: We need to reset states when batch_size < N\")\n\n# When nb_reset = 1, we do not need to reinitialize states\nhistory = model.fit(inputs, outputs, epochs = epochs, \n                    batch_size = batch_size, shuffle=False,\n                    validation_data=(inputs_test, outputs_test))\n
\n\n

We get the evolution of the training/test loss as follows:

\n\n

\"training

\n\n

Now, we define a 'mime' model which is stateless but contains our stateful weights. [Why like this? Prediction with a stateful model through model.predict needs a complete batch in Keras, but we may not have a complete batch to predict...]

\n\n
## Mime model which is stateless but containing stateful weights\nmodel_stateless = Sequential()\nmodel_stateless.add(LSTM(input_shape=(None, dim_in),\n               return_sequences=True, units=nb_units))\nmodel_stateless.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel_stateless.compile(loss = 'mse', optimizer = 'rmsprop')\nmodel_stateless.set_weights(model.get_weights())\n
\n\n

Finally, we can show our incredible predictions on our long time series y1, y2 and y3 (blue for true output ; orange for predicted outputs):

\n\n

For y1:\n\"Prediction

\n\n

For y2:\n\"Prediction

\n\n

For y3:\n\"Prediction

\n\n

Conclusion: It works almost perfectly, except for the first 2-3 dates, where the series is unpredictable by definition. We do not observe any burst when going from one batch to the next.

\n\n

Going further: when N is large, we would like to select batch_size | N with batch_size < N. I have written the full code in https://github.com/ahstat/deep-learning/blob/master/rnn/4_lagging_and_stateful.py (Parts C and D). This github path also shows the efficiency of a classic LSTM for short time series (Part A), and its inefficiency for long time series (Part B). I've written a blog post detailing how to use Keras for time series predictions here: https://ahstat.github.io/RNN-Keras-time-series/ .

\n\n

Complete self-contained code

\n\n
################\n# Code from OP #\n################\nimport numpy as np\ndef random_sample(len_timeseries=3000):\n    Nchoice = 600\n    x1 = np.cos(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x2 = np.cos(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x3 = np.sin(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x4 = np.sin(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    y1 = np.random.random(len_timeseries)\n    y2 = np.random.random(len_timeseries)\n    y3 = np.random.random(len_timeseries)\n    for t in range(3,len_timeseries):\n        ## the output time series depend on input as follows: \n        y1[t] = x1[t-2] \n        y2[t] = x2[t-1]*x3[t-2]\n        y3[t] = x4[t-3]\n    y = np.array([y1,y2,y3]).T\n    X = np.array([x1,x2,x3,x4]).T\n    return y, X\ndef generate_data(Nsequence = 1000):\n    X_train = []\n    y_train = []\n    for isequence in range(Nsequence):\n        y, X = random_sample()\n        X_train.append(X)\n        y_train.append(y)\n    return np.array(X_train),np.array(y_train)\n\nNsequence = 100\nprop = 0.5\nNtrain = int(Nsequence*prop)\nX, y = generate_data(Nsequence)\nX_train = X[:Ntrain,:,:]\nX_test  = X[Ntrain:,:,:]\ny_train = y[:Ntrain,:,:]\ny_test  = y[Ntrain:,:,:] \n\n#X.shape = (N sequence, length of time series, N input features)\n#y.shape = (N sequence, length of time series, N targets)\nprint(X.shape, y.shape)\n# (100, 3000, 4) (100, 3000, 3)\n\n####################\n# Cutting function #\n####################\ndef stateful_cut(arr, batch_size, T_after_cut):\n    if len(arr.shape) != 3:\n        # N: Independent sample size,\n        # T: Time length,\n        # m: Dimension\n        print(\"ERROR: please format arr as a (N, T, m) array.\")\n\n    N = arr.shape[0]\n    T = arr.shape[1]\n\n    # We need T_after_cut * nb_cuts = T\n    nb_cuts = int(T / T_after_cut)\n    if nb_cuts * T_after_cut != T:\n        print(\"ERROR: 
T_after_cut must divide T\")\n\n    # We need batch_size * nb_reset = N\n    # If nb_reset = 1, we only reset after the whole epoch, so no need to reset\n    nb_reset = int(N / batch_size)\n    if nb_reset * batch_size != N:\n        print(\"ERROR: batch_size must divide N\")\n\n    # Cutting (technical)\n    cut1 = np.split(arr, nb_reset, axis=0)\n    cut2 = [np.split(x, nb_cuts, axis=1) for x in cut1]\n    cut3 = [np.concatenate(x) for x in cut2]\n    cut4 = np.concatenate(cut3)\n    return(cut4)\n\n#############\n# Main code #\n#############\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, TimeDistributed\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n##\n# Data\n##\nN = X_train.shape[0] # size of samples\nT = X_train.shape[1] # length of each time series\nbatch_size = N # number of time series considered together: batch_size | N\nT_after_cut = 100 # length of each cut part of the time series: T_after_cut | T\ndim_in = X_train.shape[2] # dimension of input time series\ndim_out = y_train.shape[2] # dimension of output time series\n\ninputs, outputs, inputs_test, outputs_test = \\\n  [stateful_cut(arr, batch_size, T_after_cut) for arr in \\\n  [X_train, y_train, X_test, y_test]]\n\n##\n# Model\n##\nnb_units = 10\n\nmodel = Sequential()\nmodel.add(LSTM(batch_input_shape=(batch_size, None, dim_in),\n               return_sequences=True, units=nb_units, stateful=True))\nmodel.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel.compile(loss = 'mse', optimizer = 'rmsprop')\n\n##\n# Training\n##\nepochs = 100\n\nnb_reset = int(N / batch_size)\nif nb_reset > 1:\n    print(\"ERROR: We need to reset states when batch_size < N\")\n\n# When nb_reset = 1, we do not need to reinitialize states\nhistory = model.fit(inputs, outputs, epochs = epochs, \n                    batch_size = batch_size, shuffle=False,\n                    validation_data=(inputs_test, outputs_test))\n\ndef plotting(history):\n    
plt.plot(history.history['loss'], color = \"red\")\n    plt.plot(history.history['val_loss'], color = \"blue\")\n    red_patch = mpatches.Patch(color='red', label='Training')\n    blue_patch = mpatches.Patch(color='blue', label='Test')\n    plt.legend(handles=[red_patch, blue_patch])\n    plt.xlabel('Epochs')\n    plt.ylabel('MSE loss')\n    plt.show()\n\nplt.figure(figsize=(10,8))\nplotting(history) # Evolution of training/test loss\n\n##\n# Visual checking for a time series\n##\n## Mime model which is stateless but containing stateful weights\nmodel_stateless = Sequential()\nmodel_stateless.add(LSTM(input_shape=(None, dim_in),\n               return_sequences=True, units=nb_units))\nmodel_stateless.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel_stateless.compile(loss = 'mse', optimizer = 'rmsprop')\nmodel_stateless.set_weights(model.get_weights())\n\n## Prediction of a new set\ni = 0 # time series selected (between 0 and N-1)\nx = X_train[i]\ny = y_train[i]\ny_hat = model_stateless.predict(np.array([x]))[0]\n\nfor dim in range(3): # dim = 0 for y1 ; dim = 1 for y2 ; dim = 2 for y3.\n    plt.figure(figsize=(10,8))\n    plt.plot(range(T), y[:,dim])\n    plt.plot(range(T), y_hat[:,dim])\n    plt.show()\n\n## Conclusion: works almost perfectly.\n
\n", "system": "" }, { "instruction": "Using Keras ImageDataGenerator in a regression model", "input": "", "output": "

With Keras 2.2.4 you can use flow_from_dataframe, which does exactly what you want: it lets you flow images from a directory for regression problems. Store all your images in one folder, load a dataframe containing the image IDs in one column and the regression scores (labels) in another, and set class_mode='other' in flow_from_dataframe.

\n

Here you can find an example where the images are in image_dir and the dataframe with the image IDs and the regression scores is loaded with pandas from the "train file":

\n
train_label_df = pd.read_csv(train_file, delimiter=' ', header=None, names=['id', 'score'])\n\ntrain_datagen = ImageDataGenerator(rescale = 1./255, horizontal_flip = True,\n                                   fill_mode = "nearest", zoom_range = 0.2,\n                                   width_shift_range = 0.2, height_shift_range=0.2,\n                                   rotation_range=30) \n\ntrain_generator = train_datagen.flow_from_dataframe(dataframe=train_label_df, directory=image_dir, \n                                              x_col="id", y_col="score", has_ext=True, \n                                              class_mode="other", target_size=(img_width, img_height), \n                                              batch_size=bs)\n
\n", "system": "" }, { "instruction": "Policy Gradients in Keras", "input": "", "output": "

TL;DR

\n\n
    \n
  1. Learn how to implement custom loss functions and gradients using Keras.backend. You will need it for more advanced algorithms and it's actually much easier once you get the hang of it
  2. \n
  3. One CartPole example of using keras.backend is https://gist.github.com/kkweon/c8d1caabaf7b43317bc8825c226045d2 (it uses the TensorFlow backend directly, but the Keras version should be very similar, if not the same)
  4. \n
\n\n

Problem

\n\n

When playing,

\n\n

the agent needs a policy, which is basically a function that maps a state to a probability distribution over actions. The agent then chooses an action according to its policy.

\n\n

i.e, policy = f(state)

\n\n

When training,

\n\n

Policy Gradient does not have a loss function. Instead, it tries to maximize the expected return of rewards. And, we need to compute the gradients of log(action_prob) * advantage

\n\n
    \n
  1. advantage is a function of rewards. \n\n
  2. \n
  3. action_prob is a function of states and action_taken. For example, we need to know which action we took so that we can update parameters to increase/decrease a probability for the action we took.\n\n
  4. \n
\n\n

I'm assuming something like this

\n\n\n\n
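A minimal NumPy sketch of the quantity being maximized, log(action_prob) * advantage. This is illustrative only, not the gist's actual code; the function and variable names here are my own:

```python
import numpy as np

# Illustrative sketch; names are hypothetical, not from the gist.
def policy_gradient_loss(action_probs, actions_onehot, advantages):
    # Probability the policy assigned to the action actually taken
    taken = np.sum(action_probs * actions_onehot, axis=1)
    # We maximize log(prob) * advantage, i.e. minimize its negative
    return -np.mean(np.log(taken) * advantages)

probs = np.array([[0.2, 0.8],    # policy output for state 1
                  [0.6, 0.4]])   # policy output for state 2
acts = np.array([[0, 1],         # took action 1
                 [1, 0]])        # took action 0
adv = np.array([1.0, -0.5])      # advantages computed from the rewards
loss = policy_gradient_loss(probs, acts, adv)
print(loss)  # ≈ -0.0161
```

In Keras you would express the same computation with Keras.backend ops so that gradients can flow through it during training.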

Summary

\n\n

We need two functions

\n\n\n\n

You already know it's not as easy to implement as typical classification problems, where you can just model.compile(...) -> model.fit(X, y)

\n\n

However,

\n\n\n\n

Plus, there are many many kinds of policy gradients.

\n\n\n\n

References

\n\n\n", "system": "" }, { "instruction": "How to create a Image Dataset just like MNIST dataset?", "input": "", "output": "

You can either write a function that loads all your images and stacks them into a numpy array (if they all fit in RAM), or use the Keras ImageDataGenerator (https://keras.io/preprocessing/image/), which includes the function flow_from_directory. You can find an example here: https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d.
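As a sketch of the "stack them into a numpy array" option (the load_image stub and file names below are placeholders, not a real loader):

```python
import numpy as np

# Placeholder loader: in practice you would read and decode the file,
# e.g. with PIL or OpenCV, and return a (28, 28) float array.
def load_image(path):
    return np.zeros((28, 28), dtype=np.float32)

paths = [f"digits/{i}.png" for i in range(5)]   # hypothetical file names
X = np.stack([load_image(p) for p in paths])    # (5, 28, 28), MNIST-like
y = np.array([0, 1, 2, 3, 4])                   # labels you provide yourself
```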

\n", "system": "" }, { "instruction": "Cannot import name 'tf_utils' when using importing keras", "input": "", "output": "

Seems like it was a problem with keras 2.3.0; I installed keras 2.1.5 using pip and it works fine.

\n", "system": "" }, { "instruction": "How to set parameters in keras to be non-trainable?", "input": "", "output": "

You can simply assign a boolean value to the layer property trainable.

\n\n
model.layers[n].trainable = False\n
\n\n

You can check which layers are trainable:

\n\n
for l in model.layers:\n    print(l.name, l.trainable)\n
\n\n

You can also pass it in the layer definition:

\n\n
frozen_layer = Dense(32, trainable=False)\n
\n\n

From Keras documentation:

\n\n
\n

To \"freeze\" a layer means to exclude it from training, i.e. its\n weights will never be updated. This is useful in the context of\n fine-tuning a model, or using fixed embeddings for a text input.
\n You can pass a trainable argument (boolean) to a layer constructor to\n set a layer to be non-trainable.\n Additionally, you can set the trainable property of a layer to True or\n False after instantiation. For this to take effect, you will need to\n call compile() on your model after modifying the trainable property.

\n
\n", "system": "" }, { "instruction": "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [8, 28, 28]", "input": "", "output": "

The input layer of the model you created needs a 4-dimensional tensor to work with, but the x_train tensor you are passing to it has only 3 dimensions.

\n

This means that you have to reshape your training set with .reshape(n_images, 286, 384, 1). Now you have added an extra dimension without changing the data and your model is ready to run.

\n

you need to reshape your x_train tensor to 4 dimensions before training your model.\nFor example:

\n
x_train = x_train.reshape(-1, 28, 28, 1)\n
\n
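A quick check with dummy data of the shape reported in the error ([8, 28, 28]):

```python
import numpy as np

x_train = np.random.rand(8, 28, 28)       # ndim=3, as in the error message
x_train = x_train.reshape(-1, 28, 28, 1)  # add a channel axis -> ndim=4
print(x_train.shape)  # (8, 28, 28, 1)
```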

for more info on keras inputs Check this answer

\n", "system": "" }, { "instruction": "What does `training=True` mean when calling a TensorFlow Keras model?", "input": "", "output": "

Some neural network layers behave differently during training and inference, for example Dropout and BatchNormalization layers. For example

\n\n\n\n

The training argument lets the layer know which of the two \"paths\" it should take. If you set this incorrectly, your network might not behave as expected.
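A pure-NumPy sketch (not Keras' actual implementation) of why the flag matters, using inverted dropout as the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training):
    if not training:
        return x  # inference: the layer is the identity
    mask = rng.random(x.shape) >= rate  # keep each unit with prob 1-rate
    return x * mask / (1.0 - rate)      # scale survivors ("inverted" dropout)

x = np.ones(10)
print(dropout(x, 0.5, training=False))  # unchanged: all ones
print(dropout(x, 0.5, training=True))   # some zeros, survivors scaled to 2.0
```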

\n", "system": "" }, { "instruction": "Difference between model(x) and model.predict(x) in Keras?", "input": "", "output": "

Keras with the tensorflow backend used underlying tensorflow objects, but mostly provided high-level outputs that could be understood outside the tensorflow environment (as an example, it could output numpy arrays or python lists).
Today, given a model in tensorflow 2.0 (built using the keras library),

\n\n
out_np = model.predict(x)\n
\n\n

provides a numpy array which can, as an example, be printed with print(out_np).\n
On the other hand,

\n\n
out_tf = model(x)\n
\n\n

results in a tensorflow object, which can be converted to a numpy array with .numpy().\n
The two results are equivalent; as an example, the following is True:
The two results are equivalent, as an example, we have that the following is True,

\n\n
out_np.max() == out_tf.numpy().max()\n
\n\n

The format may be different, but the meaning of model(x) and model.predict(x) is the same:
given an input x, it is the value of the output nodes of a network characterized by its structure, weights and biases.

\n", "system": "" }, { "instruction": "Keras Conv2D: filters vs kernel_size", "input": "", "output": "

Each convolution layer consists of several convolution channels (aka depth or filters). In practice, this is a number such as 64, 128, 256, 512, etc., and it equals the number of channels in the output of the convolutional layer. kernel_size, on the other hand, is the size of these convolution filters. In practice, it takes values such as 3x3, 1x1 or 5x5, which can be abbreviated to 3, 1 or 5 since the kernels are mostly square in practice.

\n\n

Edit

\n\n

The following quote should make this clearer.

\n\n

Discussion on vlfeat

\n\n

Suppose X is an input with size W x H x D x N (where N is the size of the batch) to a convolutional layer containing filter F (with size FW x FH x FD x K) in a network.

\n\n

The number of feature channels D is the third dimension of the input X here (for example, this is typically 3 at the first input to the network if the input consists of colour images).\nThe number of filters K is the fourth dimension of F.\nThe two concepts are closely linked because if the number of filters in a layer is K, it produces an output with K feature channels. So the input to the next layer will have K feature channels.

\n\n

The FW x FH above is filter size you are looking for.

\n\n

Added

\n\n

You should be familiar with filters. You can consider each filter to be responsible for extracting some type of feature from a raw image. The CNNs try to learn such filters i.e. the filters parametrized in CNNs are learned during training of CNNs. You apply each filter in a Conv2D to each input channel and combine these to get output channels. So, the number of filters and the number of output channels are the same.
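To make the shape bookkeeping concrete, here is a naive NumPy implementation of a 'valid' convolution (a sketch for illustration, not how Keras computes it):

```python
import numpy as np

def conv2d_valid(x, filters):
    # x: (H, W, D) input; filters: (K, FH, FW, D)
    K, FH, FW, D = filters.shape
    H, W, _ = x.shape
    out = np.zeros((H - FH + 1, W - FW + 1, K))
    for k in range(K):  # one output channel per filter
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, k] = np.sum(x[i:i + FH, j:j + FW, :] * filters[k])
    return out

x = np.random.rand(32, 32, 3)    # e.g. a 32x32 RGB image (D=3)
f = np.random.rand(64, 3, 3, 3)  # like Conv2D(filters=64, kernel_size=3)
y = conv2d_valid(x, f)
print(y.shape)  # (30, 30, 64): K=64 output channels
```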

\n", "system": "" }, { "instruction": "How to get accuracy of model using keras?", "input": "", "output": "

You probably didn't add \"acc\" as a metric when compiling the model.

\n\n
model.compile(optimizer=..., loss=..., metrics=['accuracy',...])\n
\n\n

You can get the metrics and loss from any data without training again with:

\n\n
model.evaluate(X, Y)\n
\n", "system": "" }, { "instruction": "How to match cv2.imread to the keras image.img_load output", "input": "", "output": "

OpenCV reads images in BGR format whereas in keras, it is represented in RGB. To get the OpenCV version to correspond to the order we expect (RGB), simply reverse the channels:

\n\n
test_image = cv2.imread('trick.png')\ntest_image = cv2.resize(test_image, (64, 64))\ntest_image = test_image[...,::-1] # Added\n
\n\n

The last line reverses the channels to be in RGB order. You can then feed this into your keras model.

\n\n

Another point I'd like to add is that cv2.imread usually reads in images in uint8 precision. Examining the output of your keras loaded image, you can see that the data is in floating point precision so you may also want to convert to a floating-point representation, such as float32:

\n\n
import numpy as np\n# ...\n# ...\ntest_image = test_image[...,::-1].astype(np.float32)\n
\n\n

As a final point, depending on how you trained your model it's usually customary to normalize the image pixel values to a [0,1] range. If you did this with your keras model, make sure you divide your values by 255 in your image read in through OpenCV:

\n\n
import numpy as np\n# ...\n# ...\ntest_image = (test_image[...,::-1].astype(np.float32)) / 255.0\n
\n", "system": "" }, { "instruction": "Keras: How is Accuracy Calculated for Multi-Label Classification?", "input": "", "output": "

For multi-label classification, I think it is correct to use sigmoid as the activation and binary_crossentropy as the loss.

\n\n

If the output is sparse multi-label, meaning there are a few positive labels and a majority of negative labels, the Keras accuracy metric will be overinflated by the correctly predicted negative labels. If I remember correctly, Keras does not choose the label with the highest probability. Instead, for binary classification, the threshold is 50%. So the prediction would be [0, 0, 0, 0, 0, 1]. And if the actual labels were [0, 0, 0, 0, 0, 0], the accuracy would be 5/6. You can test this hypothesis by creating a model that always predicts the negative label and looking at the accuracy.
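The 5/6 figure can be checked directly (a NumPy sketch of what per-label binary accuracy does):

```python
import numpy as np

y_true = np.array([0, 0, 0, 0, 0, 0])              # all-negative ground truth
y_pred = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.9])  # one confident positive
binary_acc = np.mean((y_pred > 0.5) == y_true)     # 50% threshold per label
print(binary_acc)  # 5 of 6 labels match -> 0.8333...
```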

\n\n

If that's indeed the case, you may try a different metric such as top_k_categorical_accuracy.

\n\n

Another remote possibility I can think of is your training data. Are the labels y somehow \"leaked\" into x? Just a wild guess.

\n", "system": "" }, { "instruction": "classification metrics can't handle a mix of continuous-multioutput and multi-label-indicator targets", "input": "", "output": "
y_pred = (y_pred > 0.5) \n
\n\n

Outputs a boolean matrix. The problem is that it has the same shape as it had before, but when you evaluate accuracy you need a vector of labels.

\n\n

To do this, take np.argmax(y_pred, axis=1) instead, which outputs the correct labels.
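A small illustration with made-up network outputs:

```python
import numpy as np

y_pred = np.array([[0.1, 0.7, 0.2],   # continuous network outputs
                   [0.8, 0.1, 0.1]])
labels = np.argmax(y_pred, axis=1)    # one label per sample
print(labels)  # [1 0]
```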

\n", "system": "" }, { "instruction": "Keras, append to logs from callback", "input": "", "output": "

You can insert your additional metrics into the dictionary logs.

\n\n\n\n
import numpy as np\nfrom keras.callbacks import Callback\n\nclass ComputeMetrics(Callback):\n    def on_epoch_end(self, epoch, logs):\n        logs['val_metric'] = epoch ** 2  # replace it with your metrics\n        if (epoch + 1) % 10 == 0:\n            logs['test_metric'] = epoch ** 3  # same\n        else:\n            logs['test_metric'] = np.nan\n
\n\n

Just remember to place this callback before CSVLogger in your fit call. Callbacks that appear later in the list would receive a modified version of logs. For example,

\n\n
import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.callbacks import CSVLogger\n\nmodel = Sequential([Dense(1, input_shape=(10,))])\nmodel.compile(loss='mse', optimizer='adam')\nmodel.fit(np.random.rand(100, 10),\n          np.random.rand(100),\n          epochs=30,\n          validation_data=(np.random.rand(100, 10), np.random.rand(100)),\n          callbacks=[ComputeMetrics(), CSVLogger('1.log')])\n
\n\n

Now if you take a look at the output log file, you'll see two additional columns test_metric and val_metric:

\n\n
epoch,loss,test_metric,val_loss,val_metric\n0,0.547923130989,nan,0.370979120433,0\n1,0.525437340736,nan,0.35585285902,1\n2,0.501358469725,nan,0.341958616376,4\n3,0.479624577463,nan,0.329370084703,9\n4,0.460121934414,nan,0.317930338383,16\n5,0.440655426979,nan,0.307486981452,25\n6,0.422990380526,nan,0.298160370588,36\n7,0.406809270382,nan,0.289906248748,49\n8,0.3912438941,nan,0.282540213466,64\n9,0.377326357365,729,0.276457450986,81\n10,0.364721306562,nan,0.271435074806,100\n11,0.353612961769,nan,0.266939682364,121\n12,0.343238875866,nan,0.263228923082,144\n13,0.333940329552,nan,0.260326927304,169\n14,0.325931007862,nan,0.25773427248,196\n15,0.317790198028,nan,0.255648627281,225\n16,0.310636150837,nan,0.25411529541,256\n17,0.304091459513,nan,0.252928718328,289\n18,0.298703012466,nan,0.252127869725,324\n19,0.292693507671,6859,0.251701972485,361\n20,0.287824733257,nan,0.251610517502,400\n21,0.283586999774,nan,0.251790778637,441\n22,0.27927801609,nan,0.252100949883,484\n23,0.276239238977,nan,0.252632959485,529\n24,0.273072380424,nan,0.253150621653,576\n25,0.270296501517,nan,0.253555388451,625\n26,0.268056542277,nan,0.254015884399,676\n27,0.266158599854,nan,0.254496408701,729\n28,0.264166412354,nan,0.254723013639,784\n29,0.262506003976,24389,0.255338237286,841\n
\n", "system": "" }, { "instruction": "Using keras tokenizer for new words not in training set", "input": "", "output": "

In Keras Tokenizer you have the oov_token parameter. Just select your token and unknown words will have that one.

\n\n
tokenizer_a = Tokenizer(oov_token=1)\ntokenizer_b = Tokenizer()\ntokenizer_a.fit_on_texts([\"Hello world\"])\ntokenizer_b.fit_on_texts([\"Hello world\"])\n
\n\n

Outputs

\n\n
In [26]: tokenizer_a.texts_to_sequences([\"Hello cruel world\"])\nOut[26]: [[2, 1, 3]]\n\nIn [27]: tokenizer_b.texts_to_sequences([\"Hello cruel world\"])\nOut[27]: [[1, 2]]\n
\n", "system": "" }, { "instruction": "Predicting a multiple forward time step of a time series using LSTM", "input": "", "output": "

I share the same concerns about having too little data, but you can do it like this.

\n\n

First, it's a good idea to keep your values between -1 and +1, so I'd normalize them first.

\n\n

For the LSTM model, you must make sure you're using return_sequences=True.\n
There is nothing \"wrong\" with your model, but it may need more or fewer layers or units to achieve what you desire. (There is no clear answer to this, though.)
\nThere is nothing \"wrong\" with your model, but it may need more or less layers or units to achieve what you desire. (There is no clear answer to this, though).

\n\n

Training the model to predict the next step:

\n\n

All you need is to pass Y as a shifted X:

\n\n
entireData = arrayWithShape((samples,52,1))\nX = entireData[:,:-1,:]\ny = entireData[:,1:,:]\n
\n\n
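You can verify the shift with toy data (shapes as above, values made up):

```python
import numpy as np

entireData = np.arange(2 * 52).reshape(2, 52, 1).astype(float)  # (samples, 52, 1)
X = entireData[:, :-1, :]   # steps 0..50
y = entireData[:, 1:, :]    # steps 1..51: each target is the next step
print(X.shape, y.shape)     # (2, 51, 1) (2, 51, 1)
```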

Train the model using these.

\n\n

Predicting the future:

\n\n

Now, for predicting the future, since we need to use predicted elements as input for more predicted elements, we are going to use a loop and make the model stateful=True.

\n\n

Create a model equal to the previous one, with these changes:

\n\n\n\n

Copy the weights of the previously trained model:

\n\n
newModel.set_weights(oldModel.get_weights())\n
\n\n

Predict only one sample at a time and never forget to call model.reset_states() before starting any sequence.

\n\n

First predict with the sequence you already know (this will make sure the model prepares its states properly for predicting the future)

\n\n
model.reset_states()\npredictions = model.predict(entireData)\n
\n\n

By the way we trained, the last step in predictions will be the first future element:

\n\n
futureElement = predictions[:,-1:,:]\n\nfutureElements = []\nfutureElements.append(futureElement)\n
\n\n

Now we make a loop where this element is the input. (Because of stateful, the model will understand it's a new input step of the previous sequence instead of a new sequence)

\n\n
for i in range(howManyPredictions):\n    futureElement = model.predict(futureElement)\n    futureElements.append(futureElement)\n
\n\n
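To assemble the collected predictions into one sequence (the shapes here are illustrative):

```python
import numpy as np

# Suppose the loop collected 5 predictions, each of shape (1, 1, features)
futureElements = [np.full((1, 1, 2), float(i)) for i in range(5)]
future = np.concatenate(futureElements, axis=1)  # (1, 5, 2): predicted sequence
print(future.shape)
```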
\n\n

This link contains a complete example predicting the future of two features: https://github.com/danmoller/TestRepo/blob/master/TestBookLSTM.ipynb

\n", "system": "" }, { "instruction": "Tree-LSTM in Keras", "input": "", "output": "

You can implement a tree-LSTM in Keras using the Subclassing API. This will allow you to define your own custom layers and models by subclassing the tf.keras.layers.Layer and tf.keras.Model classes, respectively.

\n

To implement a tree-LSTM in the Subclassing API, you will need to define a custom layer that takes a tree-structured input and applies the LSTM operation to each node in the tree. Here is some pseudocode that outlines the steps you can follow:

\n
class TreeLSTMLayer(tf.keras.layers.Layer):\n  def __init__(self, units, **kwargs):\n    super(TreeLSTMLayer, self).__init__(**kwargs)\n    self.units = units\n\n  def build(self, input_shape):\n    # Define the weight matrices and biases for the LSTM operation\n    # (e.g., self.W_i, self.W_f, self.W_o, self.W_c, self.b_i, etc.)\n    # based on the number of units in the layer\n    # (e.g., input_dim = units, output_dim = units)\n    # and the input shape of the tree (i.e., input_shape[0])\n\n  def call(self, inputs):\n    # Unpack the inputs into the tree structure and the initial states\n    # (e.g., tree, h_0, c_0 = inputs)\n\n    # Initialize a list to store the output states for each node in the tree\n    output_states = []\n\n    # Recursively traverse the tree and apply the LSTM operation\n    # at each node, updating the hidden and cell states as you go\n    # (e.g., h_t, c_t = lstm(x_t, h_t-1, c_t-1))\n    def traverse_tree(node, h_t, c_t):\n      # Apply the LSTM operation to the current node\n      # (e.g., i_t, f_t, o_t, g_t = lstm(x_t, h_t, c_t))\n      # Update the hidden and cell states\n      # (e.g., c_t = f_t * c_t + i_t * g_t, h_t = o_t * tf.tanh(c_t))\n      output_states.append((h_t, c_t))\n      # Recursively traverse the children of the current node\n      for child in node.children:\n        traverse_tree(child, h_t, c_t)\n\n    # Start the recursive traversal at the root of the tree\n    traverse_tree(tree.root, h_0, c_0)\n\n    # Return the output states for each node in the tree\n    return output_states\n
\n

Once you have defined your custom TreeLSTMLayer, you can use it to build a tree-LSTM model by subclassing the tf.keras.Model class and using the TreeLSTMLayer as one of the layers in your model.

\n", "system": "" }, { "instruction": "Need To Compile Keras Model Before `model.evaluate()`", "input": "", "output": "

Because evaluate will calculate the loss function and the metrics.

\n

You don't have any of them until you compile the model. They're parameters to the compile method:

\n
model.compile(optimizer=..., loss=..., metrics=...) \n
\n

On the other hand, predict doesn't evaluate any metric or loss, it just passes the input data through the model and gets its output.

\n

You need the "loss" for training too, so you can't train without compiling. And you can compile a model as many times as you want, and even change the parameters.

\n
\n

The outputs and the loss function:

\n

The model's outputs depend on it being defined with weights. That is automatic and you can predict from any model, even without any training. Every model in Keras is already born with weights (either initialized by you or randomly initialized)

\n

You input something, the model calculates the output. At the end of everything, this is all that matters. A good model has proper weights and outputs things correctly.

\n

But before getting to that end, your model needs to be trained.

\n

Now, the loss function takes the current output and compares it with the expected/true result. It's a function supposed to be minimized. The less the loss, the closer your results are to the expected. This is the function from which the derivatives will be taken so the backpropagation algorithm can update the weights.
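For intuition, the 'mse' loss used throughout these examples is just:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])     # expected/true results
y_pred = np.array([1.1, 1.9, 3.2])     # current model outputs
mse = np.mean((y_true - y_pred) ** 2)  # what compile(loss='mse') minimizes
print(mse)  # ~0.02
```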

\n

The loss function is not useful for the final purpose of the model, but it's necessary for training. That's probably why you can have models without loss functions (and consequently, there is no way to evaluate them).

\n", "system": "" }, { "instruction": "Explain with example: how embedding layers in keras works", "input": "", "output": "

The Embedding layer creates embedding vectors out of the input words (I myself still don't understand the math), similarly to what word2vec or pre-calculated GloVe vectors would do.

\n

Before I get to your code, let's make a short example.

\n
texts = ['This is a text', 'This is not a text']\n
\n

First we turn these sentences into vectors of integers, where each word is replaced by the number assigned to it in the dictionary and the order of the vector preserves the sequence of the words.

\n
from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences \nfrom keras.utils import to_categorical\n\nmax_review_length = 6  # maximum length of the sentence\nembedding_vector_length = 3\ntop_words = 10\n\n# num_words is the number of unique words in the sequence, if there's more top count words are taken\ntokenizer = Tokenizer(top_words)\ntokenizer.fit_on_texts(texts)\nsequences = tokenizer.texts_to_sequences(texts)\nword_index = tokenizer.word_index\ninput_dim = len(word_index) + 1\nprint('Found %s unique tokens.' % len(word_index))\n\n# max_review_length is the maximum length of the input text so that we can create vector [... 0,0,1,3,50] where 1,3,50 are individual words\ndata = pad_sequences(sequences, max_review_length)\n\nprint('Shape of data tensor:', data.shape)\nprint(data)\n\n[Out:] \n'This is a text' --> [0 0 1 2 3 4]\n'This is not a text' --> [0 1 2 5 3 4]\n
\n

Now you can input these into the embedding layer.

\n
from keras.models import Sequential\nfrom keras.layers import Embedding\n\nmodel = Sequential()\nmodel.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, mask_zero=True))\nmodel.compile(optimizer='adam', loss='categorical_crossentropy')\noutput_array = model.predict(data)\n
\n

output_array contains array of size (2, 6, 3): 2 input reviews or sentences in my case, 6 is the maximum number of words in each review (max_review_length) and 3 is embedding_vector_length.\nE.g.

\n
array([[[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827]],\n\n   [[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [-0.01736645, -0.03719328,  0.02757809],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827]]], dtype=float32)\n
\n

In your case you have a list of 5000 words, which can create review of maximum 500 words (more will be trimmed) and turn each of these 500 words into vector of size 32.

\n

You can get mapping between the word indexes and embedding vectors by running:

\n
model.layers[0].get_weights()\n
\n

In the case below top_words was 10, so we have mapping of 10 words and you can see that mapping for 0, 1, 2, 3, 4 and 5 is equal to output_array above.

\n
[array([[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827],\n    [-0.01736645, -0.03719328,  0.02757809],\n    [ 0.0100757 , -0.03956784,  0.03794377],\n    [-0.02672029, -0.00879055, -0.039394  ],\n    [-0.00949502, -0.02805768, -0.04179233],\n    [ 0.0180716 ,  0.03622523,  0.02232374]], dtype=float32)]\n
\n

As mentioned in: https://stats.stackexchange.com/questions/270546/how-does-keras-embedding-layer-work these vectors are initiated as random and optimized by the network optimizers just like any other parameter of the network.

\n", "system": "" }, { "instruction": "Can someone explain to me the difference between activation and recurrent activation arguments passed in initialising keras lstm layer?", "input": "", "output": "

On code

\n\n

From line 1932:

\n\n
i = self.recurrent_activation(z0)\nf = self.recurrent_activation(z1)\nc = f * c_tm1 + i * self.activation(z2)\no = self.recurrent_activation(z3)\nh = o * self.activation(c)\n
\n\n

recurrent_activation is used to activate the input/forget/output gates.

\n\n

activation is for the cell state and hidden state.
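A NumPy sketch of one step of the recurrence quoted above. Plain sigmoid stands in for recurrent_activation and tanh for activation (the usual defaults; Keras historically defaulted recurrent_activation to hard_sigmoid):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(z0, z1, z2, z3, c_tm1):
    i = sigmoid(z0)                  # input gate   (recurrent_activation)
    f = sigmoid(z1)                  # forget gate  (recurrent_activation)
    c = f * c_tm1 + i * np.tanh(z2)  # cell state   (activation on z2)
    o = sigmoid(z3)                  # output gate  (recurrent_activation)
    h = o * np.tanh(c)               # hidden state (activation on c)
    return h, c

h, c = lstm_step(0.0, 0.0, 0.0, 0.0, c_tm1=1.0)
print(h, c)  # gates are all 0.5 -> c = 0.5, h = 0.5 * tanh(0.5)
```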

\n", "system": "" }, { "instruction": "Keras: use Tensorboard with train_on_batch()", "input": "", "output": "

A possible way to create the TensorBoard callback, and drive it manually:

\n\n
# This example shows how to use keras TensorBoard callback\n# with model.train_on_batch\n\nimport tensorflow.keras as keras\n\n# Setup the model\nmodel = keras.models.Sequential()\nmodel.add(...) # Add your layers\nmodel.compile(...) # Compile as usual\n\nbatch_size=256\n\n# Create the TensorBoard callback,\n# which we will drive manually\ntensorboard = keras.callbacks.TensorBoard(\n  log_dir='/tmp/my_tf_logs',\n  histogram_freq=0,\n  batch_size=batch_size,\n  write_graph=True,\n  write_grads=True\n)\ntensorboard.set_model(model)\n\n# Transform train_on_batch return value\n# to dict expected by on_batch_end callback\ndef named_logs(model, logs):\n  result = {}\n  for l in zip(model.metrics_names, logs):\n    result[l[0]] = l[1]\n  return result\n\n# Run training batches, notify tensorboard at the end of each epoch\nfor batch_id in range(1000):\n  x_train,y_train = create_training_data(batch_size)\n  logs = model.train_on_batch(x_train, y_train)\n  tensorboard.on_epoch_end(batch_id, named_logs(model, logs))\n\ntensorboard.on_train_end(None)\n
\n", "system": "" }, { "instruction": "How can I use the Keras OCR example?", "input": "", "output": "

Well, I will try to answer everything you asked here:

\n\n

As commented in the OCR code, Keras doesn't support losses with multiple parameters, so the NN loss is calculated in a lambda layer. What does this mean in this case?

\n\n

The neural network may look confusing because it is using 4 inputs ([input_data, labels, input_length, label_length]) and loss_out as output. Besides input_data, everything else is information used only for calculating the loss, it means it is only used for training. We desire something like in line 468 of the original code:

\n\n
Model(inputs=input_data, outputs=y_pred).summary()\n
\n\n

which means \"I have an image as input, please tell me what is written here\". So how to achieve it?

\n\n

1) Keep the original training code as it is, do the training normally;

\n\n

2) After training, save this model Model(inputs=input_data, outputs=y_pred) in a .h5 file to be loaded wherever you want;

\n\n

3) Do the prediction: if you take a look at the code, the input image is inverted and translated, so you can use this code to make it easy:

\n\n
from scipy.misc import imread, imresize\n#use width and height from your neural network here.\n\ndef load_for_nn(img_file):\n    image = imread(img_file, flatten=True)\n    image = imresize(image,(height, width))\n    image = image.T\n\n    images = np.ones((1,width,height)) #change 1 to any number of images you want to predict, here I just want to predict one\n    images[0] = image\n    images = images[:,:,:,np.newaxis]\n    images /= 255\n\n    return images\n
\n\n

With the image loaded, let's do the prediction:

\n\n
def predict_image(image_path): #insert the path of your image \n    image = load_for_nn(image_path) #load from the snippet code\n    raw_word = model.predict(image) #do the prediction with the neural network\n    final_word = decode_output(raw_word)[0] #the output of our neural network is only numbers. Use decode_output from image_ocr.py to get the desirable string.\n    return final_word\n
\n\n
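If you don't have decode_output at hand, a minimal greedy CTC-style decoding (take the argmax per timestep, collapse consecutive repeats, then drop the blank) can be sketched like this. This is illustrative only; the real helper in image_ocr.py uses the trained alphabet:

```python
import numpy as np

# Hypothetical sketch of greedy CTC decoding.
def greedy_ctc_decode(probs, alphabet, blank_index):
    best_path = np.argmax(probs, axis=1)   # best class per timestep
    decoded = []
    previous = None
    for idx in best_path:
        if idx != previous and idx != blank_index:
            decoded.append(alphabet[idx])
        previous = idx
    return "".join(decoded)

# Toy example: 3 classes, index 2 is the blank.
probs = np.array([[0.9, 0.05, 0.05],   # 'a'
                  [0.8, 0.1, 0.1],     # 'a' again (repeat, collapsed)
                  [0.1, 0.1, 0.8],     # blank
                  [0.1, 0.8, 0.1]])    # 'b'
print(greedy_ctc_decode(probs, "ab", blank_index=2))  # → "ab"
```

Note how the blank between two identical symbols is what allows genuine repeats ("aa") to survive the collapsing step.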

This should be enough. In my experience, the images used in the training are not good enough to make good predictions; I will release code using other datasets that improved my results later if necessary.

\n\n

Answering related questions:

\n\n\n\n

It is a technique used to improve sequence classification. The original paper proves it improves results on discovering what is said in audio. In this case it is a sequence of characters. The explanation is a bit tricky, but you can find a good one here.

\n\n\n\n

I am not sure but you could take a look at Attention mechanism in neural networks. I don't have any good link now but I know it could be the case.

\n\n\n\n

OpenCV implements Maximally Stable Extremal Regions (known as MSER). I really like the results of this algorithm; it is fast and was good enough for me when I needed it.

\n\n

As I said before, I will release a code soon. I will edit the question with the repository when I do, but I believe the information here is enough to get the example running.

\n", "system": "" }, { "instruction": "R keras package Error: Python module tensorflow.contrib.keras.python.keras was not found", "input": "", "output": "

I had a similar problem. Restart RStudio, load the keras and tensorflow libraries, and type use_condaenv(\"r-tensorflow\"). That fixed it for me.

\n", "system": "" }, { "instruction": "How to make Keras use Tensorflow backend in Anaconda?", "input": "", "output": "

This happens because the keras conda-forge package puts a file in ${CONDA_PREFIX}/etc/conda/activate.d/keras_activate.sh, which sets the environment variable KERAS_BACKEND

\n\n
(root) [root@starlabs ~]# cat $CONDA_PREFIX/etc/conda/activate.d/keras_activate.sh\n#!/bin/bash\nif [ \"$(uname)\" == \"Darwin\" ]\nthen\n    # for Mac OSX\n    export KERAS_BACKEND=tensorflow\nelif [ \"$(uname)\" == \"Linux\" ]\nthen\n    # for Linux\n    export KERAS_BACKEND=theano\nfi\n
\n\n

As you can see from the file, in Linux, it sets the value to 'theano' and according to the official docs:

\n\n
\n

the environment variable KERAS_BACKEND will override what is\n defined in your config file

\n
\n\n

To work around this, you can either edit this file and change 'theano' to 'tensorflow' (which would probably get overwritten on reinstall or on changing environments) or, do the following:

\n\n
export KERAS_BACKEND=tensorflow\npython /path/to/python/program.py\n
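The same override can also be done from inside Python, as long as the variable is set before keras is imported (stdlib only; the commented import is where Keras would pick the value up):

```python
import os

# Set the backend before the first `import keras` -- Keras reads this
# environment variable (overriding ~/.keras/keras.json) at import time.
os.environ["KERAS_BACKEND"] = "tensorflow"

print(os.environ["KERAS_BACKEND"])  # → tensorflow
# import keras  # would now use the TensorFlow backend
```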
\n", "system": "" }, { "instruction": "Running Keras model for prediction in multiple threads", "input": "", "output": "

Multithreading in Python doesn't necessarily make better use of your resources, since Python uses a global interpreter lock and only one native thread can run at a time.

\n

In Python, you should usually use multiprocessing to utilize your resources, but since we're talking about Keras models, I'm not sure even that is the right thing to do.\nLoading several models in several processes has its own overhead, and you could simply increase the batch size, as others have already pointed out.

\n

Alternatively, if you have a heavy pre-processing stage, you could preprocess your data in one process and predict in another (although I doubt that would be necessary either).

\n", "system": "" }, { "instruction": "How to manually specify class labels in keras flow_from_directory?", "input": "", "output": "

You could simply use flow_from_directory and extend it to multiclass in the following manner:

\n\n
def multiclass_flow_from_directory(flow_from_directory_gen, multiclasses_getter):\n    for x, y in flow_from_directory_gen:\n        yield x, multiclasses_getter(x, y)\n
\n\n

Here multiclasses_getter assigns a multiclass vector / your multiclass representation to your images. Note that x and y are not single examples but batches of examples, so this should be taken into account in your multiclasses_getter design.
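As an illustration, a hypothetical multiclasses_getter that maps each single-label batch to a multi-hot vector might look like this (the mapping table and shapes are made up for the example):

```python
import numpy as np

# Hypothetical mapping from each original class id to a multi-hot label
# vector; in a real setup this table encodes your own multiclass scheme.
LABEL_TABLE = np.array([[1, 0, 1],    # class 0 -> labels {0, 2}
                        [0, 1, 0],    # class 1 -> labels {1}
                        [1, 1, 0]])   # class 2 -> labels {0, 1}

def multiclasses_getter(x, y):
    # y is a batch of one-hot rows, e.g. shape (batch, 3); operate batch-wise
    class_ids = np.argmax(y, axis=1)
    return LABEL_TABLE[class_ids]

y_batch = np.array([[1, 0, 0], [0, 0, 1]])   # one-hot classes 0 and 2
x_batch = np.zeros((2, 4, 4, 3))             # dummy image batch
print(multiclasses_getter(x_batch, y_batch))
# [[1 0 1]
#  [1 1 0]]
```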

\n", "system": "" }, { "instruction": "How to map a function with additional parameter using the new Dataset api in TF1.3?", "input": "", "output": "

Here is an example using a lambda expression to wrap the function to which we want to pass an argument:

\n\n
import tensorflow as tf\ndef fun(x, arg):\n    return x * arg\n\nmy_arg = tf.constant(2, dtype=tf.int64)\nds = tf.data.Dataset.range(5)\nds = ds.map(lambda x: fun(x, my_arg))\n
\n\n

In the above, the signature of the function provided to map must match the contents of our dataset. So we have to write our lambda expression to match that. Here it is simple, as there is only one element contained in the dataset, the x that contains elements in the range from 0 to 4.

\n\n

If necessary, you can pass in an arbitrary number of external arguments from outside the dataset: ds = ds.map(lambda x: my_other_fun(x, arg1, arg2, arg3)), and so on.

\n\n

To verify that the above works, we can observe that the mapping indeed multiplies each dataset element by two:

\n\n
iterator = ds.make_initializable_iterator()\nnext_x = iterator.get_next()\nwith tf.Session() as sess:\n    sess.run(iterator.initializer)\n\n    while True:\n      try:\n        print(sess.run(next_x))\n      except tf.errors.OutOfRangeError:\n        break\n
\n\n

The output:

\n\n
0\n2\n4\n6\n8\n
\n", "system": "" }, { "instruction": "Tensorflow: Can't understand ctc_beam_search_decoder() output sequence", "input": "", "output": "

As indicated in tf.nn.ctc_beam_search_decoder documentation, the shape of the output is not [batch_size, max_sequence_len]. Instead, it is

\n\n
[batch_size, max_decoded_length[j]]\n
\n\n

(with j=0 in your case).

\n\n

Based on the beginning of section 2 of this paper (which is cited in the github repository), max_decoded_length[0] is bounded from above by max_sequence_len, but they are not necessarily equal. The relevant citation is:

\n\n
\n

Let S be a set of training examples drawn from a fixed distribution\n D_{XxZ}. The input space X = (R^m) is the set of all sequences of m\n dimensional real valued vectors. The target space Z = L* is the set of\n all sequences over the (finite) alphabet L of labels. In general, we\n refer to elements of L* as label sequences or labellings. Each example\n in S consists of a pair of sequences (x, z). The target sequence z =\n (z1, z2, ..., zU) is at most as long as the input sequence x = (x1,\n x2, ..., xT ), i.e. U<=T. Since the input and target sequences are\n not generally the same length, there is no a priori way of aligning\n them.

\n
\n\n

In fact, max_decoded_length[0] depends on the specific matrix softmax_outputs. In particular, two such matrices with exactly the same dimensions can result in different max_decoded_length[0].

\n\n

For example, if you replace the row

\n\n
softmax_outputs = np.array([[[0.1, 0.1, 0.8], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1]],\n                                [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]],\n                                [[0.1, 0.7, 0.2], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]],\n                                [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]]])\n
\n\n

with the rows

\n\n
np.random.seed(7)\nr=np.random.randint(0,100,size=(4,5,3))\nsoftmax_outputs=r/np.sum(r,2).reshape(4,5,1)\n
\n\n

you'll get the output

\n\n
[[1 0 1]\n [1 0 1]\n [1 0 0]\n [1 0 0]]\n
\n\n

(in the above examples, softmax_outputs consists of logits and it is exactly of the same dimensions as the matrix you provided).

\n\n

On the other hand, changing the seed to np.random.seed(50) gives the output

\n\n
[[1 0]\n [1 0]\n [1 0]\n [0 1]]\n
\n\n

P.S.

\n\n

Regarding the last part of your question:

\n\n
\n

In this case I would expect the output to be similar to:

\n\n
[[2, 0, 0, 0, 0],\n [2, 2, 2, 2, 2],\n [1, 2, 2, 2, 2],\n [2, 2, 2, 2, 2]]\n
\n
\n\n

Note that, based on the documentation, num_classes actually represents num_labels + 1. Specifically:

\n\n
\n

The inputs Tensor's innermost dimension size, num_classes, represents\n num_labels + 1 classes, where num_labels is the number of true labels,\n and the largest value (num_classes - 1) is reserved for the blank\n label.

\n \n

For example, for a vocabulary containing 3 labels [a, b, c],\n num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank:\n 3}.

\n
\n\n

So the true labels in your case are 0 and 1, and 2 is reserved for the blank label. The blank label represents the situation of observing no label (section 3.1 here):

\n\n
\n

A CTC network has a softmax output layer (Bridle, 1990) with one more\n unit than there are labels in L. The activations of the first |L|\n units are interpreted as the probabilities of observing the\n corresponding labels at particular times. The activation of the extra\n unit is the probability of observing a \u2018blank\u2019, or no label. Together,\n these outputs define the probabilities of all possible ways of\n aligning all possible label sequences with the input sequence.

\n
\n", "system": "" }, { "instruction": "How does TensorFlow's MultiRnnCell work?", "input": "", "output": "

Study this blog post as well as the provided implementation. It describes in detail how to use MultiRNNCell to stack multiple RNN cells.

\n\n

\"enter

\n", "system": "" }, { "instruction": "ImportError: No module named datasets", "input": "", "output": "
pip install datasets\n
\n

I solved it this way.

\n", "system": "" }, { "instruction": "numpy random choice in Tensorflow", "input": "", "output": "

No, but you can achieve the same result using tf.multinomial:

\n\n
elems = tf.convert_to_tensor([1,2,3,5])\nsamples = tf.multinomial(tf.log([[1, 0, 0.3, 0.6]]), 1) # note log-prob\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 1\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 5\n
\n\n

The [0][0] part is there because multinomial expects a row of unnormalized log-probabilities for each element of the batch, and also has another dimension for the number of samples.
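For comparison, the NumPy call this emulates looks like the following; note that np.random.choice wants normalized probabilities, while the TF op above takes unnormalized log-probabilities:

```python
import numpy as np

elems = np.array([1, 2, 3, 5])
weights = np.array([1, 0, 0.3, 0.6])   # unnormalized, as in the TF example
probs = weights / weights.sum()        # np.random.choice needs probs summing to 1

np.random.seed(0)
sample = np.random.choice(elems, p=probs)
print(sample)  # one of 1, 3 or 5 (2 has zero probability)
```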

\n", "system": "" }, { "instruction": "Installing TensorFlow on Windows (Python 3.6.x)", "input": "", "output": "

Update 15.11.2017

\n\n

It seems that by now it is working as one would expect. Running the following commands with the pip and Python versions below should work.

\n\n
\n\n

Installing with Python 3.6.x

\n\n
\n\n

Version

\n\n
\n

Python: 3.6.3
\n pip: 9.0.1

\n
\n\n
\n\n

Installation Commands

\n\n

The following commands are based on the installation guide here.

\n\n

using cmd

\n\n
C:> pip3 install --upgrade tensorflow // cpu\nC:> pip3 install --upgrade tensorflow-gpu // gpu\n
\n\n

using Anaconda

\n\n
C:> conda create -n tensorflow python=3.5 \nC:> activate tensorflow\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu \n
\n\n

Additional Information

\nA list of common installation problems can be found here.

\n\n

You can find an example console output of a successful tensorflow cpu installation here.

\n\n
\n\n

Old response:

\n\n

Okay, to conclude: use version 3.5.2!
\nNeither 3.5.1 nor 3.6.x seems to work at the moment.

\n\n

Versions:

\n\n
\n

Python: 3.5.2, pip: 8.1.1 (Python 3.5)

\n
\n\n

Commands:

\n\n
// cpu\nC:> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl\n\n// gpu\nC:> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.0rc0-cp35-cp35m-win_amd64.whl\n
\n\n

\n", "system": "" }, { "instruction": "Install Cuda without root", "input": "", "output": "

Update: the installation UI for 10.1 changed. The following works:

\n\n\n\n
\n\n

Thank you very much for the hints in the question! I just want to complete it with an approach that worked for me, also inspired by this gist, that hopefully helps in situations where a valid driver is installed and installing a more recent CUDA on Linux without root permissions is still needed.

\n\n

TL;DR: Here are the steps to install CUDA9+CUDNN7 on Debian, and installing a pre-compiled version of TensorFlow1.4 on Python2.7 to test that everything works. Everything without root privileges and via terminal. Should also work for other CUDA, CUDNN, TensorFlow and Python versions on other Linux systems too.

\n\n
\n\n

INSTALLATION

\n\n
    \n
  1. Go to NVIDIA's official release web for CUDA (as for Nov. 2017, CUDA9 is out): https://developer.nvidia.com/cuda-downloads.

  2. Under your Linux distro, select the runfile (local) option. Note that the sudo indication present in the installation instructions is deceiving, since it is possible to run this installer without root permissions. On a server, one easy way is to copy the <LINK> of the Download button and, in any location of your home directory, run wget <LINK>. It will download the <INSTALLER> file.

  3. Run chmod +x <INSTALLER> to make it executable, and execute it: ./<INSTALLER>.

  4. Accept the EULA, say no to driver installation, and enter a <CUDA> location under your home directory to install the toolkit and a <CUDASAMPLES> location for the samples.

  5. Not asked here but recommended: download a compatible CUDNN file from the official web (you need to sign in). In my case, I downloaded cudnn-9.0-linux-x64-v7.tgz, compatible with CUDA9, into the <CUDNN> folder. Uncompress it: tar -xzvf ....

  6. Optional: compile the samples. cd <CUDASAMPLES> && make. There are some very nice examples there, and a very good starting point to write some CUDA scripts yourself.

  7. (If you did 5.): Copy the required files from <CUDNN> into <CUDA>, and grant reading permission to the user (not sure if needed):
\n\n
cp -P <CUDNN>/cuda/include/cudnn.h <CUDA>/include/\ncp -P <CUDNN>/cuda/lib64/libcudnn* <CUDA>/lib64\nchmod a+r <CUDA>/include/cudnn.h <CUDA>/lib64/libcudnn*\n
\n\n
    \n
  1. Add the library to your environment. This is typically done by adding the following two lines to your ~/.bashrc file (in this example, the <CUDA> directory was ~/cuda9/):
\n\n
export PATH=<CUDA>/bin:$PATH\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<CUDA>/lib64/\n
\n\n
\n\n

FOR QUICK TESTING OR TENSORFLOW USERS

\n\n

The quickest way to get a TensorFlow compatible with CUDA9 and CUDNN7 (and a very quick way to test this) is to download a precompiled wheel file and install it with pip install <WHEEL>. Most of the versions you need can be found in mind's repo (thanks a lot guys). A minimal test that confirms that CUDNN is also working involves the use of tf.nn.conv2d:

\n\n
import tensorflow as tf\nx = tf.nn.conv2d(tf.ones([1,1,10,1]), tf.ones([1,5,1,1]), strides=[1, 1, 1, 1], padding='SAME')\nwith tf.Session() as sess:\n    sess.run(x) # this should output a tensor of shape (1,1,10,1) with [3,4,5,5,5,5,5,5,4,3]\n
\n\n

In my case, the wheel I installed required Intel's MKL library, as explained here. Again, from the terminal and without root permissions, these are the steps I followed to install the library and make TensorFlow find it (reference):

\n\n
    \n
  1. git clone https://github.com/01org/mkl-dnn.git

  2. cd mkl-dnn/scripts && ./prepare_mkl.sh && cd ..

  3. mkdir -p build && cd build

  4. cmake -D CMAKE_INSTALL_PREFIX:PATH=<TARGET_DIR_IN_HOME> ..

  5. make # this takes a while (optionally run make doc afterwards if you have doxygen)

  6. make test # also takes a while

  7. make install # installs into <TARGET_DIR_IN_HOME>

  8. add the following to your ~/.bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TARGET_DIR_IN_HOME>/lib
\n\n
\n\n

Hope this helps!
\nAndres

\n", "system": "" }, { "instruction": "TensorFlow: Remember LSTM state for next batch (stateful LSTM)", "input": "", "output": "

I found out it was easiest to save the whole state for all layers in a placeholder.

\n\n
init_state = np.zeros((num_layers, 2, batch_size, state_size))\n\n...\n\nstate_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])\n
\n\n

Then unpack it and create a tuple of LSTMStateTuples before using the native tensorflow RNN Api.

\n\n
l = tf.unpack(state_placeholder, axis=0)\nrnn_tuple_state = tuple(\n[tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])\n for idx in range(num_layers)]\n)\n
\n\n

Then pass it to the RNN API:

\n\n
cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)\ncell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True)\noutputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state)\n
\n\n

The state variable will then be fed to the next batch through the placeholder.

\n", "system": "" }, { "instruction": "How does one initialize a variable with tf.get_variable and a numpy value in TensorFlow?", "input": "", "output": "

The following works, if you convert the constant NumPy array into a constant Tensor:

\n\n
init = tf.constant(np.random.rand(1, 2))\ntf.get_variable('var_name', initializer=init)\n
\n\n

The documentation for get_variable is a little lacking indeed. Just for your reference, the initializer argument has to be either a TensorFlow Tensor object (which can be constructed by calling tf.constant on a numpy value in your case), or a 'callable' that takes two arguments, shape and dtype, the shape and data type of the value that it's supposed to return. Again, in your case, you can write the following in case you wanted to use the 'callable' mechanism:

\n\n
init = lambda shape, dtype: np.random.rand(*shape)\ntf.get_variable('var_name', initializer=init, shape=[1, 2])\n
\n", "system": "" }, { "instruction": "Training on imbalanced data using TensorFlow", "input": "", "output": "

(1) It's OK to use your strategy. I'm working with imbalanced data as well; I first try down-sampling and up-sampling methods to make the training set evenly distributed, or use an ensemble method to train each classifier on an evenly distributed subset.

\n\n

(2) I haven't seen any method to maximise the AUROC directly. My thought is that AUROC is based on the true positive and false positive rates, which don't tell how well the model works on each instance; thus, it may not necessarily maximise the capability to separate the classes.

\n\n

(3) Regarding weighting the cost by the ratio of class instances, it is similar to Loss function for class imbalanced binary classifier in Tensor flow\nand the answer there.
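As a sketch of option (3), a class-weighted cross-entropy in plain NumPy might look like this; the inverse-frequency weight is just one common choice, so treat the exact scheme as an assumption:

```python
import numpy as np

def weighted_binary_cross_entropy(y_true, y_pred, pos_weight):
    # Standard binary cross-entropy, with the positive-class term scaled by
    # pos_weight (e.g. pos_weight = n_negative / n_positive).
    eps = 1e-12
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(pos_weight * y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 0, 0])           # imbalanced: 1 positive, 3 negatives
y_pred = np.array([0.3, 0.2, 0.1, 0.2])
plain    = weighted_binary_cross_entropy(y_true, y_pred, pos_weight=1.0)
weighted = weighted_binary_cross_entropy(y_true, y_pred, pos_weight=3.0)
print(weighted > plain)  # → True: errors on the rare class now cost more
```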

\n", "system": "" }, { "instruction": "Could not load dynamic library 'libnvinfer.so.7'", "input": "", "output": "

For me, creating symbolic links that point the libnvinfer version-7 names at the installed version-8 libraries worked:

\n
# the following path will be different for you - depending on your install method\n$ cd env/lib/python3.10/site-packages/tensorrt\n\n# create symbolic links\n$ ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.7\n$ ln -s libnvinfer.so.8 libnvinfer.so.7\n\n# add tensorrt to library path\n$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/env/lib/python3.10/site-packages/tensorrt/\n
\n", "system": "" }, { "instruction": "NotImplementedError: Cannot convert a symbolic Tensor (lstm_2/strided_slice:0) to a numpy array. T", "input": "", "output": "

I solved it by downgrading NumPy to 1.18.5:

\n
pip install -U numpy==1.18.5\n
\n", "system": "" }, { "instruction": "TensorFlow 2.1.0: has no attribute 'random_normal'", "input": "", "output": "

It was moved to tf.random.normal (along with all the other tf.random_* functions)

\n", "system": "" }, { "instruction": "Cannot run tflite model on GPU (Jetson Nano) using Python", "input": "", "output": "

TFLite doesn't support Nvidia GPUs as per this link

\n", "system": "" }, { "instruction": "How to handle non-determinism when training on a GPU?", "input": "", "output": "

TL;DR

\n\n

That, but much longer

\n

When you see neural network operations as mathematical operations, you would expect everything to be deterministic. Convolutions, activations, cross-entropy \u2013 everything here are mathematical equations and should be deterministic. Even pseudo-random operations such as shuffling, drop-out, noise and the likes, are entirely determined by a seed.

\n

When you see those operations from their computational implementation, on the other hand, you see them as massively parallelized computations, which can be source of randomness unless you are very careful.

\n

The heart of the problem is that, when you run operations on several parallel threads, you typically do not know which thread will end first. It is not important when threads operate on their own data, so for example, applying an activation function to a tensor should be deterministic. But when those threads need to synchronize, such as when you compute a sum, then the result may depend on the order of the summation, and in turn, on which thread ended first.

\n
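You can see the summation-order effect with nothing but standard floating point, no GPU needed:

```python
# Floating-point addition is not associative, so the order in which parallel
# threads combine partial sums can change the final bits of the result.
a, b, c = 0.1, 0.2, 0.3

left_first  = (a + b) + c
right_first = a + (b + c)

print(left_first == right_first)   # → False
print(left_first, right_first)     # 0.6000000000000001 vs 0.6
```

On a GPU, thousands of threads reduce partial sums in a scheduler-dependent order, so this tiny discrepancy becomes run-to-run nondeterminism.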

From there, you have broadly speaking two options:

\n\n

Which route takes CuDNN? Well, mostly the deterministic one. In recent releases, deterministic operations are the norm rather than the exception. But it used to offer many non-deterministic operations, and more importantly, it used to not offer some operations such as reduction, that people needed to implement themselves in CUDA with a variable degree of consideration to determinism.

\n

Some libraries such as theano were more ahead of this topic, by exposing early on a deterministic flag that the user could turn on or off \u2013 but as you can see from its description, it is far from offering any guarantee.

\n
\n

If more, sometimes we will select some implementations that are more deterministic, but slower. In particular, on the GPU, we will avoid using AtomicAdd. Sometimes we will still use non-deterministic implementation, e.g. when we do not have a GPU implementation that is deterministic. Also, see the dnn.conv.algo* flags to cover more cases.

\n
\n

In TensorFlow, the realization of the need for determinism has been rather late, but it's slowly getting there \u2013 helped by the advance of CuDNN on that front also. For a long time, reductions have been non-deterministic, but now they seem to be deterministic. The fact that CuDNN introduced deterministic reductions in version 6.0 may have helped of course.

\n

It seems that currently, the main obstacle for TensorFlow towards determinism is the backward pass of the convolution. It is indeed one of the few operations for which CuDNN proposes a non-deterministic algorithm, labeled CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0. This algorithm is still in the list of possible choices for the backward filter in TensorFlow. And since the choice of the filter seems to be based on performance, it could indeed be picked if it is more efficient. (I am not so familiar with TensorFlow's C++ code so take this with a grain of salt.)

\n

Is this important?

\n

If you are debugging an issue, determinism is not merely important: it is mandatory. You need to reproduce the steps that led to a problem. This is currently a real issue with toolkits like TensorFlow. To mitigate this problem, your only option is to debug live, adding checks and breakpoints at the correct locations \u2013 not great.

\n

Deployment is another aspect of things, where it is often desirable to have a deterministic behavior, in part for human acceptance. While nobody would reasonably expect a medical diagnosis algorithm to never fail, it would be awkward that a computer could give the same patient a different diagnosis depending on the run. (Although doctors themselves are not immune to this kind of variability.)

\n

Those reasons are rightful motivations to fix non-determinism in neural networks.

\n

For all other aspects, I would say that we need to accept, if not embrace, the non-deterministic nature of neural net training. For all purposes, training is stochastic. We use stochastic gradient descent, shuffle data, use random initialization and dropout \u2013 and more importantly, training data is itself but a random sample of data. From that standpoint, the fact that computers can only generate pseudo-random numbers with a seed is an artifact. When you train, your loss is a value that also comes with a confidence interval due to this stochastic nature. Comparing those values to optimize hyper-parameters while ignoring those confidence intervals does not make much sense \u2013 therefore it is vain, in my opinion, to spend too much effort fixing non-determinism in that, and many other, cases.

\n", "system": "" }, { "instruction": "TensorFlow : failed call to cuInit: CUDA_ERROR_NO_DEVICE", "input": "", "output": "

The issue was solved on GitHub. This error message will be shown if you set an invalid value for the CUDA_VISIBLE_DEVICES environment variable, e.g. when you only have a single GPU (which has ID 0) and set CUDA_VISIBLE_DEVICES=1 or CUDA_VISIBLE_DEVICES=2.

\n", "system": "" }, { "instruction": "How to Suppress Tensorflow warning displayed in result", "input": "", "output": "

After hours of searching, I found the answer on Stack Overflow itself, where it was provided for a different issue. That solution worked here as well.

\n

Here is the solution for TF 1.x:

\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n

For TF 2.x:

\n
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n
\n

Source:\nIs there a way to suppress the messages TensorFlow prints?

\n", "system": "" }, { "instruction": "How to improve accuracy of Tensorflow camera demo on iOS for retrained graph", "input": "", "output": "

Since you are not using the YOLO detector, the MAINTAIN_ASPECT flag is set to false. Hence the image in the Android app is not being cropped, but scaled. However, in the code snippet provided I don't see the actual initialisation of the flag. Confirm that the value of the flag is actually false in your app.

\n\n

I know this isn't a complete solution but hope this helps you in debugging the issue.

\n", "system": "" }, { "instruction": "Very low GPU usage during training in Tensorflow", "input": "", "output": "

MNIST-sized networks are tiny, and it's hard to achieve high GPU (or CPU) efficiency for them; I think 30% is not unusual for your application. You will get higher computational efficiency with a larger batch size, meaning you can process more examples per second, but you will also get lower statistical efficiency, meaning you need to process more examples in total to reach the target accuracy. So it's a trade-off. For tiny character models like yours, the statistical efficiency drops off very quickly after a batch size of 100, so it's probably not worth trying to grow the batch size for training. For inference, you should use the largest batch size you can.

\n", "system": "" }, { "instruction": "TensorFlow Variables and Constants", "input": "", "output": "

In TensorFlow, the difference between constants and variables is that when you declare a constant, its value can't be changed in the future (also, the initialization must be with a value, not with an operation).

\n\n

Nevertheless, when you declare a Variable, you can change its value in the future with tf.assign() method (and the initialization can be achieved with a value or operation).

\n\n

The function tf.global_variables_initializer() initialises all variables in your code with the value passed as parameter, but it works in async mode, so it doesn't work properly when dependencies exist between variables.

\n\n

Your first code (#1) works properly because there are no dependencies in the variable initialization and the constant is constructed with a value.

\n\n

The second code (#2) doesn't work because of the async behavior of tf.global_variables_initializer(). You can fix it using tf.variables_initializer() as follows:

\n\n
x = tf.Variable(35, name='x')\nmodel_x = tf.variables_initializer([x])\n\ny = tf.Variable(x + 5, name='y')\nmodel_y = tf.variables_initializer([y])\n\n\nwith tf.Session() as session:\n   session.run(model_x)\n   session.run(model_y)\n   print(session.run(y))\n
\n\n

The third code (#3) doesn't work properly because you are trying to initialize a constant with an operation, which isn't possible. To solve it, an appropriate strategy is (#1).

\n\n

Regarding your last question: you need to run (a) session.run(model) when there are variables in your calculation graph, and (b) print(session.run(y)).

\n", "system": "" }, { "instruction": "Python / Tensorflow - Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304", "input": "", "output": "

Let's come to your original error:

\n\n
\n

Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304

\n
\n\n

This is because you adapt your code from a code with original input image size 24*24. The tensor shape after two convolution and two max-pooling layers is [-1, 6, 6, 64]. However, as your input image shape is 150*150, the intermediate shape becomes [-1, 38, 38, 64].

\n\n
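The numbers in the error message check out if you follow the shape through the two stride-2 max-poolings ('SAME' padding rounds up):

```python
import math

size = 150
for _ in range(2):                 # two 2x2 max-pool layers, stride 2
    size = math.ceil(size / 2)     # 'SAME' padding rounds up

channels = 64
flattened = size * size * channels
print(size)       # → 38
print(flattened)  # → 92416, the value from the error message

# The original 24*24 input gives 6*6*64 = 2304, the "requested shape" multiple.
print(math.ceil(math.ceil(24 / 2) / 2) ** 2 * 64)  # → 2304
```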

Try changing w3 to:

\n\n
\n

w3 = tf.Variable(tf.random_normal([38*38*64, 1024]))

\n
\n\n

You should always keep an eye on your tensor shape flow.

\n", "system": "" }, { "instruction": "Convert Tensorflow model to Caffe model", "input": "", "output": "

I've had the same problem and found a solution. The code can be found here (https://github.com/lFatality/tensorflow2caffe) and I've also documented the code in some Youtube videos.

\n\n
\n\n

Part 1 covers the creation of the architecture of VGG-19 in Caffe and tflearn (higher level API for TensorFlow, with some changes to the code native TensorFlow should also work).

\n\n
\n\n

In Part 2 the export of the weights and biases out of the TensorFlow model into a numpy file is described. In tflearn you can get the weights of a layer like this:

\n\n
#get parameters of a certain layer\nconv2d_vars = tflearn.variables.get_layer_variables_by_name(layer_name)\n#get weights out of the parameters\nweights = model.get_weights(conv2d_vars[0])\n#get biases out of the parameters\nbiases = model.get_weights(conv2d_vars[1])\n
\n\n

For a convolutional layer, the layer_name is Conv_2D. Fully-connected layers are called FullyConnected. If you use more than one layer of a certain type, an increasing integer with a preceding underscore is appended (e.g. the 2nd conv layer is called Conv_2D_1). I've found these names in the TensorBoard graph. If you name the layers in your architecture definition, then these layer_names might change to the names you defined.

\n\n

In native TensorFlow the export will need different code but the format of the parameters should be the same so subsequent steps should still be applicable.

\n\n
\n\n

Part 3 covers the actual conversion. What's critical is the conversion of the weights when you create the caffemodel (the biases can be carried over without change). TensorFlow and Caffe use different formats when saving a filter. While TensorFlow uses [height, width, depth, number of filters] (TensorFlow docs, at the bottom), Caffe uses [number of filters, depth, height, width] (Caffe docs, chapter 'Blob storage and communication'). To convert between the formats you can use the transpose function, for example weights_of_first_conv_layer.transpose((3,2,0,1)). The (3,2,0,1) sequence is obtained by enumerating the axes of the TensorFlow format (the origin) and reading off the position each axis takes in the Caffe format (the target), keeping each number attached to its axis.
\nIf you want to connect a tensor output to a fully-connected layer, things get a little tricky. If you use VGG-19 with an input size of 112x112 it looks like this.

\n\n
fc1_weights = data_file[16][0].reshape((4,4,512,4096))\nfc1_weights = fc1_weights.transpose((3,2,0,1))\nfc1_weights = fc1_weights.reshape((4096,8192))\n
\n\n

What you get from TensorFlow if you export the parameters at the connection between tensor and fully-connected layer is an array with the shape [entries in the tensor, units in the fc-layer] (here: [8192, 4096]). You have to find out what the shape of your output tensor is and then reshape the array so that it fits the TensorFlow format (see above, number of filters being the number of units in the fc-layer). After that you use the transpose-conversion you've used previously and then reshape the array again, but the other way around. While TensorFlow saves fc-layer weights as [number of inputs, number of outputs], Caffe does it the other way around.
\nIf you connect two fc-layers to each other, you don't have to do the complex process previously described but you will have to account for the different fc-layer format by transposing again (fc_layer_weights.transpose((1,0)))

\n\n

You can then set the parameters of the network using

\n\n
net.params['layer_name_in_prototxt'][0].data[...] = weights\nnet.params['layer_name_in_prototxt'][1].data[...] = biases\n
\n\n
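As a quick NumPy sanity check of the layout conversions described above (the filter shape here is hypothetical):

```python
import numpy as np

# Hypothetical conv filter in TensorFlow layout: [height, width, depth, num_filters]
tf_conv = np.zeros((3, 3, 64, 128), dtype=np.float32)

# Caffe layout: [num_filters, depth, height, width]
caffe_conv = tf_conv.transpose((3, 2, 0, 1))
print(caffe_conv.shape)  # (128, 64, 3, 3)

# Two fc-layers connected to each other only need a plain transpose:
tf_fc = np.zeros((8192, 4096), dtype=np.float32)  # [inputs, outputs]
caffe_fc = tf_fc.transpose((1, 0))                # [outputs, inputs]
print(caffe_fc.shape)  # (4096, 8192)
```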

This was a quick overview. If you want all the code, it's in my github repository. I hope it helps. :)

\n\n
\n\n

Cheers,
\nFatality

\n", "system": "" }, { "instruction": "How to train TensorFlow network using a generator to produce inputs?", "input": "", "output": "

Suppose you have a function that generates data:

\n\n\n\n
 def generator(data): \n    ...\n    yield (X, y)\n
\n\n
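As a concrete, framework-agnostic sketch, such a generator could simply slice a dataset into batches (the batching logic below is a hypothetical example, not from the question):

```python
def batch_generator(data, batch_size):
    """Yield (X, y) batches from a list of (x, y) pairs."""
    for i in range(0, len(data), batch_size):
        chunk = data[i:i + batch_size]
        X = [x for x, _ in chunk]
        y = [t for _, t in chunk]
        yield X, y

data = [(i, 2 * i) for i in range(5)]
for X, y in batch_generator(data, batch_size=2):
    print(X, y)   # [0, 1] [0, 2] / [2, 3] [4, 6] / [4] [8]
```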

Now you need another function that describes your model architecture. It could be any function that processes X and has to predict y as output (say, a neural network).

\n\n

Suppose your function accepts X and y as inputs, computes a prediction for y from X in some way and returns loss function (e.g. cross-entropy or MSE in the case of regression) between y and predicted y:

\n\n\n\n
 def neural_network(X, y): \n    # computation of prediction for y using X\n    ...\n    return loss(y, y_pred)\n
\n\n

To make your model work, you need to define placeholders for both X and y and then run a session:

\n\n\n\n
 X = tf.placeholder(tf.float32, shape=(batch_size, x_dim))\n y = tf.placeholder(tf.float32, shape=(batch_size, y_dim))\n
\n\n

Placeholders are something like \"free variables\" which you need to specify when running the session by feed_dict:

\n\n\n\n
 with tf.Session() as sess:\n     # variables need to be initialized before any sess.run() calls\n     tf.global_variables_initializer().run()\n\n     for X_batch, y_batch in generator(data):\n         feed_dict = {X: X_batch, y: y_batch} \n         _, loss_value, ... = sess.run([train_op, loss, ...], feed_dict)\n         # train_op here stands for optimization operation you have defined\n         # and loss for loss function (return value of neural_network function)\n
\n\n

Hope you find it useful. However, bear in mind this is not a fully working implementation but rather pseudocode, since you specified almost no details.

\n", "system": "" }, { "instruction": "How do I find out the version of TensorFlow on my computer?", "input": "", "output": "
import tensorflow as tf\ntf.__version__\n
\n", "system": "" }, { "instruction": "How to create an optimizer in Tensorflow", "input": "", "output": "

The simplest example of an optimizer is probably the gradient descent optimizer. It shows how one creates an instance of the basic optimizer class. The optimizer base class documentation explains what the methods do.

\n\n

The python side of the optimizers adds new nodes to the graph that compute and apply the gradients being back-propagated. It supplies the parameters that get passed to the ops and does some of the high-level management of the optimizer. Then, you need the actual \"Apply\" op.

\n\n

Ops have both a python and a C++ component. Writing a training op is the same (but specialized) as the general process of adding an Op to TensorFlow.

\n\n

For an example set of training ops that compute and apply gradients, see\npython/training/training_ops.py - this is the Python glue for the actual training ops. Note that the code here is mostly about shape inference - the computation itself is implemented in C++.

\n\n

The actual math for applying the gradients is handled by an Op (recalling that, in general, ops are written in C++). In this case, the apply gradients ops are defined in core/kernels/training_ops.cc. You can see, for example, the implementation of ApplyGradientDescentOp in there, which references a functor ApplyGradientDescent:

\n\n
var.device(d) -= grad * lr();\n
\n\n
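In plain Python, that update boils down to an element-wise descent step on the variable (a sketch of the idea, not the actual C++ kernel):

```python
def apply_gradient_descent(var, grad, lr):
    # var.device(d) -= grad * lr(): element-wise gradient-descent step
    return [v - lr * g for v, g in zip(var, grad)]

print(apply_gradient_descent([1.0, 2.0], [0.5, -1.0], lr=0.1))  # ~[0.95, 2.1]
```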

The implementation of the Op itself follows the implementation of any other op as described in the adding-an-op docs.

\n", "system": "" }, { "instruction": "Machine Learning (tensorflow / sklearn) in Django?", "input": "", "output": "

Asynchronous processing

\n\n

If you don't need the classification result from the ML code to pass immediately to the user (e.g. as a response to the same POST request that submitted it), then you can always queue the classification job to be run in the background, or even on a different server with more CPU/memory resources (e.g. with django-background-tasks or Celery)

\n\n
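The pattern can be sketched with a plain in-process worker queue (django-background-tasks and Celery are the production-grade equivalents; the classify function here is a hypothetical stand-in for the ML call):

```python
import queue
import threading

jobs = queue.Queue()
results = {}

def classify(text):
    # hypothetical stand-in for the slow ML prediction
    return "positive" if "good" in text else "negative"

def worker():
    while True:
        item = jobs.get()
        if item is None:              # sentinel: shut the worker down
            jobs.task_done()
            break
        job_id, text = item
        results[job_id] = classify(text)
        jobs.task_done()

threading.Thread(target=worker).start()
jobs.put((1, "good product"))
jobs.put((2, "terrible service"))
jobs.put(None)
jobs.join()                           # wait until all queued jobs are processed
print(results)  # {1: 'positive', 2: 'negative'}
```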

A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified).

\n\n

Real time notification

\n\n

If the ML code is slow and you want to return the result to the user as soon as it is available, you can use the asynchronous approach described above and pair it with a real-time notification to the browser (e.g. socket.io); this can be triggered from the queued task.

\n\n

This becomes necessary if ML execution time is so long that it might time-out the HTTP request in the synchronous approach described below.

\n\n

Synchronous processing, if ML code is not CPU intensive (fast enough)

\n\n

If you need that classification result returned immediately, and the ML classification is fast enough *, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously)

\n\n

*Fast enough here means it wouldn't time-out the HTTP request/response, and the user wouldn't lose patience.

\n", "system": "" }, { "instruction": "ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32", "input": "", "output": "

It sounds like you have defined input_y (which I am assuming is a tf.placeholder()) as having type tf.int32. Either change this to tf.float32 or add a cast: tf.cast(input_y, tf.float32) or tf.to_float(input_y).

\n", "system": "" }, { "instruction": "Choosing from different cost function and activation function of a neural network", "input": "", "output": "

I will answer your questions a little bit out of order, starting with more general answers, and finishing with those specific to your particular experiment.

\n\n

Activation functions Different activation functions do, in fact, have different properties. Let's first consider an activation function between two layers of a neural network. The only purpose of an activation function there is to serve as a nonlinearity. If you do not put an activation function between two layers, then the two layers together will serve no better than one, because their combined effect is still just a linear transformation. For a long while people were using the sigmoid function and tanh, choosing pretty much arbitrarily, with sigmoid being more popular, until recently, when ReLU became the dominant nonlinearity. The reason why people use ReLU between layers is that it is non-saturating (and is also faster to compute). Think about the graph of a sigmoid function. If the absolute value of x is large, then the derivative of the sigmoid function is small, which means that as we propagate the error backwards, the gradient of the error will vanish very quickly as we go back through the layers. With ReLU the derivative is 1 for all positive inputs, so the gradient for those neurons that fired will not be changed by the activation unit at all and will not slow down the gradient descent.

\n\n
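The saturation effect is easy to see numerically (a plain-Python sketch):

```python
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0

for x in (0.0, 5.0, 10.0):
    # sigmoid's gradient collapses for large |x|; ReLU's stays 1
    print(x, sigmoid_grad(x), relu_grad(x))
```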

For the last layer of the network the activation unit also depends on the task. For regression you will want to use the sigmoid or tanh activation, because you want the result to be bounded (between 0 and 1 for sigmoid, -1 and 1 for tanh). For classification you will want only one of your outputs to be one and all others zeros, but there's no differentiable way to achieve precisely that, so you will want to use a softmax to approximate it.

\n\n

Your example. Now let's look at your example. Your first example tries to compute the output of AND in a following form:

\n\n
sigmoid(W1 * x1 + W2 * x2 + B)\n
\n\n

Note that W1 and W2 will always converge to the same value, because the output for (x1, x2) should be equal to the output of (x2, x1). Therefore, the model that you are fitting is:

\n\n
sigmoid(W * (x1 + x2) + B)\n
\n\n

x1 + x2 can only take one of three values (0, 1 or 2) and you want to return 0 for the case when x1 + x2 < 2 and 1 for the case when x1 + x2 = 2. Since the sigmoid function is rather smooth, it will take very large values of W and B to make the output close to the desired, but because of a small learning rate they can't get to those large values fast. Increasing the learning rate in your first example will increase the speed of convergence.

\n\n
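To see how large the parameters must get, here is the fitted form evaluated with hand-picked (hypothetical) values W=10, B=-15, which place the sigmoid's threshold between x1 + x2 = 1 and x1 + x2 = 2:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W, B = 10.0, -15.0   # hypothetical large values the training slowly crawls towards
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(sigmoid(W * (x1 + x2) + B), 3))  # ~0, 0.007, 0.007, 0.993
```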

Your second example converges better because the softmax function is good at making precisely one output be equal to 1 and all others to 0. Since this is precisely your case, it does converge quickly. Note that sigmoid would also eventually converge to good values, but it will take significantly more iterations (or higher learning rate).

\n\n

What to use. Now to the last question, how does one choose which activation and cost functions to use. These advices will work for majority of cases:

\n\n
    \n
  1. If you do classification, use softmax for the last layer's nonlinearity and cross entropy as a cost function.

  2. \n
  3. If you do regression, use sigmoid or tanh for the last layer's nonlinearity and squared error as a cost function.

  4. \n
  5. Use ReLU as the nonlinearity between layers.

  6. \n
  7. Use better optimizers (AdamOptimizer, AdagradOptimizer) instead of GradientDescentOptimizer, or use momentum for faster convergence.

  8. \n
\n", "system": "" }, { "instruction": "Is sparse tensor multiplication implemented in TensorFlow?", "input": "", "output": "

General-purpose multiplication for tf.SparseTensor is not currently implemented in TensorFlow. However, there are three partial solutions, and the right one to choose will depend on the characteristics of your data:

\n\n\n", "system": "" }, { "instruction": "Tensorflow2 warning using @tffunction", "input": "", "output": "

tf.function has some \"peculiarities\". I highly recommend reading this article: https://www.tensorflow.org/tutorials/customization/performance

\n\n

In this case, the problem is that the function is \"retraced\" (i.e. a new graph is built) every time you call it with a different input signature. For tensors, the input signature refers to shape and dtype, but for Python numbers, every new value is interpreted as \"different\". In this case, because you call the function with a step variable that changes every time, the function is retraced every single time as well. This will be extremely slow for \"real\" code (e.g. calling a model inside the function).

\n\n

You can fix it by simply converting step to a tensor, in which case the different values will not count as a new input signature:

\n\n
for step in range(100):\n    step = tf.convert_to_tensor(step, dtype=tf.int64)\n    my_func(step)\n    writer.flush()\n
\n\n

or use tf.range to get tensors directly:

\n\n
for step in tf.range(100):\n    step = tf.cast(step, tf.int64)\n    my_func(step)\n    writer.flush()\n
\n\n

This should not produce warnings (and be much faster).

\n", "system": "" }, { "instruction": "What is the proper way to weight decay for Adam Optimizer", "input": "", "output": "

Edit: see also this PR which just got merged into TF.

\n\n

When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding a L2-regularization term to the loss. When using any other optimizer, this is not true.

\n\n

Weight decay (don't know how to TeX here, so excuse my pseudo-notation):

\n\n
w[t+1] = w[t] - learning_rate * dw - weight_decay * w\n
\n\n

L2-regularization:

\n\n
loss = actual_loss + lambda * 1/2 sum(||w||_2 for w in network_params)\n
\n\n

Computing the gradient of the extra term in L2-regularization gives lambda * w and thus inserting it into the SGD update equation

\n\n
dloss_dw = dactual_loss_dw + lambda * w\nw[t+1] = w[t] - learning_rate * dw\n
\n\n

gives the same as weight decay, but mixes lambda with the learning_rate. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay as for L2-regularization! See the paper Fixing weight decay in Adam for more details. (Edit: AFAIK, this 1987 Hinton paper introduced \"weight decay\", literally as \"each time the weights are updated, their magnitude is also decremented by 0.4%\" at page 10)

\n\n
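For plain SGD the two update rules coincide once lambda absorbs the learning rate; a toy numeric check (all values arbitrary):

```python
learning_rate = 0.1
weight_decay = 0.01
w, dw = 2.0, 0.5                     # toy weight and loss gradient

# explicit weight decay
w_decay = w - learning_rate * dw - weight_decay * w

# L2 regularization folded into the gradient, with lambda chosen so that
# learning_rate * lambda == weight_decay
lam = weight_decay / learning_rate
w_l2 = w - learning_rate * (dw + lam * w)

print(w_decay, w_l2)                 # identical (~1.93) for plain SGD
```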

That being said, there doesn't seem to be support for \"proper\" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper.

\n\n

One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and \"attaching\" it to your train_op. Both of these are just crude work-arounds, though. My current code:

\n\n
# In the network definition:\nwith arg_scope([layers.conv2d, layers.dense],\n               weights_regularizer=layers.l2_regularizer(weight_decay)):\n    # define the network.\n\nloss = # compute the actual loss of your problem.\ntrain_op = optimizer.minimize(loss, global_step=global_step)\nif args.weight_decay not in (None, 0):\n    with tf.control_dependencies([train_op]):\n        sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0)\n        train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)))\n
\n\n

This somewhat makes use of TensorFlow's provided bookkeeping. Note that the arg_scope takes care of appending an L2-regularization term for every layer to the REGULARIZATION_LOSSES graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.

\n\n

Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.

\n", "system": "" }, { "instruction": "tensorflow: what's the difference between tf.nn.dropout and tf.layers.dropout", "input": "", "output": "

A quick glance through \ntensorflow/python/layers/core.py and tensorflow/python/ops/nn_ops.py\nreveals that tf.layers.dropout is a wrapper for tf.nn.dropout.

\n\n

The only differences in the two functions are:

\n\n
    \n
  1. The tf.nn.dropout has parameter keep_prob: \"Probability that each element is kept\"
    tf.layers.dropout has parameter rate: \"The dropout rate\"
    Thus, keep_prob = 1 - rate as defined here
  2. \n
  3. The tf.layers.dropout has training parameter: \"Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).\"
  4. \n
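The keep_prob = 1 - rate relationship and the training flag can be illustrated with a minimal (inverted) dropout sketch in plain Python:

```python
import random

def dropout(xs, rate, training=True):
    """Minimal sketch of inverted dropout: rate = 1 - keep_prob."""
    if not training or rate == 0.0:
        return list(xs)                       # inference mode: input untouched
    keep_prob = 1.0 - rate
    return [x / keep_prob if random.random() < keep_prob else 0.0 for x in xs]

random.seed(0)
print(dropout([1.0, 1.0, 1.0, 1.0], rate=0.5))             # kept values scaled to 2.0
print(dropout([1.0, 2.0, 3.0], rate=0.5, training=False))  # [1.0, 2.0, 3.0]
```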
\n", "system": "" }, { "instruction": "How does asynchronous training work in distributed Tensorflow?", "input": "", "output": "

When you train asynchronously in Distributed TensorFlow, a particular worker does the following:

\n\n
    \n
  1. The worker reads all of the shared model parameters in parallel from the PS task(s), and copies them to the worker task. These reads are uncoordinated with any concurrent writes, and no locks are acquired: in particular the worker may see partial updates from one or more other workers (e.g. a subset of the updates from another worker may have been applied, or a subset of the elements in a variable may have been updated).

  2. \n
  3. The worker computes gradients locally, based on a batch of input data and the parameter values that it read in step 1.

  4. \n
  5. The worker sends the gradients for each variable to the appropriate PS task, and applies the gradients to their respective variable, using an update rule that is determined by the optimization algorithm (e.g. SGD, SGD with Momentum, Adagrad, Adam, etc.). The update rules typically use (approximately) commutative operations, so they may be applied independently on the updates from each worker, and the state of each variable will be a running aggregate of the sequence of updates received.

  6. \n
\n\n

In asynchronous training, each update from the worker is applied concurrently, and the updates may be somewhat coordinated if the optional use_locking=True flag was set when the respective optimizer (e.g. tf.train.GradientDescentOptimizer) was initialized. Note however that the locking here only provides mutual exclusion for two concurrent updates, and (as noted above) reads do not acquire locks; the locking does not provide atomicity across the entire set of updates.

\n\n

(By contrast, in synchronous training, a utility like tf.train.SyncReplicasOptimizer will ensure that all of the workers read the same, up-to-date values for each model parameter; and that all of the updates for a synchronous step are aggregated before they are applied to the underlying variables. To do this, the workers are synchronized by a barrier, which they enter after sending their gradient update, and leave after the aggregated update has been applied to all variables.)

\n", "system": "" }, { "instruction": "Tensorflow cannot open libcuda.so.1", "input": "", "output": "

libcuda.so.1 is a symlink to a file that is specific to the version of your NVIDIA drivers. It may be pointing to the wrong version or it may not exist.

\n\n
# See where the link is pointing.  \nls  /usr/lib/x86_64-linux-gnu/libcuda.so.1 -la\n# My result:\n# lrwxrwxrwx 1 root root 19 Feb 22 20:40 \\\n# /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> ./libcuda.so.375.39\n\n# Make sure it is pointing to the right version. \n# Compare it with the installed NVIDIA driver.\nnvidia-smi\n\n# Replace libcuda.so.1 with a link to the correct version\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -f -s libcuda.so.<yournvidia.version> libcuda.so.1\n
\n\n

Now in the same way, make another symlink from libcuda.so.1 to a link of the same name in your LD_LIBRARY_PATH directory.

\n\n

You may also find that you need to create a link to libcuda.so.1 in /usr/lib/x86_64-linux-gnu named libcuda.so

\n", "system": "" }, { "instruction": "What is the proper way to weight decay for Adam Optimizer", "input": "", "output": "

\n", "system": "" }, { "instruction": "Anaconda ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found", "input": "", "output": "

I spent a day working on this having encountered the same exact problem working on my research university's computing cluster with the same specs as you, and I finally found the right Stack Overflow thread. None of the above answers here work, unfortunately, but I can say with very high confidence that the details in the linked thread should solve your problem even though the source of the error traceback was different.

\n

To summarize, you'll need to add the path to the lib folder in anaconda to LD_LIBRARY_PATH:

\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path/to/conda/env/lib\n
\n

In my case, I just did:

\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/anaconda3/lib\n
\n

...and it worked like a charm!

\n", "system": "" }, { "instruction": "How to change python version in Anaconda?", "input": "", "output": "

A better (recommended) alternative is to create a virtual environment with the desired Python version and then use that environment to run Tensorflow and other scripts.

\n\n

To do that, you can follow the instructions given here.

\n\n

BUT, if you don't want to create a separate environment, then conda install python=<version> should do.

\n\n

OR (not recommended) you can download the \"latest\" Anaconda installer with your required Python version bundled.

\n\n

Source

\n", "system": "" }, { "instruction": "Does TensorFlow plan to support OpenCL?", "input": "", "output": "

As part of contrib, you can build Tensorflow with SYCL support.

\n\n

SYCL is \"single source OpenCL\", a new standard from Khronos that allows one to write high level C++ code that can be compiled to run on OpenCL devices.

\n\n

The folks at CodePlay Software have been heavily involved in it, and you can see one of their blog posts on the topic here.

\n\n

So in short, you're not going to get a pip package of it, you'll need to build it yourself. And the performance might not be as good, since the project is still in its early days.

\n\n

You can find a tutorial on how to get started here. Bear in mind, this uses CodePlay's proprietary version of SYCL, but maybe you can get it working with an open implementation such as triSYCL.

\n", "system": "" }, { "instruction": "get the CUDA and CUDNN version on windows with Anaconda installe", "input": "", "output": "

Use the following command to check CUDA installation by Conda:

\n\n
conda list cudatoolkit\n
\n\n

And the following command to check CUDNN version installed by conda:

\n\n
conda list cudnn\n
\n\n

If you want to install/update CUDA and CUDNN through CONDA, please use the following commands:

\n\n
conda install -c anaconda cudatoolkit\nconda install -c anaconda cudnn\n
\n\n

Alternatively, you can use the following commands to check the CUDA installation:

\n\n
nvidia-smi\n
\n\n

OR

\n\n
nvcc --version\n
\n", "system": "" }, { "instruction": "How to know which version of docker image is behind latest tag?", "input": "", "output": "

Go to the image's web page (nginx in my case): https://hub.docker.com/_/nginx\nThen press the Tags tab,\ngo to any latest entry, and copy its sha256 sum,\nthen sort by newest, scroll down until the first numbered version,\nand check whether the exact same sha256 is displayed.

\n

Now, even after all that fishing around in library/nginx, here is a sure way to check:

\n

You can verify that you did it right. For example, I found that nginx:latest is actually 1.17.8, so I run:

\n
docker pull nginx:1.17.8\n1.17.8: Pulling from library/nginx\nbc51dd8edc1b: Pull complete\n66ba67045f57: Pull complete\nbf317aa10aa5: Pull complete\nDigest:sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f\nStatus: Downloaded newer image for nginx:1.17.8\n
\n

and then I verify by attempting to pull latest:

\n
docker pull nginx:latest\nlatest: Pulling from library/nginx\nDigest: sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f\nStatus: Downloaded newer image for nginx:latest\n
\n

As you can see, it didn't actually pull anything, and the sha256 is exactly the same ;)

\n", "system": "" }, { "instruction": "How to load a tflite model in script?", "input": "", "output": "

You can use TensorFlow Lite Python interpreter to load the tflite model in a python shell, and test it with your input data.

\n\n

The code will be like this:

\n\n
import numpy as np\nimport tensorflow as tf\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.lite.Interpreter(model_path=\"converted_model.tflite\")\ninterpreter.allocate_tensors()\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# Test model on random input data.\ninput_shape = input_details[0]['shape']\ninput_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)\ninterpreter.set_tensor(input_details[0]['index'], input_data)\n\ninterpreter.invoke()\n\n# The function `get_tensor()` returns a copy of the tensor data.\n# Use `tensor()` in order to get a pointer to the tensor.\noutput_data = interpreter.get_tensor(output_details[0]['index'])\nprint(output_data)\n
\n\n

The above code is from TensorFlow Lite official guide, for more detailed information, read this.

\n", "system": "" }, { "instruction": "What is regularization loss in tensorflow?", "input": "", "output": "

TL;DR: it's just the additional loss generated by the regularization function. Add that to the network's loss and optimize over the sum of the two.

\n\n

As you correctly state, regularization methods are used to help an optimization method to generalize better.\nA way to obtain this is to add a regularization term to the loss function. This term is a generic function, which modifies the \"global\" loss (as in, the sum of the network loss and the regularization loss) in order to drive the optimization algorithm in desired directions.

\n\n

Let's say, for example, that for whatever reason I want to encourage solutions to the optimization that have weights as close to zero as possible. One approach, then, is to add to the loss produced by the network a function of the network weights (for example, a scaled-down sum of all the absolute values of the weights). Since the optimization algorithm minimizes the global loss, my regularization term (which is high when the weights are far from zero) will push the optimization towards solutions that have weights close to zero.
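As a sketch of that example, with a hypothetical L1-style penalty:

```python
def l1_penalty(weights, lam=0.01):
    # "a scaled-down sum of all the absolute values of the weights"
    return lam * sum(abs(w) for w in weights)

network_loss = 0.5                    # hypothetical loss produced by the network
weights = [0.3, -1.2, 0.0, 4.0]
global_loss = network_loss + l1_penalty(weights)
print(global_loss)                    # ~0.555: the optimizer now also shrinks weights
```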

\n", "system": "" }, { "instruction": "In TensorFlow, what is the argument 'axis' in the function 'tf.one_hot'", "input": "", "output": "

Here's an example:

\n\n\n\n
x = tf.constant([0, 1, 2])\n
\n\n

... is the input tensor and N=4 (each index is transformed into a 4D vector).

\n\n

axis=-1

\n\n

Computing one_hot_1 = tf.one_hot(x, 4).eval() yields a (3, 4) tensor:

\n\n
[[ 1.  0.  0.  0.]\n [ 0.  1.  0.  0.]\n [ 0.  0.  1.  0.]]\n
\n\n

... where the last dimension is one-hot encoded (clearly visible). This corresponds to the default axis=-1, i.e. the last one.

\n\n

axis=0

\n\n

Now, computing one_hot_2 = tf.one_hot(x, 4, axis=0).eval() yields a (4, 3) tensor, which is not immediately recognizable as one-hot encoded:

\n\n
[[ 1.  0.  0.]\n [ 0.  1.  0.]\n [ 0.  0.  1.]\n [ 0.  0.  0.]]\n
\n\n

This is because the one-hot encoding is done along the 0-axis and one has to transpose the matrix to see the previous encoding. The situation becomes more complicated when the input is higher-dimensional, but the idea is the same: the difference is in the placement of the extra dimension used for one-hot encoding.
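A pure-NumPy sketch makes the axis placement explicit (a hypothetical helper, handling 1-D input only):

```python
import numpy as np

def one_hot(indices, depth, axis=-1):
    rows = np.eye(depth)[indices]          # axis=-1 layout: (len(indices), depth)
    return rows if axis == -1 else rows.T  # for 1-D input, axis=0 is the transpose

x = [0, 1, 2]
print(one_hot(x, 4).shape)           # (3, 4)
print(one_hot(x, 4, axis=0).shape)   # (4, 3)
```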

\n", "system": "" }, { "instruction": "Best strategy to reduce false positives: Google's new Object Detection API on Satellite Imagery", "input": "", "output": "

I've revisited this topic recently in my work and thought I'd update with my current learnings for any who visit in the future.

\n\n

The topic appeared on TensorFlow's Models repo issue tracker. SSD allows you to set the ratio of negative:positive examples to mine (max_negatives_per_positive: 3), but you can also set a minimum number for images with no positives (min_negatives_per_image: 3). Both of these are defined in the model-ssd-loss config section.

\n\n

That said, I don't see the same option in Faster-RCNN's model configuration. It's mentioned in the issue that models/research/object_detection/core/balanced_positive_negative_sampler.py contains the code used for Faster-RCNN.

\n\n

One other option discussed in the issue is creating a second class specifically for lookalikes. During training, the model will attempt to learn class differences which should help serve your purpose.

\n\n

Lastly, I came across this article on Filter Amplifier Networks (FAN) that may be informative for your work on aerial imagery.

\n\n

===================================================================

\n\n

The following paper describes hard negative mining for the same purpose you describe:\nTraining Region-based Object Detectors with Online Hard Example Mining

\n\n

In section 3.1 they describe using a foreground and background class:

\n\n
\n

Background RoIs. A region is labeled background (bg) if its maximum\n IoU with ground truth is in the interval [bg lo, 0.5). A lower\n threshold of bg lo = 0.1 is used by both FRCN and SPPnet, and is\n hypothesized in [14] to crudely approximate hard negative mining; the\n assumption is that regions with some overlap with the ground truth are\n more likely to be the confusing or hard ones. We show in Section 5.4\n that although this heuristic helps convergence and detection accuracy,\n it is suboptimal because it ignores some infrequent, but important,\n difficult background regions. Our method removes the bg lo threshold.

\n
\n\n

In fact this paper is referenced and its ideas are used in Tensorflow's object detection losses.py code for hard mining:

\n\n
class HardExampleMiner(object):\n\"\"\"Hard example mining for regions in a list of images.\nImplements hard example mining to select a subset of regions to be\nback-propagated. For each image, selects the regions with highest losses,\nsubject to the condition that a newly selected region cannot have\nan IOU > iou_threshold with any of the previously selected regions.\nThis can be achieved by re-using a greedy non-maximum suppression algorithm.\nA constraint on the number of negatives mined per positive region can also be\nenforced.\nReference papers: \"Training Region-based Object Detectors with Online\nHard Example Mining\" (CVPR 2016) by Srivastava et al., and\n\"SSD: Single Shot MultiBox Detector\" (ECCV 2016) by Liu et al.\n\"\"\"\n
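    The capping behaviour described in that docstring can be illustrated with a plain-Python sketch (this is an illustration with made-up losses, not the actual TF implementation, and it ignores the IOU-based suppression):

    ```python
    def mine_hard_examples(losses, is_negative, max_negatives_per_positive):
        """Pick regions with highest loss, capping negatives relative to positives."""
        # Sort region indices by descending loss (hardest first)
        order = sorted(range(len(losses)), key=lambda i: -losses[i])
        num_positives = sum(1 for neg in is_negative if not neg)
        max_negatives = max_negatives_per_positive * max(num_positives, 1)
        selected, negatives_taken = [], 0
        for i in order:
            if is_negative[i]:
                if negatives_taken >= max_negatives:
                    continue  # cap reached: skip any further negatives
                negatives_taken += 1
            selected.append(i)
        return selected

    losses      = [0.9, 0.8, 0.7, 0.6, 0.5, 0.1]
    is_negative = [True, True, False, True, True, True]
    picked = mine_hard_examples(losses, is_negative, max_negatives_per_positive=3)
    print(picked)  # [0, 1, 2, 3]: the one positive plus the 3 hardest negatives
    ```
    
    
    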
\n\n

Based on your model config file, the HardMinerObject is returned by losses_builder.py in this bit of code:

\n\n
def build_hard_example_miner(config,\n                            classification_weight,\n                            localization_weight):\n\"\"\"Builds hard example miner based on the config.\nArgs:\n    config: A losses_pb2.HardExampleMiner object.\n    classification_weight: Classification loss weight.\n    localization_weight: Localization loss weight.\nReturns:\n    Hard example miner.\n\"\"\"\nloss_type = None\nif config.loss_type == losses_pb2.HardExampleMiner.BOTH:\n    loss_type = 'both'\nif config.loss_type == losses_pb2.HardExampleMiner.CLASSIFICATION:\n    loss_type = 'cls'\nif config.loss_type == losses_pb2.HardExampleMiner.LOCALIZATION:\n    loss_type = 'loc'\n\nmax_negatives_per_positive = None\nnum_hard_examples = None\nif config.max_negatives_per_positive > 0:\n    max_negatives_per_positive = config.max_negatives_per_positive\nif config.num_hard_examples > 0:\n    num_hard_examples = config.num_hard_examples\nhard_example_miner = losses.HardExampleMiner(\n    num_hard_examples=num_hard_examples,\n    iou_threshold=config.iou_threshold,\n    loss_type=loss_type,\n    cls_loss_weight=classification_weight,\n    loc_loss_weight=localization_weight,\n    max_negatives_per_positive=max_negatives_per_positive,\n    min_negatives_per_image=config.min_negatives_per_image)\nreturn hard_example_miner\n
\n\n

which is returned by model_builder.py and called by train.py. So basically, it seems to me that simply generating your true positive labels (with a tool like LabelImg or RectLabel) should be enough for the train algorithm to find hard negatives within the same images. The related question gives an excellent walkthrough.

\n\n

In the event you want to feed in data that has no true positives (i.e. nothing should be classified in the image), just add the negative image to your tfrecord with no bounding boxes.

\n", "system": "" }, { "instruction": "Train Tensorflow Object Detection on own dataset", "input": "", "output": "

This assumes the module is already installed. Please refer to their documentation if not.

\n\n

Disclaimer

\n\n

    This answer is not meant to be the right or only way of training the object detection module. This is simply me sharing my experience and what has worked for me. I'm open to suggestions and learning more about this as I am still new to ML in general.
    

\n\n

TL;DR

\n\n
    \n
  1. Create your own PASCAL VOC format dataset
  2. \n
  3. Generate TFRecords from it
  4. \n
  5. Configure a pipeline
  6. \n
  7. Visualize
  8. \n
\n\n

Each section of this answer consists of a corresponding Edit (see below). After reading each section, please read its Edit as well for clarifications. Corrections and tips were added for each section.

\n\n

Tools used

\n\n

LabelImg: A tool for creating PASCAL VOC format annotations.

\n\n

1. Create your own PASCAL VOC dataset

\n\n

PS: For simplicity, the folder naming convention of my answer follows that of Pascal VOC 2012

\n\n

    Taking a peek into the May 2012 dataset, you'll notice that the folder has the following structure
    

\n\n

\n+VOCdevkit\n +VOC2012\n +Annotations\n +ImageSets\n +Action\n +Layout\n +Main\n +Segmentation\n +JPEGImages\n +SegmentationClass\n +SegmentationObject\n

\n\n

For the time being, amendments were made to the following folders:

\n\n

    Annotations: This is where all the images' corresponding XML files will be placed. Use the suggested tool above to create the annotations. Do not worry about <truncated> and <difficulty> tags, as they will be ignored by the training and eval binaries.
    

\n\n

JPEGImages: Location of your actual images. Make sure they are of type JPEG because that's what is currently supported in order to create TFRecords using their provided script.

\n\n

ImageSets->Main: This simply consists of text files. For each class, there exists a corresponding train.txt, trainval.txt and val.txt. Below is a sample of the contents of the aeroplane_train.txt in the VOC 2012 folder

\n\n
2008_000008 -1\n2008_000015 -1\n2008_000019 -1\n2008_000023 -1\n2008_000028 -1\n2008_000033  1\n
\n\n

    The structure is basically the image name followed by a boolean saying whether the corresponding object exists in that image or not. For example, image 2008_000008 does not contain an aeroplane, hence it is marked with a -1, but image 2008_000033 does.
    

\n\n

I wrote a small Python script to generate these text files. Simply iterate through the image names and assign a 1 or -1 next to them for object existence. I added some randomness among my text files by shuffling the image names.
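    A script along those lines might look like the following (the class name, image names and output path here are hypothetical; this is just a sketch of the idea, not the script I used):

    ```python
    import random

    def build_imageset(class_name, positives, all_images):
        """Build the contents of a VOC-style {classname}_train.txt file."""
        images = list(all_images)
        random.shuffle(images)  # add some randomness among the entries
        lines = ["%s %2d" % (name, 1 if name in positives else -1)
                 for name in images]
        content = "\n".join(lines) + "\n"
        # In a real run you would write this into ImageSets/Main, e.g.:
        # with open("ImageSets/Main/%s_train.txt" % class_name, "w") as f:
        #     f.write(content)
        return content

    content = build_imageset(
        "aeroplane",
        positives={"2008_000033"},
        all_images=["2008_000008", "2008_000015", "2008_000033"])
    print(content)
    ```
    
    
    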

\n\n

    The {classname}_val.txt files make up the validation dataset. Think of this as the test data used during training. You want to divide your dataset into training and validation sets. More info can be found here. The format of these files is similar to that of the training files.
    

\n\n

At this point, your folder structure should be

\n\n

\n+VOCdevkit\n +VOC2012\n +Annotations\n --(for each image, generated annotation)\n +ImageSets\n +Main\n --(for each class, generated *classname*_train.txt and *classname*_val.txt)\n +JPEGImages\n --(a bunch of JPEG images)\n

\n\n
\n\n

1.1 Generating label map

\n\n

With the dataset prepared, we need to create the corresponding label maps.\nNavigate to models/object_detection/data and open pascal_label_map.pbtxt.

\n\n

    This file is a protobuf text file (.pbtxt) that assigns an ID and a name to each item. Make amendments to this file to reflect your desired objects.
    
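    For illustration, a label map for two hypothetical classes would look like this (the class names are made up, and IDs should start from 1):

    ```
    item {
      id: 1
      name: 'cat'
    }
    item {
      id: 2
      name: 'dog'
    }
    ```
    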

\n\n
\n\n

2. Generate TFRecords

\n\n

    If you look into their code, especially this line, they explicitly grab the aeroplane_train.txt only. For curious minds, here's why. Change this file name to any of your class train text files.
    

\n\n

Make sure VOCdevkit is inside models/object_detection then you can go ahead and generate the TFRecords.

\n\n

    Please go through their code first should you run into any problems. It is self-explanatory and well documented.
    

\n\n
\n\n

3. Pipeline Configuration

\n\n

The instructions should be self explanatory to cover this segment. Sample configs can be found in object_detection/samples/configs.

\n\n

For those looking to train from scratch as I did, just make sure to remove the fine_tune_checkpoint and from_detection_checkpoint nodes. Here's what my config file looked like for reference.

\n\n

From here on you can continue with the tutorial and run the training process.

\n\n
\n\n

4. Visualize

\n\n

Be sure to run the eval in parallel to the training in order to be able to visualize the learning process. To quote Jonathan Huang

\n\n
\n

the best way is to just run the eval.py binary. We typically run this\n binary in parallel to training, pointing it at the directory holding\n the checkpoint that is being trained. The eval.py binary will write\n logs to an eval_dir that you specify which you can then point to\n with Tensorboard.

\n \n

You want to see that the mAP has \"lifted off\" in the first few hours,\n and then you want to see when it converges. It's hard to tell without\n looking at these plots how many steps you need.

\n
\n\n
\n\n

EDIT I (28 July '17):

\n\n

I never expected my response to get this much attention so I decided to come back and review it.

\n\n

Tools

\n\n

For my fellow Apple users, you could actually use RectLabel for annotations.

\n\n

Pascal VOC

\n\n

After digging around, I finally realized that trainval.txt is actually the union of training and validation datasets.

\n\n

Please look at their official development kit to understand the format even better.

\n\n

Label Map Generation

\n\n

At the time of my writing, ID 0 represents none_of_the_above. It is recommended that your IDs start from 1.

\n\n

Visualize

\n\n

    After running your evaluation and pointing TensorBoard to your Eval directory, it'll show you the mAP of each category along with each category's performance. This is good, but I like seeing my training data in parallel with Eval as well.
    

\n\n

To do this, run tensorboard on a different port and point it to your train directory

\n\n
tensorboard --logdir=${PATH_TO_TRAIN} --port=${DESIRED_NUMBER}\n
\n", "system": "" }, { "instruction": "What are the constraints for tensorflow scope names?", "input": "", "output": "

From the TF source:

\n\n
\n

NOTE: This constructor validates the given name. Valid scope

\n \n

names match one of the following regular expressions:

\n\n
[A-Za-z0-9.][A-Za-z0-9_.\\\\-/]* (for scopes at the root)\n[A-Za-z0-9_.\\\\-/]* (for other scopes)\n
\n
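    You can check a candidate name against these rules with Python's re module (a small sketch; the doubled backslashes in the quoted comment are source-escaping, so effectively the allowed characters are letters, digits, `_`, `.`, `-` and `/`, with root-scope names having to start with an alphanumeric character or `.`):

    ```python
    import re

    # Root-scope names must start with a letter, digit or '.',
    # followed by any mix of letters, digits, '_', '.', '-' and '/'.
    ROOT_SCOPE_RE = re.compile(r"^[A-Za-z0-9.][A-Za-z0-9_.\-/]*$")

    def is_valid_root_scope_name(name):
        return bool(ROOT_SCOPE_RE.match(name))

    print(is_valid_root_scope_name("layer_1/conv-2"))  # True
    print(is_valid_root_scope_name("_hidden"))         # False: cannot start with '_'
    print(is_valid_root_scope_name("ok name"))         # False: spaces not allowed
    ```
    
    
    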
\n", "system": "" }, { "instruction": "Add L2 regularization when using high level tf.layers", "input": "", "output": "

You can pass them into tf.layers.conv2d as arguments:

\n\n
regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)\nlayer2 = tf.layers.conv2d(\n    inputs,\n    filters,\n    kernel_size,\n    kernel_regularizer=regularizer)\n
\n\n

Then you should add the regularization loss to your loss like this:

\n\n
l2_loss = tf.losses.get_regularization_loss()\nloss += l2_loss\n
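    To get a feel for what the regularization loss contributes numerically, here is a plain-Python sketch of the underlying computation (tf.contrib.layers.l2_regularizer is, to my understanding, built on tf.nn.l2_loss, which computes the sum of squares divided by 2; this example does not use TensorFlow):

    ```python
    def l2_regularization(weights, scale):
        # Mirrors tf.nn.l2_loss: sum of squares divided by 2, times the scale
        return scale * sum(w * w for w in weights) / 2.0

    weights = [0.5, -1.0, 2.0]
    penalty = l2_regularization(weights, scale=0.1)
    print(penalty)  # 0.1 * (0.25 + 1.0 + 4.0) / 2 = 0.2625
    ```
    
    
    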
\n\n

Edit: Thanks Zeke Arneodo, Tom and srcolinas I added, the last bit on your feedback so that the accepted answer provides the complete solution.

\n", "system": "" }, { "instruction": "What does the function control_dependencies do?", "input": "", "output": "

control_dependencies is not a conditional. It is a mechanism to add dependencies to whatever ops you create in the with block. More specifically, what you specify in the argument to control_dependencies is ensured to be evaluated before anything you define in the with block.

\n\n

In your example, you don't add any (TensorFlow) operations in the with block, so the block does nothing.

\n\n

This answer has an example of how to use control_dependencies, where it is used to make sure the assignments happen before the batchnorm operations are evaluated.

\n", "system": "" }, { "instruction": "ImportError: libcudnn when running a TensorFlow program", "input": "", "output": "

Just download cuDNN 5.1 and follow the steps (Tested on Ubuntu 16.04, CUDA toolkit 8.0 )

\n\n
$ tar xvzf cudnn-8.0-linux-x64-v5.1-ga.tgz\n$ sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include\n$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64\n$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*\n
\n\n

Now set Path variables

\n\n
$ vim ~/.bashrc\n\nexport LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64\"\nexport CUDA_HOME=/usr/local/cuda\n
\n\n

and done

\n\n

For more details, you can check this site

\n", "system": "" }, { "instruction": "Why would I ever use tf.concat instead of tf.stack?", "input": "", "output": "

Actually, I've misunderstood how tf.stack works. If the axis parameter is within the range of the existing dimensions, a new axis will be inserted at that index.

\n\n

Example:

\n\n
import tensorflow as tf\n\nt1 = tf.random_normal([1, 3])\nt2 = tf.random_normal([1, 3])\n\ntf.stack([t1, t2], axis=1).shape.as_list() == [1, 2, 3]\ntf.concat([t1, t2], axis=1).shape.as_list() == [1, 6]\n
\n", "system": "" }, { "instruction": "Cast string to float is not supported in Linear Model", "input": "", "output": "

    I had the exact same problem; you need to make sure that the input data you are feeding the model is in the right format (not just the features but also the label column).
    

\n\n

    My problem was that I was not skipping the first row in the data file, so I was trying to convert the titles to float format. Something as simple as adding
    

\n\n
skiprows=1\n
\n\n

When reading the csv:

\n\n
df_test = pd.read_csv(test_file, names=COLUMNS_TEST, skipinitialspace=True, skiprows=1, engine=\"python\")\n
\n\n

I would recommend you to check:

\n\n
df_test.dtypes\n
\n\n

You should get something like

\n\n
Feature1      int64\nFeature2      int64\nFeature3      int64\nFeature4      object\nFeature5      object\nFeature6      float64\ndtype: object\n
\n\n

    If you are not getting the correct dtypes, then model.fit is going to fail
    
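    The root cause is easy to reproduce in plain Python: a header string that slips into a numeric column cannot be converted to float (the column names here are made up):

    ```python
    rows = [
        ["Feature1", "Label"],   # header row accidentally read as data
        ["1.5", "0"],
        ["2.0", "1"],
    ]

    def to_float_column(rows, col):
        values = []
        for row in rows:
            try:
                values.append(float(row[col]))
            except ValueError:
                # This is the "Cast string to float is not supported" situation:
                # skip (or fix) the non-numeric rows instead of feeding them in
                pass
        return values

    print(to_float_column(rows, 0))  # [1.5, 2.0] -- header row dropped
    ```
    
    
    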

\n", "system": "" }, { "instruction": "How to interpret TensorFlow output?", "input": "", "output": "

About NUMA -- https://software.intel.com/en-us/articles/optimizing-applications-for-numa

\n\n

    Roughly speaking, if you have a dual-socket CPU, each socket will have its own memory and has to access the other processor's memory through a slower QPI link. So each CPU+memory pair is a NUMA node.
    

\n\n

Potentially you could treat two different NUMA nodes as two different devices and structure your network to optimize for different within-node/between-node bandwidth

\n\n

    However, I don't think there's enough wiring in TF to do this right now. The detection doesn't work either -- I just tried on a machine with 2 NUMA nodes, and it still printed the same message and initialized to 1 NUMA node.
    

\n\n

DMA = Direct Memory Access. You could potentially copy things from one GPU to another GPU without utilizing CPU (ie, through NVlink). NVLink integration isn't there yet.

\n\n

    As far as the error, TensorFlow tries to allocate close to the GPU's maximum memory, so it sounds like some of your GPU memory has already been allocated to something else and the allocation failed.
    

\n\n

You can do something like below to avoid allocating so much memory

\n\n
config = tf.ConfigProto(log_device_placement=True)\nconfig.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM\nconfig.operation_timeout_in_ms=15000   # terminate on long hangs\nsess = tf.InteractiveSession(\"\", config=config)\n
\n", "system": "" }, { "instruction": "TensorFlow: Opening log data written by SummaryWriter", "input": "", "output": "

As of March 2017, the EventAccumulator tool has been moved from Tensorflow core to the Tensorboard Backend. You can still use it to extract data from Tensorboard log files as follows:

\n\n
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator\nevent_acc = EventAccumulator('/path/to/summary/folder')\nevent_acc.Reload()\n# Show all tags in the log file\nprint(event_acc.Tags())\n\n# E. g. get wall clock, number of steps and value for a scalar 'Accuracy'\nw_times, step_nums, vals = zip(*event_acc.Scalars('Accuracy'))\n
\n", "system": "" }, { "instruction": "What is the TensorFlow checkpoint meta file?", "input": "", "output": "

This file contains a serialized MetaGraphDef protocol buffer. The MetaGraphDef is designed as a serialization format that includes all of the information required to restore a training or inference process (including the GraphDef that describes the dataflow, and additional annotations that describe the variables, input pipelines, and other relevant information). For example, the MetaGraphDef is used by TensorFlow Serving to start an inference service based on your trained model. We are investigating other tools that could use the MetaGraphDef for training.

\n\n

Assuming that you still have the Python code for your model, you do not need the MetaGraphDef to restore the model, because you can reconstruct all of the information in the MetaGraphDef by re-executing the Python code that builds the model. To restore from a checkpoint, you only need the checkpoint files that contain the trained weights, which are written periodically to the same directory.

\n", "system": "" }, { "instruction": "How can I run a loop with a tensor as its range? (in tensorflow)", "input": "", "output": "

To do this you will need to use the tensorflow while loop (tf.while_loop) as follows:

\n\n
i = tf.constant(0)\nwhile_condition = lambda i: tf.less(i, input_placeholder[1, 1])\ndef body(i):\n    # do something here which you want to do in your loop\n    # increment i\n    return [tf.add(i, 1)]\n\n# do the loop:\nr = tf.while_loop(while_condition, body, [i])\n
\n", "system": "" }, { "instruction": "Why is Tensorflow 100x slower than convnetjs in this simple NN example?", "input": "", "output": "

There could be many reasons why:

\n\n\n\n

The real benefits of Tensorflow will come when the distributed version will be public. Then the ability to run big networks on many nodes will be more important than the speed of a single node.

\n", "system": "" }, { "instruction": "Is it possible to modify an existing TensorFlow computation graph?", "input": "", "output": "

The TensorFlow tf.Graph class is an append-only data structure, which means that you can add nodes to the graph after executing part of the graph, but you cannot remove or modify existing nodes. Since TensorFlow executes only the necessary subgraph when you call Session.run(), there is no execution-time cost to having redundant nodes in the graph (although they will continue to consume memory).

\n\n

To remove all nodes in the graph, you can create a session with a new graph:

\n\n
with tf.Graph().as_default():  # Create a new graph, and make it the default.\n  with tf.Session() as sess:  # `sess` will use the new, currently empty, graph.\n    # Build graph and execute nodes in here.\n
\n", "system": "" }, { "instruction": "TensorFlow Training", "input": "", "output": "

In the first training version, you are training the entire batch of training data at once, which means that the first and the 3000th element of spec_train will be processed using the same model parameters in a single step. This is known as (Batch) Gradient Descent.

\n\n

In the second training version, you are training a single example from the training data at once, which means that the 3000th element of spec_train will be processed using model parameters that have been updated 2999 times since the first element was most recently processed. This is known as Stochastic Gradient Descent (or it would be if the element was selected at random).

\n\n

In general, TensorFlow is used with datasets that are too large to process in one batch, so mini-batch SGD (where a subset of the examples are processed in one step) is favored. Processing a single element at a time is theoretically desirable, but is inherently sequential and has high fixed costs because the matrix multiplications and other operations are not as computationally dense. Therefore, processing a small batch (e.g. 32 or 128) of examples at once is the usual approach, with multiple replicas training on different batches in parallel.
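    The three regimes can be sketched in plain Python: with the batch size equal to the dataset size you get batch gradient descent, with batch size 1 you get SGD-style updates, and anything in between is mini-batch SGD (this only illustrates the batching, not the gradient updates):

    ```python
    def minibatches(examples, batch_size):
        # Yield successive slices of the training data
        for start in range(0, len(examples), batch_size):
            yield examples[start:start + batch_size]

    data = list(range(10))

    full_batch  = list(minibatches(data, len(data)))  # 1 update step per epoch
    sgd_batches = list(minibatches(data, 1))          # 10 update steps per epoch
    mini        = list(minibatches(data, 4))          # 3 steps, last one smaller

    print(len(full_batch), len(sgd_batches), len(mini))  # 1 10 3
    ```
    
    
    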

\n\n

See this Stats StackExchange question for a more theoretical discussion of when you should use one approach versus the other.

\n", "system": "" }, { "instruction": "Tensorflow indexing with boolean tensor", "input": "", "output": "

Try:

\n\n
ones = tf.ones_like(x) # create a tensor all ones\nmask = tf.greater(x, ones) # boolean tensor, mask[i] = True iff x[i] > 1\nslice_y_greater_than_one = tf.boolean_mask(y, mask)\n
\n\n

See tf.boolean_mask

\n\n

EDIT: another (better ?) way to do it:

\n\n
import tensorflow as tf\n\nx = tf.constant([1, 2, 0, 4])\ny = tf.Variable([1, 2, 0, 4])\nmask = x > 1\nslice_y_greater_than_one = tf.boolean_mask(y, mask)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    print (sess.run(slice_y_greater_than_one)) # [2 4]\n
\n", "system": "" }, { "instruction": "How can I copy a variable in tensorflow", "input": "", "output": "

You asked how to copy a variable in the title, but how to copy a tensor in the question. Let's look at the different possible answers.

\n\n

(1) You want to create a tensor that has the same value that is currently stored in a variable that we'll call var.

\n\n
tensor = tf.identity(var)\n
\n\n

But remember, 'tensor' is a graph node that will have that value when evaluated, and any time you evaluate it, it will grab the current value of var. You can play around with control flow ops such as with_dependencies() to see the ordering of updates to the variable and the timing of the identity.

\n\n

(2) You want to create another variable and set its value to the value currently stored in a variable:

\n\n
import tensorflow as tf\nvar = tf.Variable(0.9)\nvar2 = tf.Variable(0.0)\ncopy_first_variable = var2.assign(var)\ninit = tf.initialize_all_variables()\nsess = tf.Session()\n\nsess.run(init)\n\nprint sess.run(var2)\nsess.run(copy_first_variable)\nprint sess.run(var2)\n
\n\n

(3) You want to define a variable and set its starting value to the same thing you already initialized a variable to (this is what nivwu.. above answered):

\n\n
var2 = tf.Variable(var.initialized_value())\n
\n\n

var2 will get initialized when you call tf.initialize_all_variables. You can't use this to copy var after you've already initialized the graph and started running things.

\n", "system": "" }, { "instruction": "TensorFlow on 32-bit Linux?", "input": "", "output": "

We have only tested the TensorFlow distribution on 64-bit Linux and Mac OS X, and distribute binary packages for those platforms only. Try following the source installation instructions to build a version for your platform.

\n\n

EDIT: One user has published instructions for running TensorFlow on a 32-bit ARM processor, which is promising for other 32-bit architectures. These instructions may have useful pointers for getting TensorFlow and Bazel to work in a 32-bit environment.

\n", "system": "" }, { "instruction": "Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configuration and file/data utilities can be used", "input": "", "output": "

    You need one of them: PyTorch or TensorFlow.
    

\n

You can check if tensorflow is installed or you can reinstall it

\n
    \n
  1. pip uninstall tensorflow
  2. \n
      3. pip install tensorflow==2.2.0 (you can install just tensorflow; it worked the same as tensorflow-gpu)
    
  4. \n
  5. pip uninstall transformers
  6. \n
  7. pip install transformers==3.3.1
  8. \n
\n

    If this doesn't solve it, try upgrading your Python to 3.7.8
    

\n", "system": "" }, { "instruction": "How to run tensorflow with gpu support in docker-compose?", "input": "", "output": "

    I agree that installing all tensorflow-gpu dependencies is rather painful. Fortunately, it's rather easy with Docker, as you only need the NVIDIA Driver and the NVIDIA Container Toolkit (a sort of a plugin). The rest (CUDA, cuDNN) comes inside the Tensorflow images, so you don't need them on the Docker host.
    

\n

    The driver can be deployed as a container too, but I do not recommend that for a workstation. It is meant to be used on servers where there is no GUI (X-server, etc.). The subject of the containerized driver is covered at the end of this post; for now, let's see how to start tensorflow-gpu with docker-compose. The process is the same regardless of whether you have the driver in a container or not.
    

\n

How to launch Tensorflow-GPU with docker-compose

\n

Prerequisites:

\n\n

To enable GPU support for a container you need to create the container with NVIDIA Container Toolkit. There are two ways you can do that:

\n
    \n
  1. You can configure Docker to always use nvidia container runtime. It is fine to do so as it works just as the default runtime unless some NVIDIA-specific environment variables are present (more on that later). This is done by placing "default-runtime": "nvidia" into Docker's daemon.json:
  2. \n
\n

/etc/docker/daemon.json:

\n
{\n  "runtimes": {\n      "nvidia": {\n          "path": "/usr/bin/nvidia-container-runtime",\n          "runtimeArgs": []\n      }\n  },\n  "default-runtime": "nvidia"\n}\n
\n
    \n
  1. You can select the runtime during container creation. With docker-compose it is only possible with format version 2.3.
  2. \n
\n

Here is a sample docker-compose.yml to launch Tensorflow with GPU:

\n
version: "2.3"  # the only version where 'runtime' option is supported\n\nservices:\n  test:\n    image: tensorflow/tensorflow:2.3.0-gpu\n    # Make Docker create the container with NVIDIA Container Toolkit\n    # You don't need it if you set 'nvidia' as the default runtime in\n    # daemon.json.\n    runtime: nvidia\n    # the lines below are here just to test that TF can see GPUs\n    entrypoint:\n      - /usr/local/bin/python\n      - -c\n    command:\n      - "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"\n
\n

By running this with docker-compose up you should see a line with the GPU specs in it. It appears at the end and looks like this:

\n
\n

test_1 | 2021-01-23 11:02:46.500189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/device:GPU:0 with 1624 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)

\n
\n

And that is all you need to launch an official Tensorflow image with GPU.

\n

NVIDIA Environment Variables and custom images

\n

    As I mentioned, NVIDIA Container Toolkit works as the default runtime unless some variables are present. These are listed and explained here. You only need to care about them if you build a custom image and want to enable GPU support in it. Official Tensorflow images with GPU have them inherited from the CUDA images they use as a base, so you only need to start the image with the right runtime as in the example above.
    

\n

If you are interested in customising a Tensorflow image, I wrote another post on that.

\n

Host Configuration for NVIDIA driver in container

\n

    As mentioned in the beginning, this is not something you want on a workstation. The process requires you to start the driver container when no other display driver is loaded (that is, via SSH, for example). Furthermore, at the moment of writing only Ubuntu 16.04, Ubuntu 18.04 and CentOS 7 were supported.
    

\n

There is an official guide and below are extractions from it for Ubuntu 18.04.

\n
    \n
  1. Edit 'root' option in NVIDIA Container Toolkit settings:
  2. \n
\n
sudo sed -i 's/^#root/root/' /etc/nvidia-container-runtime/config.toml\n
\n
    \n
  1. Disable the Nouveau driver modules:
  2. \n
\n
sudo tee /etc/modules-load.d/ipmi.conf <<< "ipmi_msghandler" \\\n  && sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<< "blacklist nouveau" \\\n  && sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf <<< "options nouveau modeset=0"\n
\n

If you are using an AWS kernel, ensure that the i2c_core kernel module is enabled:

\n
sudo tee /etc/modules-load.d/ipmi.conf <<< "i2c_core"\n
\n
    \n
  1. Update the initramfs:
  2. \n
\n
sudo update-initramfs -u\n
\n

Now it's time to reboot for the changes to take place. After reboot check that no nouveau or nvidia modules are loaded. The commands below should return nothing:

\n
lsmod | grep nouveau\nlsmod | grep nvidia\n
\n

Starting driver in container

\n

The guide offers a command to run the driver, I prefer docker-compose. Save the following as driver.yml:

\n
version: "3.0"\nservices:\n  driver:\n    image: nvidia/driver:450.80.02-ubuntu18.04\n    privileged: true\n    restart: unless-stopped\n    volumes:\n    - /run/nvidia:/run/nvidia:shared\n    - /var/log:/var/log\n    pid: "host"\n    container_name: nvidia-driver\n
\n

Use docker-compose -f driver.yml up -d to start the driver container. It will take a couple of minutes to compile modules for your kernel. You may use docker logs nvidia-driver -f to overview the process, wait for 'Done, now waiting for signal' line to appear. Otherwise use lsmod | grep nvidia to see if the driver modules are loaded. When it's ready you should see something like this:

\n
nvidia_modeset       1183744  0\nnvidia_uvm            970752  0\nnvidia              19722240  17 nvidia_uvm,nvidia_modeset\n
\n", "system": "" }, { "instruction": "How to improve data input pipeline performance?", "input": "", "output": "

Mentioning the Solution and the Important observations of @AlexisBRENON in the Answer Section, for the benefit of the Community.

\n\n

Below mentioned are the Important Observations:

\n\n
    \n
      1. According to this GitHub issue, the TFRecordDataset interleaving is a legacy one, so the interleave function is better.
    
  2. \n
      3. batch before map is a good habit (vectorizing your function) and reduces the number of times the mapped function is called.
    
  4. \n
  5. No need of repeat anymore. Since TF2.0, the Keras model API supports the dataset API and can use cache (see the SO post)
  6. \n
  7. Switch from a VarLenFeature to a FixedLenSequenceFeature, removing a useless call to tf.sparse.to_dense.
  8. \n
\n\n

Code for the Pipeline, with improved performance, in line with above observations is mentioned below:

\n\n
def build_dataset(file_pattern):\n    tf.data.Dataset.list_files(\n        file_pattern\n    ).interleave(\n        TFRecordDataset,\n        cycle_length=tf.data.experimental.AUTOTUNE,\n        num_parallel_calls=tf.data.experimental.AUTOTUNE\n    ).shuffle(\n        2048\n    ).batch(\n        batch_size=64,\n        drop_remainder=True,\n    ).map(\n        map_func=parse_examples_batch,\n        num_parallel_calls=tf.data.experimental.AUTOTUNE\n    ).cache(\n    ).prefetch(\n        tf.data.experimental.AUTOTUNE\n    )\n\ndef parse_examples_batch(examples):\n    preprocessed_sample_columns = {\n        \"features\": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True),\n        \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"),\n        \"label\": tf.io.FixedLenFeature((), tf.float32, -1)\n    }\n    samples = tf.io.parse_example(examples, preprocessed_sample_columns)\n    bits_to_float = tf.io.decode_raw(samples[\"booleanFeatures\"], tf.uint8)\n    return (\n        (samples['features'], bits_to_float),\n        tf.expand_dims(samples[\"label\"], 1)\n    )\n
\n", "system": "" }, { "instruction": "How to restore Tensorflow model from .pb file in python?", "input": "", "output": "

The following code will read the model and print out the names of the nodes in the graph.

\n\n
import tensorflow as tf\nfrom tensorflow.python.platform import gfile\nGRAPH_PB_PATH = './frozen_model.pb'\nwith tf.Session() as sess:\n   print(\"load graph\")\n   with gfile.FastGFile(GRAPH_PB_PATH,'rb') as f:\n       graph_def = tf.GraphDef()\n   graph_def.ParseFromString(f.read())\n   sess.graph.as_default()\n   tf.import_graph_def(graph_def, name='')\n   graph_nodes=[n for n in graph_def.node]\n   names = []\n   for t in graph_nodes:\n      names.append(t.name)\n   print(names)\n
\n\n

You are not freezing the graph properly; that is why you are getting different results: the weights are not being stored in your model. You can use freeze_graph.py (link) to obtain a correctly frozen graph.

\n", "system": "" }, { "instruction": "Can not use both bias and batch normalization in convolution layers", "input": "", "output": "

Batch normalization already includes the addition of a bias term. Recall that BatchNorm is effectively:

\n\n
gamma * normalized(x) + bias\n
\n\n

So there is no need (and it makes no sense) to add another bias term in the convolution layer. Simply speaking, BatchNorm shifts the activations by their mean values, so any constant offset will be canceled out.

\n\n
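A quick NumPy sketch (a toy, standalone batch-norm implementation, not TensorFlow's) illustrates the cancellation: adding any constant bias before normalization leaves the output unchanged.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-8):
    # Normalize activations by their batch mean/variance, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 4))

# A constant bias added before BatchNorm is subtracted back out by the mean.
print(np.allclose(batch_norm(x), batch_norm(x + 3.7)))
```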

If you still want to do this, you need to remove the normalizer_fn argument and add BatchNorm as a single layer. Like I said, this makes no sense.

\n\n

But the solution would be something like

\n\n
net = slim.conv2d(net, normalizer_fn=None, ...)\nnet = slim.batch_norm(net)\n
\n\n

Note that BatchNorm relies on non-gradient updates, so you either need to use an optimizer that is compatible with the UPDATE_OPS collection, or you need to add the update dependency manually via tf.control_dependencies.

\n\n

Long story short: even if you implement ConvWithBias+BatchNorm, it will behave like ConvWithoutBias+BatchNorm, just as stacking multiple fully-connected layers without activation functions behaves like a single one.

\n", "system": "" }, { "instruction": "How can I use tensorflow serving for multiple models", "input": "", "output": "

I built a docker image from the official TensorFlow Serving docker file.

\n\n

Then, inside the docker image, run:

\n\n
/usr/local/bin/tensorflow_model_server --port=9000 --model_config_file=/serving/models.conf\n
\n\n

Here /serving/models.conf is a file similar to yours.

\n", "system": "" }, { "instruction": "TensorFlow: Is there a way to measure FLOPS for a model?", "input": "", "output": "
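For reference, such a model_config_file uses TensorFlow Serving's model_config_list format; a minimal sketch (the model names and paths here are placeholders, substitute your own):

```
model_config_list: {
  config: {
    name: "model_a",
    base_path: "/serving/models/model_a",
    model_platform: "tensorflow"
  },
  config: {
    name: "model_b",
    base_path: "/serving/models/model_b",
    model_platform: "tensorflow"
  }
}
```

Each base_path must contain numbered version subdirectories (e.g. .../model_a/1/) holding the exported SavedModels.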

I would like to build on Tobias Schnek's answer as well as answering the original question: how to get FLOP from a pb file.

\n\n

Running the first snippet of code from Tobias' answer with TensorFlow 1.6.0

\n\n
g = tf.Graph()\nrun_meta = tf.RunMetadata()\nwith g.as_default():\n    A = tf.Variable(tf.random_normal([25,16]))\n    B = tf.Variable(tf.random_normal([16,9]))\n    C = tf.matmul(A,B)\n\n    opts = tf.profiler.ProfileOptionBuilder.float_operation()    \n    flops = tf.profiler.profile(g, run_meta=run_meta, cmd='op', options=opts)\n    if flops is not None:\n        print('Flops should be ~',2*25*16*9)\n        print('TF stats gives',flops.total_float_ops)\n
\n\n

We get the following output:

\n\n
Flops should be ~ 7200\nTF stats gives 8288\n
\n\n

So, why do we get 8288 instead of the expected result 7200=2*25*16*9[a]? The answer is in the way the tensors A and B are initialised. Initialising with a Gaussian distribution costs some FLOP. Changing the definition of A and B by

\n\n
    A = tf.Variable(initial_value=tf.zeros([25, 16]))\n    B = tf.Variable(initial_value=tf.zeros([16, 9]))\n
\n\n

gives the expected output 7200.

\n\n

Usually, a network's variables are initialised with Gaussian distributions, among other schemes. Most of the time we are not interested in the initialisation FLOP, as they are performed once during initialisation and happen during neither training nor inference. So, how could one get the exact number of FLOP disregarding the initialisation FLOP?

\n\n

Freeze the graph into a pb file. Calculating the FLOP from a pb file was, in fact, the OP's use case.

\n\n

The following snippet illustrates this:

\n\n
import tensorflow as tf\nfrom tensorflow.python.framework import graph_util\n\ndef load_pb(pb):\n    with tf.gfile.GFile(pb, \"rb\") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(graph_def, name='')\n        return graph\n\n# ***** (1) Create Graph *****\ng = tf.Graph()\nsess = tf.Session(graph=g)\nwith g.as_default():\n    A = tf.Variable(initial_value=tf.random_normal([25, 16]))\n    B = tf.Variable(initial_value=tf.random_normal([16, 9]))\n    C = tf.matmul(A, B, name='output')\n    sess.run(tf.global_variables_initializer())\n    flops = tf.profiler.profile(g, options = tf.profiler.ProfileOptionBuilder.float_operation())\n    print('FLOP before freezing', flops.total_float_ops)\n# *****************************        \n\n# ***** (2) freeze graph *****\noutput_graph_def = graph_util.convert_variables_to_constants(sess, g.as_graph_def(), ['output'])\n\nwith tf.gfile.GFile('graph.pb', \"wb\") as f:\n    f.write(output_graph_def.SerializeToString())\n# *****************************\n\n\n# ***** (3) Load frozen graph *****\ng2 = load_pb('./graph.pb')\nwith g2.as_default():\n    flops = tf.profiler.profile(g2, options = tf.profiler.ProfileOptionBuilder.float_operation())\n    print('FLOP after freezing', flops.total_float_ops)\n
\n\n

outputs

\n\n
FLOP before freezing 8288\nFLOP after freezing 7200\n
\n\n
\n\n

[a] Usually the FLOP of a matrix multiplication are mq(2p -1) for the product AB where A[m, p] and B[p, q] but TensorFlow returns 2mpq for some reason. An issue has been opened to understand why.

\n", "system": "" }, { "instruction": "When global_variables_initializer() is actually required", "input": "", "output": "
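The two counting conventions from footnote [a] can be checked with quick arithmetic for the 25x16 by 16x9 example above:

```python
# FLOP counts for C = A @ B, with A of shape [m, p] and B of shape [p, q].
m, p, q = 25, 16, 9
tf_count = 2 * m * p * q        # TensorFlow's convention: one multiply and one add per term
textbook = m * q * (2 * p - 1)  # per output entry: p multiplies and p - 1 adds
print(tf_count, textbook)       # 7200 6975
```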

tf.global_variables_initializer is a shortcut to initialize all global variables. It is not required, and you can use other ways to initialize your variables; in the case of simple scripts, sometimes you do not need to initialize them at all.

\n\n

Everything except variables (constants and placeholders) requires no initialization. But every used variable (even if its value is constant) must be initialized. The following will give you an error, although z is just a 0-d tensor holding a single number.

\n\n
import tensorflow as tf\nz = tf.Variable(4)\nwith tf.Session() as session:\n        print(session.run(z)) \n
\n\n

I highlighted the word used because if you merely have variables which are never run (or none of your runs depend on them), you do not need to initialize them.

\n\n
\n\n

For example, this code will execute without any problems even though it has two variables and one operation that depends on them, because the run does not require them.

\n\n
import tensorflow as tf\nx = tf.constant(35, name='x')\ny = tf.Variable(x + 5, name='y')\nz = tf.Variable(4)\na = y + z\nwith tf.Session() as session:\n        print(\"x = \", session.run(x)) \n
\n", "system": "" }, { "instruction": "how to install tensorflow on anaconda python 3.6", "input": "", "output": "

UPDATE: TensorFlow supports Python 3.6 on Windows since version 1.2.0 (see the release notes)

\n\n
\n\n

TensorFlow only supports Python 3.5 64-bit as of now. Support for Python 3.6 is a work in progress and you can track it here as well as chime in the discussion.

\n\n

The only alternative to use Python 3.6 with TensorFlow on Windows currently is building TF from source.

\n\n

If you don't want to uninstall your Anaconda distribution for Python 3.6 and install a previous release you can create a conda environment for Python=3.5 as in:\n\nconda create --name tensorflow python=3.5\nactivate tensorflow\npip install tensorflow-gpu\n

\n", "system": "" }, { "instruction": "Tensorflow - ValueError: Parent directory of trained_variables.ckpt doesn't exist, can't save", "input": "", "output": "
saver.save(sess, \"./trained_variables.ckpt\")\n
\n", "system": "" }, { "instruction": "Python: rewrite a looping numpy math function to run on GPU", "input": "", "output": "

Introduction and solution code

\n\n

Well, you asked for it! So, listed in this post is an implementation with PyCUDA, which provides lightweight wrappers exposing most of CUDA's capabilities from within Python. We will use its SourceModule functionality, which lets us write and compile CUDA kernels without leaving the Python environment.

\n\n

Getting to the problem at hand, among the computations involved we have a sliding maximum and minimum, a few differences, divisions and comparisons. For the maximum and minimum parts, which involve block-wise max/min finding (for each sliding window), we will use the reduction technique, as discussed in some detail here. This is done at the block level. For the upper-level iteration across sliding windows, we use grid-level indexing into CUDA resources. For more info on this block and grid layout, please refer to page 18. PyCUDA also supports builtins for computing reductions like max and min, but we would lose control; specifically, we intend to use specialized memory, such as shared and constant memory, to leverage the GPU at a near-optimum level.

\n\n

Listing out the PyCUDA-NumPy solution code -

\n\n

1] PyCUDA part -

\n\n
import pycuda.autoinit\nimport pycuda.driver as drv\nimport numpy as np\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n#define TBP 1024 // THREADS_PER_BLOCK\n\n__device__ void get_Bmax_Cmin(float* out, float *d1, float *d2, int L, int offset)\n{\n    int tid = threadIdx.x;\n    int inv = TBP;\n    __shared__ float dS[TBP][2];\n\n    dS[tid][0] = d1[tid+offset];  \n    dS[tid][1] = d2[tid+offset];         \n    __syncthreads();\n\n    if(tid<L-TBP)  \n    {\n        dS[tid][0] = fmaxf(dS[tid][0] , d1[tid+inv+offset]);\n        dS[tid][1] = fminf(dS[tid][1] , d2[tid+inv+offset]);\n    }\n    __syncthreads();\n    inv = inv/2;\n\n    while(inv!=0)   \n    {\n        if(tid<inv)\n        {\n            dS[tid][0] = fmaxf(dS[tid][0] , dS[tid+inv][0]);\n            dS[tid][1] = fminf(dS[tid][1] , dS[tid+inv][1]);\n        }\n        __syncthreads();\n        inv = inv/2;\n    }\n    __syncthreads();\n\n    if(tid==0)\n    {\n        out[0] = dS[0][0];\n        out[1] = dS[0][1];\n    }   \n    __syncthreads();\n}\n\n__global__ void main1(float* out, float *d0, float *d1, float *d2, float *d3, float *lowL, float *highL, int *BLOCKLEN)\n{\n    int L = BLOCKLEN[0];\n    int tid = threadIdx.x;\n    int iterID = blockIdx.x;\n    float Bmax_Cmin[2];\n    int inv;\n    float Cmin, dif;   \n    __shared__ float dS[TBP*2];   \n\n    get_Bmax_Cmin(Bmax_Cmin, d1, d2, L, iterID);  \n    Cmin = Bmax_Cmin[1];\n    dif = (Bmax_Cmin[0] - Cmin);\n\n    inv = TBP;\n\n    dS[tid] = (d0[tid+iterID] + d1[tid+iterID] + d2[tid+iterID] + d3[tid+iterID] - 4.0*Cmin) / (4.0*dif);\n    __syncthreads();\n\n    if(tid<L-TBP)  \n        dS[tid+inv] = (d0[tid+inv+iterID] + d1[tid+inv+iterID] + d2[tid+inv+iterID] + d3[tid+inv+iterID] - 4.0*Cmin) / (4.0*dif);                   \n\n     dS[tid] = ((dS[tid] >= lowL[tid]) & (dS[tid] <= highL[tid])) ? 
1 : 0;\n     __syncthreads();\n\n     if(tid<L-TBP)\n         dS[tid] += ((dS[tid+inv] >= lowL[tid+inv]) & (dS[tid+inv] <= highL[tid+inv])) ? 1 : 0;\n     __syncthreads();\n\n    inv = inv/2;\n    while(inv!=0)   \n    {\n        if(tid<inv)\n            dS[tid] += dS[tid+inv];\n        __syncthreads();\n        inv = inv/2;\n    }\n\n    if(tid==0)\n        out[iterID] = dS[0];\n    __syncthreads();\n\n}\n\"\"\")\n
\n\n

Please note that THREADS_PER_BLOCK, TBP, is to be set based on the batchSize. The rule of thumb here is to assign TBP the largest power of 2 that is less than batchSize. Thus, for batchSize = 2000, we need TBP = 1024.

\n\n

2] NumPy part -

\n\n
def gpu_app_v1(A, B, C, D, batchSize, minimumLimit):\n    func1 = mod.get_function(\"main1\")\n    outlen = len(A)-batchSize+1\n\n    # Set block and grid sizes\n    BSZ = (1024,1,1)\n    GSZ = (outlen,1)\n\n    dest = np.zeros(outlen).astype(np.float32)\n    N = np.int32(batchSize)\n    func1(drv.Out(dest), drv.In(A), drv.In(B), drv.In(C), drv.In(D), \\\n                     drv.In(data2b), drv.In(data2a),\\\n                     drv.In(N), block=BSZ, grid=GSZ)\n    idx = np.flatnonzero(dest >= minimumLimit)\n    return idx, dest[idx]\n
\n\n

Benchmarking

\n\n

I have tested on a GTX 960M. Please note that PyCUDA expects arrays to be contiguous, so we need to slice out the columns and make copies. I am expecting/assuming that the data could be read from the files such that it is spread along rows instead of columns, which is why I am keeping those copies out of the benchmarking function for now.

\n\n

Original approach -

\n\n
def org_app(data1, batchSize, minimumLimit):\n    resultArray = []\n    for rowNr in  range(data1.shape[0]-batchSize+1):\n        tmp_df = data1[rowNr:rowNr + batchSize] #rolling window\n        result = doTheMath(tmp_df, data2a, data2b)\n        if (result >= minimumLimit):\n            resultArray.append([rowNr , result]) \n    return resultArray\n
\n\n

Timings and verification -

\n\n
In [2]: #Declare variables\n   ...: batchSize = 2000\n   ...: sampleSize = 50000\n   ...: resultArray = []\n   ...: minimumLimit = 490 #use 400 on the real sample data\n   ...: \n   ...: #Create Random Sample Data\n   ...: data1 = np.random.uniform(1, 100000, (sampleSize + batchSize, 4)).astype(np.float32)\n   ...: data2b = np.random.uniform(0, 1, (batchSize)).astype(np.float32)\n   ...: data2a = data2b + np.random.uniform(0, 1, (batchSize)).astype(np.float32)\n   ...: \n   ...: # Make column copies\n   ...: A = data1[:,0].copy()\n   ...: B = data1[:,1].copy()\n   ...: C = data1[:,2].copy()\n   ...: D = data1[:,3].copy()\n   ...: \n   ...: gpu_out1,gpu_out2 = gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n   ...: cpu_out1,cpu_out2 = np.array(org_app(data1, batchSize, minimumLimit)).T\n   ...: print(np.allclose(gpu_out1, cpu_out1))\n   ...: print(np.allclose(gpu_out2, cpu_out2))\n   ...: \nTrue\nFalse\n
\n\n

So, there are some differences between the CPU and GPU counts. Let's investigate them -

\n\n
In [7]: idx = np.flatnonzero(~np.isclose(gpu_out2, cpu_out2))\n\nIn [8]: idx\nOut[8]: array([12776, 15208, 17620, 18326])\n\nIn [9]: gpu_out2[idx] - cpu_out2[idx]\nOut[9]: array([-1., -1.,  1.,  1.])\n
\n\n

There are four instances of non-matching counts, each off by at most 1. Upon research, I came across some information on this. Basically, since we are using math intrinsics for the max and min computations, those, I think, cause the last binary bit in the floating-point representation to differ from the CPU counterpart. This is termed ULP error and has been discussed in detail here and here.

\n\n
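The same class of last-bit discrepancy can be reproduced on the CPU alone: summing identical float32 data in two different orders yields results that agree to within a few ULP but are not bit-identical (a small NumPy sketch, not part of the original benchmark):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(10000, dtype=np.float32)

pairwise = np.sum(x)          # NumPy uses pairwise summation internally
sequential = np.float32(0.0)
for v in x:                   # naive left-to-right accumulation
    sequential += v

# Both results are "correct"; they just round differently along the way.
print(abs(float(pairwise) - float(sequential)) / float(pairwise))
```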

Finally, putting that issue aside, let's get to the most important bit, the performance -

\n\n
In [10]: %timeit org_app(data1, batchSize, minimumLimit)\n1 loops, best of 3: 2.18 s per loop\n\nIn [11]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n10 loops, best of 3: 82.5 ms per loop\n\nIn [12]: 2180.0/82.5\nOut[12]: 26.424242424242426\n
\n\n

Let's try with bigger datasets. With sampleSize = 500000, we get -

\n\n
In [14]: %timeit org_app(data1, batchSize, minimumLimit)\n1 loops, best of 3: 23.2 s per loop\n\nIn [15]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n1 loops, best of 3: 821 ms per loop\n\nIn [16]: 23200.0/821\nOut[16]: 28.25822168087698\n
\n\n

So, the speedup stays roughly constant at around 27x.

\n\n

Limitations :

\n\n

1) We are using float32 numbers, as GPUs work best with those. Double precision, especially on non-server GPUs, isn't popular when it comes to performance, and since you are working with such a GPU, I tested with float32.

\n\n

Further improvement :

\n\n

1) We could use faster constant memory to feed in data2a and data2b, rather than use global memory.

\n", "system": "" }, { "instruction": "How to initialise only optimizer variables in Tensorflow?", "input": "", "output": "

Both current answers kinda work by filtering the variable names for the 'Momentum' string. But that is very brittle, in several ways:

\n\n
    \n
  1. It could silently (re-)initialize some other variables you don't actually want to reset! Either simply because of a name-clash, or because you have a more complex graph and optimize different parts separately, for example.
  2. \n
  3. It will only work for one specific optimizer, and how do you know the names to look out for for others?
  4. \n
  5. Bonus: an update to tensorflow might silently break your code.
  6. \n
\n\n

Fortunately, tensorflow's abstract Optimizer class has a mechanism for this: these extra optimizer variables are called "slots", and you can get all slot names of an optimizer using the get_slot_names() method:

\n\n
opt = tf.train.MomentumOptimizer(...)\nprint(opt.get_slot_names())\n# prints ['momentum']\n
\n\n

And you can get the variable corresponding to the slot for a specific (trainable) variable v using the get_slot(var, slot_name) method:

\n\n
opt.get_slot(some_var, 'momentum')\n
\n\n

Putting all this together, you can create an op that initializes the optimizer's state as follows:

\n\n
var_list = # list of vars to optimize, e.g. \n           # tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\nopt = tf.train.MomentumOptimizer(0.1, 0.95)\nstep_op = opt.minimize(loss, var_list=var_list)\nreset_opt_op = tf.variables_initializer([opt.get_slot(var, name) for name in opt.get_slot_names() for var in var_list])\n
\n\n

This will really only reset the correct variables, and be robust across optimizers.

\n\n

Except for one unfortunate caveat: AdamOptimizer. That one also keeps a counter for how often it's been called. That means you should really think hard about what you're doing here anyways, but for completeness' sake, you can get its extra states as opt._get_beta_accumulators(). The returned list should be added to the list in the above reset_opt_op line.

\n", "system": "" }, { "instruction": "In TensorFlow,what's the meaning of ":0" in a Variable's name?", "input": "", "output": "

It has to do with representation of tensors in underlying API. A tensor is a value associated with output of some op. In case of variables, there's a Variable op with one output. An op can have more than one output, so those tensors get referenced to as <op>:0, <op>:1 etc. For instance if you use tf.nn.top_k, there are two values created by this op, so you may see TopKV2:0 and TopKV2:1

\n\n
a,b=tf.nn.top_k([1], 1)\nprint a.name # => 'TopKV2:0'\nprint b.name # => 'TopKV2:1'\n
\n\n
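Since the convention is just `<op_name>:<output_index>`, such a name can be split with plain Python (no TensorFlow needed; a small illustrative helper):

```python
# TensorFlow tensor names follow the "<op_name>:<output_index>" convention.
def split_tensor_name(name):
    op_name, _, index = name.rpartition(":")
    return op_name, int(index)

print(split_tensor_name("TopKV2:1"))    # ('TopKV2', 1)
print(split_tensor_name("Variable:0"))  # ('Variable', 0)
```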

How to understand the term `tensor` in TensorFlow?

\n", "system": "" }, { "instruction": "Understanding Tensorflow LSTM Input shape", "input": "", "output": "

The documentation of tf.nn.dynamic_rnn states:

\n\n
\n

inputs: The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements.

\n
\n\n

In your case, this means that the input should have a shape of [batch_size, 10, 2]. Instead of training on all 4000 sequences at once, you'd use only batch_size many of them in each training iteration. Something like the following should work (added reshape for clarity):

\n\n
batch_size = 32\n# batch_size sequences of length 10 with 2 values for each timestep\ninput = get_batch(X, batch_size).reshape([batch_size, 10, 2])\n# Create LSTM cell with state size 256. Could also use GRUCell, ...\n# Note: state_is_tuple=False is deprecated;\n# the option might be completely removed in the future\ncell = tf.nn.rnn_cell.LSTMCell(256, state_is_tuple=True)\noutputs, state = tf.nn.dynamic_rnn(cell,\n                                   input,\n                                   sequence_length=[10]*batch_size,\n                                   dtype=tf.float32)\n
\n\n

From the documentation, outputs will be of shape [batch_size, 10, 256], i.e. one 256-output for each timestep. state will be a tuple of shapes [batch_size, 256]. You could predict your final value, one for each sequence, from that:

\n\n
predictions = tf.contrib.layers.fully_connected(state.h,\n                                                num_outputs=1,\n                                                activation_fn=None)\nloss = get_loss(get_batch(Y).reshape([batch_size, 1]), predictions)\n
\n\n

The number 256 in the shapes of outputs and state is determined by cell.output_size resp. cell.state_size. When creating the LSTMCell like above, these are the same. Also see the LSTMCell documentation.

\n", "system": "" }, { "instruction": "Tensorflow: Cuda compute capability 3.0. The minimum required Cuda capability is 3.5", "input": "", "output": "
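As a side note, the [batch_size, max_time, input_size] layout is easy to check in NumPy before feeding anything to dynamic_rnn (X here is hypothetical random data standing in for your 4000 preprocessed sequences):

```python
import numpy as np

# 4000 sequences, each with 10 timesteps and 2 values per timestep.
X = np.random.rand(4000, 10, 2).astype(np.float32)

batch_size = 32
batch = X[:batch_size]  # one training batch: [batch_size, max_time, input_size]
print(batch.shape)      # (32, 10, 2)
```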

I have installed TensorFlow revision 1.8, which recommends CUDA 9.0. I am using a GTX 650M card, which has CUDA compute capability 3.0, and it now works like a charm. The OS is Ubuntu 18.04. Below are the detailed steps:

\n\n

Installing dependencies

\n\n

I have included ffmpeg and some related packages for my OpenCV 3.4 compilation; if not required, do not install them.\nRun the below commands:

\n\n
sudo apt-get update \nsudo apt-get dist-upgrade -y\nsudo apt-get autoremove -y\nsudo apt-get upgrade\nsudo add-apt-repository ppa:jonathonf/ffmpeg-3 -y\nsudo apt-get update\nsudo apt-get install build-essential -y\nsudo apt-get install ffmpeg -y\nsudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev -y\nsudo apt-get install python-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev -y\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev -y\nsudo apt-get install libxvidcore-dev libx264-dev -y\nsudo apt-get install unzip qtbase5-dev python-dev python3-dev python-numpy python3-numpy -y\nsudo apt-get install libopencv-dev libgtk-3-dev libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff5-dev libjasper-dev -y\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev -y\nsudo apt-get install libv4l-dev libtbb-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev -y\nsudo apt-get install libvorbis-dev libxvidcore-dev v4l-utils vtk6 -y\nsudo apt-get install liblapacke-dev libopenblas-dev libgdal-dev checkinstall -y\nsudo apt-get install libgtk-3-dev -y\nsudo apt-get install libatlas-base-dev gfortran -y\nsudo apt-get install qt-sdk -y\nsudo apt-get install python2.7-dev python3.5-dev python-tk -y\nsudo apt-get install cython libgflags-dev -y\nsudo apt-get install tesseract-ocr -y\nsudo apt-get install tesseract-ocr-eng -y \nsudo apt-get install tesseract-ocr-ell -y\nsudo apt-get install gstreamer1.0-python3-plugin-loader -y\nsudo apt-get install libdc1394-22-dev -y\nsudo apt-get install openjdk-8-jdk\nsudo apt-get install pkg-config zip g++-6 gcc-6 zlib1g-dev unzip  git\nsudo wget https://bootstrap.pypa.io/get-pip.py\nsudo python get-pip.py\nsudo pip install -U pip\nsudo pip install -U numpy\nsudo pip install -U pandas\nsudo pip install -U wheel\nsudo pip install -U six\n
\n\n

Installing the nvidia driver

\n\n

Run the below commands:

\n\n
sudo add-apt-repository ppa:graphics-drivers/ppa\nsudo apt-get update\nsudo apt-get install nvidia-390 -y\n
\n\n

Reboot, then run nvidia-smi; it should report your driver version and GPU details.

\n\n

gcc-6 and g++-6 checks.

\n\n

gcc-6 and g++-6 are required for CUDA 9.0; run the below commands:

\n\n
cd /usr/bin \nsudo rm -rf gcc gcc-ar gcc-nm gcc-ranlib g++\nsudo ln -s gcc-6 gcc\nsudo ln -s gcc-ar-6 gcc-ar\nsudo ln -s gcc-nm-6 gcc-nm\nsudo ln -s gcc-ranlib-6 gcc-ranlib\nsudo ln -s g++-6 g++\n
\n\n

Installing CUDA 9.0

\n\n

Go to https://developer.nvidia.com/cuda-90-download-archive. Select options: Linux->x86_64->Ubuntu->17.04->deb(local).\nDownload the main file and the two patches.\nRun below commands:

\n\n
sudo dpkg -i cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64.deb\nsudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub\nsudo apt-get update\nsudo apt-get install cuda\n
\n\n

Navigate to the first patch on your PC and double click it, it will automatically execute, follow same for second patch.

\n\n

Add below to lines to your ~/.bashrc file and give it a reboot:

\n\n
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
\n\n

Installing cudnn 7.1.4 for CUDA 9.0

\n\n

Download the tar file from https://developer.nvidia.com/cudnn and extract it to your Downloads folder.\nThe download requires an NVIDIA developer login (free sign-up).\nRun the below commands:

\n\n
cd ~/Downloads/cudnn-9.0-linux-x64-v7.1/cuda\nsudo cp include/* /usr/local/cuda/include/\nsudo cp lib64/libcudnn.so.7.1.4 lib64/libcudnn_static.a /usr/local/cuda/lib64/\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -s libcudnn.so.7.1.4 libcudnn.so.7\nsudo ln -s libcudnn.so.7 libcudnn.so\n
\n\n

Installing NCCL 2.2.12 for CUDA 9.0

\n\n

Download the tar file from https://developer.nvidia.com/nccl and extract it to your Downloads folder.\nThe download requires an NVIDIA developer login (free sign-up).\nRun the below commands:

\n\n
sudo mkdir -p /usr/local/cuda/nccl/lib /usr/local/cuda/nccl/include\ncd ~/Downloads/nccl-repo-ubuntu1604-2.2.12-ga-cuda9.0_1-1_amd64/\nsudo cp *.txt /usr/local/cuda/nccl\nsudo cp include/*.h /usr/include/\nsudo cp lib/libnccl.so.2.2.12 lib/libnccl_static.a /usr/lib/x86_64-linux-gnu/\nsudo ln -s /usr/include/nccl.h /usr/local/cuda/nccl/include/nccl.h\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -s libnccl.so.2.2.12 libnccl.so.2\nsudo ln -s libnccl.so.2 libnccl.so\nfor i in libnccl*; do sudo ln -s /usr/lib/x86_64-linux-gnu/$i /usr/local/cuda/nccl/lib/$i; done\n
\n\n

Install Bazel (the recommended manual installation of bazel worked; for reference: https://docs.bazel.build/versions/master/install-ubuntu.html#install-with-installer-ubuntu)

\n\n

Download \"bazel-0.13.1-installer-darwin-x86_64.sh\" from https://github.com/bazelbuild/bazel/releases\nRun the below commands:

\n\n
chmod +x bazel-0.13.1-installer-linux-x86_64.sh\n./bazel-0.13.1-installer-linux-x86_64.sh --user\nexport PATH=\"$PATH:$HOME/bin\"\n
\n\n

Compiling Tensorflow

\n\n

We will compile with CUDA, with XLA JIT (oh yeah) and with jemalloc as malloc support, so we answer yes for those.\nRun the below commands and answer the queries as shown in the configuration transcript:

\n\n
git clone https://github.com/tensorflow/tensorflow\ncd tensorflow\ngit checkout r1.8\n./configure\nYou have bazel 0.13.0 installed.\nPlease specify the location of python. [Default is /usr/bin/python]:\nPlease input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]\nDo you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y\njemalloc as malloc support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n\nNo Google Cloud Platform support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Hadoop File System support? [Y/n]: n\nNo Hadoop File System support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n\nNo Amazon S3 File System support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n\nNo Apache Kafka Platform support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with XLA JIT support? [y/N]: y\nXLA JIT support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with GDR support? [y/N]: n\nNo GDR support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with VERBS support? [y/N]: n\nNo VERBS support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n\nNo OpenCL SYCL support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with CUDA support? [y/N]: y\nCUDA support will be enabled for TensorFlow.\nPlease specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]:\nPlease specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:\nPlease specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4\nPlease specify the location where cuDNN 7 library is installed. Refer to README.md for more details. 
[Default is /usr/local/cuda]:\nDo you wish to build TensorFlow with TensorRT support? [y/N]: n\nNo TensorRT support will be enabled for TensorFlow.\nPlease specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]: 2.2.12\nPlease specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local/cuda/nccl\nPlease specify a list of comma-separated Cuda compute capabilities you want to build with.\nYou can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.\nPlease note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.0]\nDo you want to use clang as CUDA compiler? [y/N]: n\nnvcc will be used as CUDA compiler.\nPlease specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/x86_64-linux-gnu-gcc-7]: /usr/bin/gcc-6\nDo you wish to build TensorFlow with MPI support? [y/N]: n\nNo MPI support will be enabled for TensorFlow.\nPlease specify optimization flags to use during compilation when bazel option \"--config=opt\" is specified [Default is -march=native]:\nWould you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n\nNot configuring the WORKSPACE for Android builds.\nPreconfigured Bazel build configs. You can use any of the below by adding \"--config=<>\" to your build command. See tools/bazel.rc for more details.\n --config=mkl          # Build with MKL support.\n\n --config=monolithic   # Config for mostly static monolithic build.\n\nConfiguration finished\n
\n\n

Now, to compile tensorflow, run the below command. This is super RAM-consuming and will take time. You can remove \"--local_resources 2048,.5,1.0\" from the line below if you have a lot of RAM; with it, the build works on 2 GB of RAM.

\n\n
bazel build --config=opt --config=cuda --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package\n
\n\n

Once the compilation is completed, the build output will confirm that it was a success.

\n\n

Build the wheel file, run below:

\n\n
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n
\n\n

Install the generated wheel file using pip

\n\n
sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl\n
\n\n

To explore the visible devices, you can now run tensorflow, e.g. from an ipython terminal.

\n\n

\"enter

\n", "system": "" }, { "instruction": "What is the purpose of the tf.contrib module in Tensorflow?", "input": "", "output": "

In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.

\n\n

The code in tf.contrib isn't supported by the Tensorflow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.

\n\n

The source of tf.contrib.layers.sparse_column_with_hash_bucket can be found at

\n\n

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py#L365

\n", "system": "" }, { "instruction": "In what order should we tune hyperparameters in Neural Networks?", "input": "", "output": "

My general order is:

\n\n
    \n
  1. Batch size, as it will largely affect the training time of future experiments.
  2. \n
  3. Architecture of the network:\n\n
  4. \n
  5. Rest (dropout, L2 reg, etc.)
  6. \n
\n\n

Dependencies:

\n\n

I'd assume that the optimal values of

\n\n\n\n

strongly depend on each other. I am not an expert on that field though.

\n\n

As for your hyperparameters:

\n\n\n", "system": "" }, { "instruction": "TensorFlow: Unpooling", "input": "", "output": "

I don't think there is an official unpooling layer yet, which is frustrating because you have to use image resizing (bilinear interpolation or nearest neighbor), which acts like an average unpooling operation and is really slow. Look at the tf API in the 'image' section and you will find it.

\n\n

Tensorflow has a maxpooling_with_argmax thing where you get your maxpooled output as well as the activation map, which is nice as you could use it in an unpooling layer to preserve the 'lost' spatial information, but it seems there isn't such an unpooling operation that does it. I guess that they are planning to add it ... soon.

\n\n
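The idea of pairing the argmax map with an unpooling step can be sketched in plain NumPy. This is only an illustration of the concept, not TensorFlow's API; `max_pool_with_argmax_1d` and `unpool_1d` are made-up helper names:

```python
import numpy as np

def max_pool_with_argmax_1d(x, k=2):
    # Toy 1-D max pool that also records where each max came from,
    # mimicking the pair of outputs the argmax-pooling op returns.
    x = np.asarray(x, dtype=float)
    windows = x.reshape(-1, k)
    pooled = windows.max(axis=1)
    argmax = windows.argmax(axis=1) + np.arange(0, len(x), k)
    return pooled, argmax

def unpool_1d(pooled, argmax, size):
    # Place each pooled value back at its original position;
    # everything else stays zero (the otherwise "lost" information).
    out = np.zeros(size)
    out[argmax] = pooled
    return out

pooled, idx = max_pool_with_argmax_1d([1, 3, 2, 0])  # pooled=[3, 2], idx=[1, 2]
restored = unpool_1d(pooled, idx, 4)                 # [0, 3, 2, 0]
```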

Edit: I found someone on Google Discuss a week ago who seems to have implemented something like this, but I personally haven't tried it yet.\nhttps://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66

\n", "system": "" }, { "instruction": "Avoid tensorflow print on standard error", "input": "", "output": "

This was recently fixed, and should be available if you upgrade to TensorFlow 0.12 or later.

\n\n

To disable all logging output from TensorFlow, set the following environment variable before launching Python:

\n\n
$ export TF_CPP_MIN_LOG_LEVEL=3\n$ python ...\n
\n\n

You can also adjust the verbosity by changing the value of TF_CPP_MIN_LOG_LEVEL:

\n\n\n", "system": "" }, { "instruction": "Tensorflow multiple sessions with multiple GPUs", "input": "", "output": "

TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following.

\n\n
    \n
  1. Run each session in a different Python process.
  2. \n
  3. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:

    \n\n
    $ CUDA_VISIBLE_DEVICES=0 python my_script.py  # Uses GPU 0.\n$ CUDA_VISIBLE_DEVICES=1 python my_script.py  # Uses GPU 1.\n$ CUDA_VISIBLE_DEVICES=2,3 python my_script.py  # Uses GPUs 2 and 3.\n
    \n\n

    Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. \"/gpu:0\" etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.

  4. \n
\n", "system": "" }, { "instruction": "How to deal with batches with variable-length sequences in TensorFlow?", "input": "", "output": "

You can use the ideas of bucketing and padding which are described in:

\n\n

    Sequence-to-Sequence Models

\n\n

Also, the rnn function which creates RNN network accepts parameter sequence_length.

\n\n

As an example, you can create buckets of sentences of the same size, pad them with the necessary amount of zeros (or placeholders which stand for a zero word), and afterwards feed them along with seq_length = len(zero_words).

\n\n
seq_length = tf.placeholder(tf.int32)\noutputs, states = rnn.rnn(cell, inputs, initial_state=initial_state, sequence_length=seq_length)\n\nsess = tf.Session()\nfeed = {\n    seq_length: 20,\n    #other feeds\n}\nsess.run(outputs, feed_dict=feed)\n
\n\n
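To make the padding step concrete, here is a small NumPy sketch (`pad_batch` is a hypothetical helper, not a TensorFlow function):

```python
import numpy as np

def pad_batch(seqs, pad_value=0):
    # Pad every sequence in a bucket up to the longest one and
    # keep the true lengths around to feed as sequence_length.
    max_len = max(len(s) for s in seqs)
    batch = np.full((len(seqs), max_len), pad_value)
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = s
    lengths = np.array([len(s) for s in seqs])
    return batch, lengths

batch, lengths = pad_batch([[1, 2, 3], [4, 5], [6]])
# batch   -> [[1 2 3], [4 5 0], [6 0 0]]
# lengths -> [3 2 1]
```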

Take a look at this reddit thread as well:

\n\n

   Tensorflow basic RNN example with 'variable length' sequences

\n", "system": "" }, { "instruction": "How can I implement a custom RNN (specifically an ESN) in Tensorflow?", "input": "", "output": "

To give a quick summary:

\n\n

Look in the TensorFlow source code under python/ops/rnn_cell.py to see how to subclass RNNCell. It's usually like this:

\n\n
class MyRNNCell(RNNCell):\n  def __init__(...):\n\n  @property\n  def output_size(self):\n  ...\n\n  @property\n  def state_size(self):\n  ...\n\n  def __call__(self, input_, state, name=None):\n     ... your per-step iteration here ...\n
\n", "system": "" }, { "instruction": "Issue feeding a list into feed_dict in TensorFlow", "input": "", "output": "

There are two issues that are causing problems here:

\n\n

The first issue is that the Session.run() call only accepts a small number of types as the keys of the feed_dict. In particular, lists of tensors are not supported as keys, so you have to put each tensor as a separate key.* One convenient way to do this is using a dictionary comprehension:

\n\n
inputs = [tf.placeholder(...), ...]\ndata = [np.array(...), ...]\nsess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})\n
\n\n

The second issue is that the 10 * [tf.placeholder(...)] syntax in Python creates a list with ten elements, where each element is the same tensor object (i.e. has the same name property, the same id property, and is reference-identical if you compare two elements from the list using inputs[i] is inputs[j]). This explains why, when you tried to create a dictionary using the list elements as keys, you ended up with a dictionary with a single element - because all of the list elements were identical.

\n\n
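The aliasing behaviour is plain Python, so it can be demonstrated without TensorFlow, using a stand-in class for the tensor objects:

```python
class FakeTensor:
    pass  # stand-in for a placeholder tensor object

aliased = 10 * [FakeTensor()]                 # ONE object, repeated ten times
distinct = [FakeTensor() for _ in range(10)]  # ten different objects

assert all(t is aliased[0] for t in aliased)
assert len({id(t) for t in distinct}) == 10

# A dict keyed on the aliased list collapses to a single entry,
# which is exactly what happened with the feed_dict:
assert len({t: None for t in aliased}) == 1
```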

To create 10 different placeholder tensors, as you intended, you should instead do the following:

\n\n
inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size))\n          for _ in xrange(10)]\n
\n\n

If you print the elements of this list, you'll see that each element is a tensor with a different name.

\n\n
\n\n

EDIT: * You can now pass tuples as the keys of a feed_dict, because these may be used as dictionary keys.

\n", "system": "" }, { "instruction": "How to get the accuracy per epoch or step for the huggingface.transformers Trainer?", "input": "", "output": "

You can load the accuracy metric and make it work with your compute_metrics function. As an example, it would be like:

\n
from datasets import load_metric\nmetric = load_metric('accuracy')\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    predictions = np.argmax(predictions, axis=1)\n    return metric.compute(predictions=predictions, references=labels)\n
\n

This example of compute_metrics function is based on the Hugging Face's text classification tutorial. It worked in my tests.

\n", "system": "" }, { "instruction": "Tf 2.0 : RuntimeError: GradientTape.gradient can only be called once on non-persistent tapes", "input": "", "output": "

From the documentation of GradientTape:

\n
\n

By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected.

\n
\n

A persistent gradient tape can be created with with tf.GradientTape(persistent=True) as tape, and can/should be manually deleted with del tape (credits for this to @zwep and @Crispy13).

\n", "system": "" }, { "instruction": "How to install latest cuDNN to conda?", "input": "", "output": "
    \n
  1. conda update --force conda
  2. \n
  3. conda update conda
  4. \n
  5. conda install -c anaconda cudnn
  6. \n
  7. conda list cudnn
  8. \n
\n", "system": "" }, { "instruction": "Does bias in the convolutional layer really make a difference to the test accuracy?", "input": "", "output": "
\n

Biases are tuned alongside weights by learning algorithms such as\ngradient descent. Biases differ from weights in that they are\nindependent of the output from previous layers. Conceptually, bias is\ncaused by input from a neuron with a fixed activation of 1, and so is\nupdated by subtracting just the product of the delta value and the\nlearning rate.

\n
\n

In a large model, removing the bias inputs makes very little difference because each node can make a bias node out of the average activation of all of its inputs, which by the law of large numbers will be roughly normal. At the first layer, the ability for this to happen depends on your input distribution. On a small network, of course you need a bias input, but on a large network, removing it makes almost no difference.

\n

Although in a large network it makes little difference, it still depends on the network architecture. For instance, in LSTMs:

\n
\n

Most applications of LSTMs simply initialize the LSTMs with small\nrandom weights which works well on many problems. But this\ninitialization effectively sets the forget gate to 0.5. This\nintroduces a vanishing gradient with a factor of 0.5 per timestep,\nwhich can cause problems whenever the long term dependencies are\nparticularly severe. This problem is addressed by simply initializing the\nforget gate's bias to a large value such as 1 or 2. By doing so, the\nforget gate will be initialized to a value that is close to 1,\nenabling gradient flow.

\n
\n

See also:

\n\n", "system": "" }, { "instruction": "Reproducible results in Tensorflow with tf.set_random_seed", "input": "", "output": "

In tensorflow, a random operation relies on two different seeds: a global seed, set by tf.set_random_seed, and an operation seed, provided as an argument to the operation. You will find more details on how they relate in the docs.

\n\n

You have a different seed for each random op because each random op maintains its own internal state for pseudo-random number generation. The reason for having each random generator maintain its own state is to be robust to change: if they shared the same state, then adding a new random generator somewhere in your graph would change the values produced by all the other generators, defeating the purpose of using a seed.

\n\n

Now, why do we have this dual system of global and per-op seeds? Well, actually the global seed is not necessary. It is there for convenience: it allows you to set all random op seeds to a different and deterministic (if unknown) value at once, without having to go exhaustively through all of them.

\n\n

Now when a global seed is set but not the op seed, according to the docs,

\n\n
\n

The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.

\n
\n\n

To be more precise, the seed that is provided is the id of the last operation that was created in the current graph. Consequently, globally seeded random operations are extremely sensitive to changes in the graph, in particular to operations created before them.

\n\n

For example,

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngenerate = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.96046877\n
\n\n

Now if we create a node before, the result changes:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ntf.zeros(()) # new op added before \ngenerate = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.29252338\n
\n\n

If a node is created after it, however, the op seed is not affected:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngenerate = tf.random_uniform(())\ntf.zeros(()) # new op added after\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.96046877\n
\n\n

Obviously, as in your case, if you generate several operations, they will have different seeds:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngen1 = tf.random_uniform(())\ngen2 = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(gen1.eval())\n  print(gen2.eval())\n  # 0.96046877\n  # 0.85591054\n
\n\n

As a curiosity, and to validate the fact that seeds are simply the last used id in the graph, you could align the seed of gen2 to gen1 with

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngen1 = tf.random_uniform(())\n# 4 operations seems to be created after seed has been picked\nseed = tf.get_default_graph()._last_id - 4\ngen2 = tf.random_uniform((), seed=seed)\nwith tf.Session() as sess:\n  print(gen1.eval())\n  print(gen2.eval())\n  # 0.96046877\n  # 0.96046877\n
\n\n

Obviously though, this should not pass code review.

\n", "system": "" }, { "instruction": "RuntimeError: main thread is not in main loop with Matplotlib and Flask", "input": "", "output": "

I was in the same situation, Flask with Matplotlib combo.\nWhat worked for me was to specify Agg as the Matplotlib backend.

\n\n
import matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\n# Your code here\n
\n\n

You can refer to Matplotlib documentation (Matplotlib in a web application server) for the details.

\n", "system": "" }, { "instruction": "Illegal instruction(core dumped) tensorflow", "input": "", "output": "

I had the same problem and had to downgrade tensorflow to 1.5.0:

\n\n
pip uninstall tensorflow\npip install tensorflow==1.5.0\n
\n\n

Edit: As @Tobsta points out in the comments, the other option is to compile the binaries from source. The precompiled binaries of versions >1.5 use AVX instructions that are not supported by older CPUs.

\n", "system": "" }, { "instruction": "What is the difference between tensors and sparse tensors?", "input": "", "output": "

Matthew did a great job, but I would love to give an example to shed more light on sparse tensors.

\n\n

If a tensor has lots of values that are zero, it can be called sparse.

\n\n

Let's consider a sparse 1-D tensor:

\n\n
[0, 7, 0, 0, 8, 0, 0, 0, 0]\n
\n\n

A sparse representation of the same tensor will focus only on the non-zero values

\n\n
values = [7,8]\n
\n\n

We also have to remember where those values occur, via their indices:

\n\n
indices = [1,4]\n
\n\n

The one-dimensional indices form will work with some methods, for this one-dimensional example, but in general indices have multiple dimensions, so it will be more consistent (and work everywhere) to represent indices like this:

\n\n
indices = [[1], [4]]\n
\n\n

With values and indices, we don't have quite enough information yet. How many zeros are there? To capture that, we also record the dense shape of the tensor.

\n\n
 dense_shape = [9]\n
\n\n

These three things together, values, indices, and dense_shape, are a sparse representation of the tensor

\n\n
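To see that these three pieces fully determine the tensor, here is a small NumPy sketch that rebuilds the dense form from them:

```python
import numpy as np

values = [7, 8]
indices = [[1], [4]]
dense_shape = [9]

# Start from an all-zero tensor of the dense shape, then scatter
# each value back to its recorded index.
dense = np.zeros(dense_shape, dtype=int)
for idx, v in zip(indices, values):
    dense[tuple(idx)] = v

print(dense)  # [0 7 0 0 8 0 0 0 0]
```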

In tensorflow 2.0 it can be implemented as

\n\n
x = tf.SparseTensor(values=[7,8],indices=[[1],[4]],dense_shape=[9])\nx\n#o/p: <tensorflow.python.framework.sparse_tensor.SparseTensor at 0x7ff04a58c4a8>\n\nprint(x.values)\nprint(x.dense_shape)\nprint(x.indices)\n#o/p: \ntf.Tensor([7 8], shape=(2,), dtype=int32)\ntf.Tensor([9], shape=(1,), dtype=int64)\ntf.Tensor(\n[[1]\n [4]], shape=(2, 1), dtype=int64)\n
\n\n

EDITED to correct indices as pointed out in the comments.

\n", "system": "" }, { "instruction": "Getting around tf.argmax which is not differentiable", "input": "", "output": "

As aidan suggested, it's just a softargmax stretched to the limits by beta. We can use tf.nn.softmax to get around the numerical issues:

\n\n
def softargmax(x, beta=1e10):\n  x = tf.convert_to_tensor(x)\n  x_range = tf.range(x.shape.as_list()[-1], dtype=x.dtype)\n  return tf.reduce_sum(tf.nn.softmax(x*beta) * x_range, axis=-1)\n
\n", "system": "" }, { "instruction": "replicate a row tensor using tf.tile?", "input": "", "output": "
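To see why this works, here is the same idea in plain NumPy (an illustration, not the TensorFlow code above): with a large beta the softmax weights collapse onto the maximum, so the weighted sum of indices approaches the argmax while remaining differentiable.

```python
import numpy as np

def softargmax_np(x, beta=100.0):
    x = np.asarray(x, dtype=float)
    w = np.exp(beta * (x - x.max()))  # subtract max for numerical stability
    w /= w.sum()                      # softmax weights, almost one-hot
    return float(np.sum(w * np.arange(len(x))))

result = softargmax_np([0.1, 0.5, 0.2])  # ~1.0, the index of the maximum
```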

Take the following: vec is a vector, and multiply is your m, the number of times to repeat the vector. tf.tile is applied to the vector and then, using tf.reshape, the result is reshaped into the desired structure.

\n\n
import tensorflow as tf\n\nvec = tf.constant([1, 2, 3, 4])\nmultiply = tf.constant([3])\n\nmatrix = tf.reshape(tf.tile(vec, multiply), [ multiply[0], tf.shape(vec)[0]])\nwith tf.Session() as sess:\n    print(sess.run([matrix]))\n
\n\n

This results in:

\n\n
[array([[1, 2, 3, 4],\n       [1, 2, 3, 4],\n       [1, 2, 3, 4]], dtype=int32)]\n
\n", "system": "" }, { "instruction": "tensorflow einsum vs. matmul vs. tensordot", "input": "", "output": "
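For comparison, the same tile-then-reshape pattern in NumPy behaves identically:

```python
import numpy as np

vec = np.array([1, 2, 3, 4])
m = 3  # number of times to repeat the row

# Tile flattens the repeats into one long vector, then reshape
# stacks them into m identical rows.
matrix = np.tile(vec, m).reshape(m, vec.shape[0])
print(matrix)
# [[1 2 3 4]
#  [1 2 3 4]
#  [1 2 3 4]]
```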

Both tf.tensordot() and tf.einsum() are syntactic sugar that wrap one or more invocations of tf.matmul() (although in some special cases tf.einsum() can reduce to the simpler elementwise tf.multiply()).

\n

In the limit, I'd expect all three functions to have equivalent performance for the same computation. However, for smaller matrices, it may be more efficient to use tf.matmul() directly, because it would yield a simpler TensorFlow graph with fewer operations, and hence the pre-operation invocation costs will be lower.

\n", "system": "" }, { "instruction": "How to get the type of a Tensor?", "input": "", "output": "
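The equivalence of the three APIs for a plain matrix product is easy to check with their NumPy counterparts (the TensorFlow versions behave analogously for this case):

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)

r_matmul = A @ B                          # plain matrix multiply
r_tensordot = np.tensordot(A, B, axes=1)  # contract last axis of A with first of B
r_einsum = np.einsum('ij,jk->ik', A, B)   # explicit index notation

assert np.allclose(r_matmul, r_tensordot)
assert np.allclose(r_matmul, r_einsum)
```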

You can use get_shape() to get the shape of a tensorflow variable.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.get_shape()\n(256, 100)\n
\n\n

You can use dtype property to get the type of a tensorflow variable.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.dtype\n<dtype: 'float32_ref'>\n
\n\n

You can use as_numpy_dtype property of dtype to convert from tf.dtype to numpy dtype.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.dtype.as_numpy_dtype\n<class 'numpy.float32'>\n
\n", "system": "" }, { "instruction": "How is tf.summary.tensor_summary meant to be used?", "input": "", "output": "

I cannot get it to work either. It seems like that feature is still under development. See this video from the TensorFlow Dev Summit that states that the tensor_summary is still under development (starting at 9:17): https://youtu.be/eBbEDRsCmv4?t=9m17s. It will probably be better defined and examples should be provided in the future.

\n", "system": "" }, { "instruction": "Split tensor into training and test sets", "input": "", "output": "

As elham mentioned, you can use scikit-learn to do this easily. scikit-learn is an open source library for machine learning. There are tons of tools for data preparation including the model_selection module, which handles comparing, validating and choosing parameters.

\n\n

The model_selection.train_test_split() method is specifically designed to split your data into train and test sets randomly and by percentage.

\n\n
X_train, X_test, y_train, y_test = train_test_split(features,\n                                                    labels,\n                                                    test_size=0.33,\n                                                    random_state=42)\n
\n\n

test_size is the fraction of the data to reserve for testing, and random_state seeds the random sampling.

\n\n

I typically use this to provide train and validation data sets, and keep true test data separately. You could just run train_test_split twice to do this as well. I.e. split the data into (Train + Validation) and Test, then split Train + Validation into two separate tensors.

\n", "system": "" }, { "instruction": "Understanding `tf.nn.nce_loss()` in tensorflow", "input": "", "output": "
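The two-stage split mentioned above can be sketched like this (split is a hypothetical NumPy stand-in for model_selection.train_test_split, so the example stays dependency-free):

```python
import numpy as np

def split(X, y, test_size, seed=42):
    # Shuffle indices, then carve off the requested fraction for testing.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(round(len(X) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = np.arange(10)

# First keep a true test set aside, then split the rest into train/validation.
X_tv, X_test, y_tv, y_test = split(X, y, test_size=0.2)
X_train, X_val, y_train, y_val = split(X_tv, y_tv, test_size=0.25)
# sizes: 6 train, 2 validation, 2 test
```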

Let's look at the relevant code in the word2vec example (examples/tutorials/word2vec).

\n
embeddings = tf.Variable(\n    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))\nembed = tf.nn.embedding_lookup(embeddings, train_inputs)\n
\n

These two lines create the embedding representations. embeddings is a matrix where each row represents a word vector. embedding_lookup is a quick way to get the vectors corresponding to train_inputs. In the word2vec example, train_inputs consists of int32 numbers representing the ids of target words. Basically, it plays the role of the hidden layer features.

\n
# Construct the variables for the NCE loss\nnce_weights = tf.Variable(\n    tf.truncated_normal([vocabulary_size, embedding_size],\n                        stddev=1.0 / math.sqrt(embedding_size)))\nnce_biases = tf.Variable(tf.zeros([vocabulary_size]))\n
\n

These two lines create the parameters. They will be updated by the optimizer during training. We can use tf.matmul(embed, tf.transpose(nce_weights)) + nce_biases to get the final output score. In other words, the last inner-product layer in classification can be replaced by it.

\n
loss = tf.reduce_mean(\n      tf.nn.nce_loss(weights=nce_weights,     # [vocab_size, embed_size]\n                   biases=nce_biases,         # [vocab_size]\n                   labels=train_labels,       # [bs, 1]\n                   inputs=embed,              # [bs, embed_size]\n                   num_sampled=num_sampled, \n                   num_classes=vocabulary_size))\n
\n

These lines create the nce loss; @garej has given a very good explanation. num_sampled refers to the number of negative samples in the nce algorithm.

\n
\n

To illustrate the usage of nce, we can apply it in the mnist example (examples/tutorials/mnist/mnist_deep.py) with the following 2 steps:

\n

1. Replace embed with the hidden layer output. The dimension of the hidden layer is 1024 and num_output is 10. The minimum value of num_sampled is 1. Remember to remove the last inner-product layer in deepnn().

\n
y_conv, keep_prob = deepnn(x)                                            \n                                                                           \nnum_sampled = 1                                                          \nvocabulary_size = 10                                                     \nembedding_size = 1024                                                    \nwith tf.device('/cpu:0'):                                                \n  embed = y_conv                                                         \n  # Construct the variables for the NCE loss                             \n  nce_weights = tf.Variable(                                             \n      tf.truncated_normal([vocabulary_size, embedding_size],             \n                          stddev=1.0 / math.sqrt(embedding_size)))       \n  nce_biases = tf.Variable(tf.zeros([vocabulary_size])) \n
\n

2. Create the loss and compute the output. After computing the output, we can use it to calculate accuracy. Note that the label here is not a one-hot vector as used in softmax. Labels are the original labels of the training samples.

\n
loss = tf.reduce_mean(                                   \n    tf.nn.nce_loss(weights=nce_weights,                           \n                   biases=nce_biases,                             \n                   labels=y_idx,                                  \n                   inputs=embed,                                  \n                   num_sampled=num_sampled,                       \n                   num_classes=vocabulary_size))                  \n                                                                    \noutput = tf.matmul(y_conv, tf.transpose(nce_weights)) + nce_biases\ncorrect_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y_, 1))\n
\n

When we set num_sampled=1, the val accuracy will end up at around 98.8%. And if we set num_sampled=9, we can get almost the same val accuracy as trained by softmax. But note that nce is different from softmax.

\n

Full code of training mnist by nce can be found here. Hope it is helpful.

\n", "system": "" }, { "instruction": "How to download previous version of tensorflow?", "input": "", "output": "

It works for me, since I have 1.6

\n
pip install tensorflow==1.5\n
\n", "system": "" }, { "instruction": "How-to run TensorFlow on multiple core and threads", "input": "", "output": "

According to Tensorflow:

\n\n
\n

The two configurations listed below are used to optimize CPU performance by \n adjusting the thread pools.

\n \n \n \n

These configurations are set via the tf.ConfigProto and passed to\n tf.Session in the config attribute as shown in the snippet below. For both\n configuration options, if they are unset or set to 0, they will default to the\n number of logical CPU cores. Testing has shown that the default is effective\n for systems ranging from one CPU with 4 cores to multiple CPUs with 70+\n combined logical cores. A common alternative optimization is to set the number\n of threads in both pools equal to the number of physical cores rather than\n logical cores.

\n\n
config = tf.ConfigProto()\nconfig.intra_op_parallelism_threads = 44\nconfig.inter_op_parallelism_threads = 44\ntf.session(config=config)\n
\n
\n\n
\n\n
\n

In versions of TensorFlow before 1.2, It is recommended using multi-threaded,\n queue-based input pipelines for performance. Beginning with TensorFlow 1.4,\n however, It is recommended using the tf.data module instead.

\n
\n\n
\n\n

Yes, in Linux, you can check your CPU usage with top and press 1 to show the usage per CPU. Note: the percentage shown depends on the Irix/Solaris mode.

\n", "system": "" }, { "instruction": "For what reason Convolution 1x1 is used in deep neural networks?", "input": "", "output": "

You can think about 1x1xD convolution as a dimensionality reduction technique when it's placed somewhere into a network.

\n\n

If you have an input volume of 100x100x512 and you convolve it with a set of D filters, each one of size 1x1x512, you reduce the number of features from 512 to D.\nThe output volume is, therefore, 100x100xD.

\n\n

As you can see, this (1x1x512)xD convolution is mathematically equivalent to a fully connected layer. The main difference is that while an FC layer requires the input to have a fixed size, the convolutional layer can accept as input any volume with spatial extent greater than or equal to 100x100.

\n\n
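The FC-equivalence can be verified numerically with small stand-in sizes (a NumPy sketch, not TensorFlow code; the dimensions are arbitrary):

```python
import numpy as np

H, W, C, D = 5, 5, 8, 3          # small stand-ins for 100x100x512 and D filters
x = np.random.rand(H, W, C)
filters = np.random.rand(C, D)   # D filters, each of size 1x1xC

# A 1x1xC convolution with D filters is just a matmul (i.e. a fully
# connected layer) applied independently at every spatial location:
out = (x.reshape(-1, C) @ filters).reshape(H, W, D)

# The same computation with the spatial axes kept explicit:
out_einsum = np.einsum('hwc,cd->hwd', x, filters)
assert out.shape == (H, W, D)
assert np.allclose(out, out_einsum)
```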

A 1x1xD convolution can substitute any fully connected layer because of this equivalence.

\n\n

\nIn addition, 1x1xD convolutions not only reduce the features in input to the next layer, but also introduce new parameters and new non-linearity into the network, which will help to increase model accuracy.

\n\n

When the 1x1xD convolution is placed at the end of a classification network, it acts exactly as an FC layer, but instead of thinking about it as a dimensionality reduction technique it's more intuitive to think about it as a layer that will output a tensor with shape WxHxnum_classes.

\n\n

The spatial extent of the output tensor (identified by W and H) is dynamic and is determined by the locations of the input image that the network analyzed.

\n\n

If the network has been defined with an input of 200x200x3 and we give it an input image of this size, the output will be a map with W = H = 1 and depth = num_classes.\nBut if the input image has a spatial extent greater than 200x200, then the convolutional network will analyze different locations of the input image (just like a standard convolution does) and will produce a tensor with W > 1 and H > 1.\nThis is not possible with an FC layer, which constrains the network to accept fixed-size input and produce fixed-size output.

\n", "system": "" }, { "instruction": "What does it mean to unroll a RNN dynamically?", "input": "", "output": "

From the documentation I understand that what they are saying is that the sequence_length parameter of the rnn method affects performance because, when set, it performs dynamic computation and stops early.

\n

For example, if the largest input sequence has a length of 50 and the other sequences are shorter, it is better to set the sequence_length for each sequence, so that the computation for each sequence stops when that sequence ends, instead of processing the padding zeros up to the 50th timestep. However, if sequence_length is not provided, every sequence is considered to have the same length, so the zeros used for padding are treated as normal items in the sequence.

\n
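A toy sketch of the effect of sequence_length (plain Python, not TensorFlow's implementation): when the true length is given, the step function stops early and never touches the padding, whereas without it the padded zeros keep updating the state.

```python
def rnn_run(xs, seq_length=None):
    # Toy "RNN" step: h <- 0.5*h + x.  Extra padded steps keep
    # decaying the state, so without seq_length they DO change the result.
    h = 0.0
    steps = seq_length if seq_length is not None else len(xs)
    for t in range(steps):
        h = 0.5 * h + xs[t]
    return h

padded = [1, 2, 3, 0, 0]  # real length 3, padded to 5

with_len = rnn_run(padded, seq_length=3)  # stops at the true end: 4.25
without = rnn_run(padded)                 # runs over the padding: 1.0625
assert with_len == rnn_run([1, 2, 3])
assert with_len != without
```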

This does not mean that dynamic_rnn is less performant; the documentation says that the sequence_length parameter will not affect performance, because the computation is already dynamic.

\n

Also according to this post about RNNs in Tensorflow:

\n
\n

Internally, tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you\u2019re unable to pass in longer sequences (> 200) than you\u2019ve originally specified.

\n

tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size. What about performance? You may think the static rnn is faster than its dynamic counterpart because it pre-builds the graph. In my experience that\u2019s not the case.

\n

In short, just use tf.nn.dynamic_rnn. There is no benefit to tf.nn.rnn and I wouldn\u2019t be surprised if it was deprecated in the future.

\n
\n

dynamic_rnn is even faster (or at least as fast), so he suggests using dynamic_rnn anyway.

\n", "system": "" }, { "instruction": "Does TensorFlow view all CPUs of one machine as ONE device?", "input": "", "output": "

By default all CPUs available to the process are aggregated under cpu:0 device.

\n\n

There's an answer by mrry here showing how to create logical devices like /cpu:1, /cpu:2.

\n\n

There doesn't seem to be working functionality to pin logical devices to specific physical cores or be able to use NUMA nodes in tensorflow.

\n\n

A possible work-around is to use distributed TensorFlow with multiple processes on one machine and use taskset on Linux to pin specific processes to specific cores.

\n", "system": "" }, { "instruction": "TensorFlow - introducing both L2 regularization and dropout into the network. Does it makes any sense?", "input": "", "output": "

Ok, after some additional effort I managed to solve it and introduce both L2 and dropout into my network; the code is below. I got a slight improvement over the same network without dropout (with L2 in place). I am still not sure if it is really worth the effort to introduce both of them, L2 and dropout, but at least it works and slightly improves the results.

\n\n
#ANN with introduced dropout\n#This time we still use the L2 but restrict training dataset\n#to be extremely small\n\n#get just first 500 of examples, so that our ANN can memorize whole dataset\ntrain_dataset_2 = train_dataset[:500, :]\ntrain_labels_2 = train_labels[:500]\n\n#batch size for SGD and beta parameter for L2 loss\nbatch_size = 128\nbeta = 0.001\n\n#that's how many hidden neurons we want\nnum_hidden_neurons = 1024\n\n#building tensorflow graph\ngraph = tf.Graph()\nwith graph.as_default():\n  # Input data. For the training data, we use a placeholder that will be fed\n  # at run time with a training minibatch.\n  tf_train_dataset = tf.placeholder(tf.float32,\n                                    shape=(batch_size, image_size * image_size))\n  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n  tf_valid_dataset = tf.constant(valid_dataset)\n  tf_test_dataset = tf.constant(test_dataset)\n\n  #now let's build our new hidden layer\n  #its weights\n  hidden_weights = tf.Variable(\n    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))\n  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))\n\n  #now the layer itself. It multiplies data by weights, adds biases\n  #and takes ReLU over result\n  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)\n\n  #add dropout on hidden layer\n  #we pick up the probabylity of switching off the activation\n  #and perform the switch off of the activations\n  keep_prob = tf.placeholder(\"float\")\n  hidden_layer_drop = tf.nn.dropout(hidden_layer, keep_prob)  \n\n  #time to go for output linear layer\n  #out weights connect hidden neurons to output labels\n  #biases are added to output labels  \n  out_weights = tf.Variable(\n    tf.truncated_normal([num_hidden_neurons, num_labels]))  \n\n  out_biases = tf.Variable(tf.zeros([num_labels]))  \n\n  #compute output\n  #notice that upon training we use the switched off activations\n  #i.e. 
the variaction of hidden_layer with the dropout active\n  out_layer = tf.matmul(hidden_layer_drop,out_weights) + out_biases\n  #our real output is a softmax of prior result\n  #and we also compute its cross-entropy to get our loss\n  #Notice - we introduce our L2 here\n  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n    out_layer, tf_train_labels) +\n    beta*tf.nn.l2_loss(hidden_weights) +\n    beta*tf.nn.l2_loss(hidden_biases) +\n    beta*tf.nn.l2_loss(out_weights) +\n    beta*tf.nn.l2_loss(out_biases)))\n\n  #now we just minimize this loss to actually train the network\n  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n\n  #nice, now let's calculate the predictions on each dataset for evaluating the\n  #performance so far\n  # Predictions for the training, validation, and test data.\n  train_prediction = tf.nn.softmax(out_layer)\n  valid_relu = tf.nn.relu(  tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)\n  valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) \n\n  test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases)\n  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)\n\n\n\n#now is the actual training on the ANN we built\n#we will run it for some number of steps and evaluate the progress after \n#every 500 steps\n\n#number of steps we will train our ANN\nnum_steps = 3001\n\n#actual training\nwith tf.Session(graph=graph) as session:\n  tf.initialize_all_variables().run()\n  print(\"Initialized\")\n  for step in range(num_steps):\n    # Pick an offset within the training data, which has been randomized.\n    # Note: we could use better randomization across epochs.\n    offset = (step * batch_size) % (train_labels_2.shape[0] - batch_size)\n    # Generate a minibatch.\n    batch_data = train_dataset_2[offset:(offset + batch_size), :]\n    batch_labels = train_labels_2[offset:(offset + batch_size), :]\n    # Prepare a 
dictionary telling the session where to feed the minibatch.\n    # The key of the dictionary is the placeholder node of the graph to be fed,\n    # and the value is the numpy array to feed to it.\n    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5}\n    _, l, predictions = session.run(\n      [optimizer, loss, train_prediction], feed_dict=feed_dict)\n    if (step % 500 == 0):\n      print(\"Minibatch loss at step %d: %f\" % (step, l))\n      print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n      print(\"Validation accuracy: %.1f%%\" % accuracy(\n        valid_prediction.eval(), valid_labels))\n      print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n
\n", "system": "" }, { "instruction": "Basic 1d convolution in tensorflow", "input": "", "output": "

I am sorry to say it, but your first code was almost right. You just inverted x and phi in tf.nn.conv2d:

\n\n
g = tf.Graph()\nwith g.as_default():\n    # data shape is \"[batch, in_height, in_width, in_channels]\",\n    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name=\"x\")\n    # filter shape is \"[filter_height, filter_width, in_channels, out_channels]\"\n    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name=\"phi\")\n    conv = tf.nn.conv2d(\n        x,\n        phi,\n        strides=[1, 1, 1, 1],\n        padding=\"SAME\",\n        name=\"conv\")\n
\n\n
\n\n

Update: TensorFlow now supports 1D convolution since version r0.11, using tf.nn.conv1d. I previously made a guide to use them in the stackoverflow documentation (now extinct) that I'm pasting here:

\n\n
\n\n

Guide to 1D convolution

\n\n

Consider a basic example with an input of length 10, and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16].

\n\n
batch_size = 32\nx = tf.placeholder(tf.float32, [batch_size, 10, 16])\n
\n\n

We then create a filter with width 3, and we take 16 channels as input, and output also 16 channels.

\n\n
filter = tf.zeros([3, 16, 16])  # these should be real values, not 0\n
\n\n
\n\n

Finally we apply tf.nn.conv1d with a stride and a padding:\n- stride: integer s\n- padding: this works like in 2D, you can choose between SAME and VALID. SAME will output the same length as the input, while VALID will not add zero padding.

\n\n

For our example we take a stride of 2, and a valid padding.\n

\n\n
output = tf.nn.conv1d(x, filter, stride=2, padding=\"VALID\")\n
\n\n

The output shape should be [batch_size, 4, 16].
\nWith padding=\"SAME\", we would have had an output shape of [batch_size, 5, 16].

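The shape claims above follow from the standard output-length rules for convolutions. Here is a quick plain-Python check (no TensorFlow needed; the helper function name is ours):

```python
import math

def conv1d_out_len(in_len, filter_width, stride, padding):
    # Output-length rules used by TF-style convolutions:
    #   VALID: ceil((in_len - filter_width + 1) / stride)
    #   SAME:  ceil(in_len / stride)
    if padding == "VALID":
        return math.ceil((in_len - filter_width + 1) / stride)
    return math.ceil(in_len / stride)

print(conv1d_out_len(10, 3, 2, "VALID"))  # 4
print(conv1d_out_len(10, 3, 2, "SAME"))   # 5
```

This matches the [batch_size, 4, 16] and [batch_size, 5, 16] shapes stated above.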
\n", "system": "" }, { "instruction": "Which Google Cloud Platform service is the easiest for running Tensorflow?", "input": "", "output": "

Summing up the answers:

\n\n\n\n

Instructions to manually run TensorFlow on Compute Engine:

\n\n
    \n
  1. Create a project\n
  2. Open the Cloud Shell (a button at the top)\n
  3. List machine types: gcloud compute machine-types list. You can change the machine type I used in the next command.\n
  4. Create an instance:\n
\n\n\n\n
gcloud compute instances create tf \\\n  --image container-vm \\\n  --zone europe-west1-c \\\n  --machine-type n1-standard-2\n
\n\n
    \n
  5. Run sudo docker run -d -p 8888:8888 --name tf b.gcr.io/tensorflow-udacity/assignments:0.5.0 (change the image name to the desired one)\n
  6. Find your instance in the dashboard and edit default network.\n
  7. Add a firewall rule to allow your IP as well as protocol and port tcp:8888.\n
  8. Find the External IP of the instance from the dashboard. Open IP:8888 on your browser. Done!\n
  9. When you are finished, delete the created cluster to avoid charges.\n
\n\n

This is how I did it and it worked. I am sure there is an easier way to do it.

\n\n

More Resources

\n\n

You might be interested to learn more about:

\n\n\n\n

Good to know

\n\n\n\n

Thanks to @user728291, @MattW, @CJCullen, and @zain-rizvi

\n", "system": "" }, { "instruction": "Siamese Neural Network in TensorFlow", "input": "", "output": "

Update with tf.layers

\n\n

If you use the tf.layers module to build your network, you can simply use the argument reuse=True for the second part of the Siamese network:

\n\n
x = tf.ones((1, 3))\ny1 = tf.layers.dense(x, 4, name='h1')\ny2 = tf.layers.dense(x, 4, name='h1', reuse=True)\n\n# y1 and y2 will evaluate to the same values\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\nprint(sess.run(y1))\nprint(sess.run(y2))  # both prints will return the same values\n
\n\n
\n\n

Old answer with tf.get_variable

\n\n

You can try using the function tf.get_variable(). (See the tutorial)

\n\n

Implement the first network using a variable scope with reuse=False:

\n\n
with tf.variable_scope('Inference', reuse=False):\n    weights_1 = tf.get_variable('weights', shape=[1, 1],\n                              initializer=...)\n    output_1 = weights_1 * input_1\n
\n\n

Then implement the second with the same code except using reuse=True

\n\n
with tf.variable_scope('Inference', reuse=True):\n    weights_2 = tf.get_variable('weights')\n    output_2 = weights_2 * input_2\n
\n\n

The first implementation will create and initialize every variable of the network, whereas the second implementation will use tf.get_variable() to get the same variables used in the first network. That way, variables will be shared.

\n\n

Then you just have to use whatever loss you want (e.g. you can use the L2 distance between the two siamese networks), and the gradients will backpropagate through both networks, updating the shared variables with the sum of the gradients.

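The sharing mechanism can be illustrated without TensorFlow at all. In this plain-Python sketch (names are illustrative), both branches of the siamese pair reference the same parameter object, so a single update is visible to both:

```python
# Plain-Python sketch of weight sharing (illustrative, not the TF API):
# both "towers" hold a reference to the SAME parameter object.
weights = {"w": 2.0}

def tower(x, params):
    # Stand-in for one branch of the siamese network
    return params["w"] * x

out1 = tower(3.0, weights)   # first branch
out2 = tower(5.0, weights)   # second branch, same weights

weights["w"] = 1.0           # a single update (e.g. one gradient step)...
print(tower(3.0, weights))   # ...is seen by both branches
```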
\n", "system": "" }, { "instruction": "What is the purpose of graph collections in TensorFlow?", "input": "", "output": "

Remember that under the hood, Tensorflow is a system for specifying and then executing computational data flow graphs. The graph collections are used as part of keeping track of the constructed graphs and how they must be executed. For example, when you create certain kinds of ops, such as tf.train.batch_join, the code that adds the op will also add some queue runners to the QUEUE_RUNNERS graph collection. Later, when you call start_queue_runners(), by default, it will look at the QUEUE_RUNNERS collection to know which runners to start.

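As a rough plain-Python sketch of that bookkeeping (illustrative only, not TensorFlow's actual implementation), a graph simply keeps named lists, and helpers like start_queue_runners() look things up under a well-known key:

```python
class Graph:
    """Minimal sketch of TF-style named collections (illustrative only)."""

    def __init__(self):
        self._collections = {}

    def add_to_collection(self, name, value):
        # Ops register side objects (e.g. queue runners) as they are built
        self._collections.setdefault(name, []).append(value)

    def get_collection(self, name):
        # Return a copy so callers cannot mutate internal state
        return list(self._collections.get(name, []))

g = Graph()
g.add_to_collection("queue_runners", "runner_for_batch_join")
g.add_to_collection("queue_runners", "runner_for_shuffle_batch")

# A helper like start_queue_runners() would consult the well-known key:
print(g.get_collection("queue_runners"))
```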
\n", "system": "" }, { "instruction": "TensorFlow Error found in Tutorial", "input": "", "output": "

I figured it out. As you see in the value error, it says No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess) so the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes.

\n\n
if i%100 == 0:\n        train_accuracy = accuracy.eval(session=sess, feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})\n
\n\n

And

\n\n
train_step.run(session=sess, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n
\n\n

Now the code is working fine.

\n", "system": "" }, { "instruction": "Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext", "input": "", "output": "

The issue has resolved after I uninstalled and re-installed pyspark using following commands:

\n
pip uninstall pyspark\npip install pyspark\n
\n", "system": "" }, { "instruction": "how to normalize input data for models in tensorflow", "input": "", "output": "

There are different ways of \"normalizing data\". Depending which one you have in mind, it may or may not be easy to implement in your case.

\n\n

1. Fixed normalization

\n\n

If you know the fixed range(s) of your values (e.g. feature #1 has values in [-5, 5], feature #2 has values in [0, 100], etc.), you could easily pre-process your feature tensor in parse_example(), e.g.:

\n\n
def normalize_fixed(x, current_range, normed_range):\n    current_min, current_max = tf.expand_dims(current_range[:, 0], 1), tf.expand_dims(current_range[:, 1], 1)\n    normed_min, normed_max = tf.expand_dims(normed_range[:, 0], 1), tf.expand_dims(normed_range[:, 1], 1)\n    x_normed = (x - current_min) / (current_max - current_min)\n    x_normed = x_normed * (normed_max - normed_min) + normed_min\n    return x_normed\n\ndef parse_example(line_batch, \n                  fixed_range=[[-5, 5], [0, 100], ...],\n                  normed_range=[[0, 1]]):\n    # ...\n    features = tf.transpose(features)\n    features = normalize_fixed(features, fixed_range, normed_range)\n    # ...\n
\n\n
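As a sanity check of the min-max mapping used in normalize_fixed, here is the same formula applied to scalars in plain Python (the helper name is ours):

```python
def normalize_fixed_scalar(x, current_min, current_max,
                           normed_min=0.0, normed_max=1.0):
    # Map x from [current_min, current_max] to [normed_min, normed_max]
    x01 = (x - current_min) / (current_max - current_min)
    return x01 * (normed_max - normed_min) + normed_min

print(normalize_fixed_scalar(0.0, -5.0, 5.0))     # 0.5  (midpoint of [-5, 5])
print(normalize_fixed_scalar(100.0, 0.0, 100.0))  # 1.0  (top of [0, 100])
```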

2. Per-sample normalization

\n\n

If your features are supposed to have approximately the same range of values, per-sample normalization could also be considered, i.e. applying normalization considering the features moments (mean, variance) for each sample:

\n\n
def normalize_with_moments(x, axes=[0, 1], epsilon=1e-8):\n    mean, variance = tf.nn.moments(x, axes=axes)\n    x_normed = (x - mean) / tf.sqrt(variance + epsilon) # epsilon to avoid dividing by zero\n    return x_normed\n\ndef parse_example(line_batch):\n    # ...\n    features = tf.transpose(features)\n    features = normalize_with_moments(features)\n    # ...\n
\n\n
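You can verify the moment-based formula in plain Python: after normalization, the output has (approximately) zero mean and unit variance:

```python
import math

def normalize_with_moments(xs, epsilon=1e-8):
    # Same formula as the TF version, for a flat list of floats
    mean = sum(xs) / len(xs)
    variance = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / math.sqrt(variance + epsilon) for v in xs]

out = normalize_with_moments([1.0, 2.0, 3.0, 4.0])
print(sum(out))  # ~0: the result is centered
```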

3. Batch normalization

\n\n

You could apply the same procedure over a complete batch instead of per-sample, which may make the process more stable:

\n\n
data_batch = normalize_with_moments(data_batch, axes=[1, 2])\n
\n\n

Similarly, you could use tf.nn.batch_normalization

\n\n

4. Dataset normalization

\n\n

Normalizing using the mean/variance computed over the whole dataset would be the trickiest, since, as you mentioned, it is a large, split one.

\n\n

tf.data.Dataset isn't really meant for such global computation. A solution would be to use whatever tools you have to pre-compute the dataset moments, then use this information for your TF pre-processing.

\n\n
\n\n

As mentioned by @MiniQuark, Tensorflow has a Transform library you could use to preprocess your data. Have a look at the Get Started, or for instance at the tft.scale_to_z_score() method for sample normalization.

\n", "system": "" }, { "instruction": "Why tensorflow uses channel-last ordering instead of row-major?", "input": "", "output": "

Here's the explanation:

\n\n

https://www.tensorflow.org/performance/performance_guide#use_nchw_image_data_format

\n\n
\n

Image data format refers to the representation of batches of images. TensorFlow supports NHWC (TensorFlow default) and NCHW (cuDNN default). N refers to the number of images in a batch, H refers to the number of pixels in the vertical dimension, W refers to the number of pixels in the horizontal dimension, and C refers to the channels (e.g. 1 for black and white, 3 for RGB, etc.) Although cuDNN can operate on both formats, it is faster to operate in its default format.

\n \n

The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.

\n \n

The very brief history of these two formats is that TensorFlow started by using NHWC because it was a little faster on CPUs. Then the TensorFlow team discovered that NCHW performs better when using the NVIDIA cuDNN library. The current recommendation is that users support both formats in their models. In the long term, we plan to rewrite graphs to make switching between the formats transparent.

\n
\n\n

Moreover, digging into the code we can see here that when the input is in the format NHWC, tensorflow converts it for you to NCHW.

\n\n
  if (data_format == FORMAT_NHWC) {\n    // Convert the input tensor from NHWC to NCHW.\n    TensorShape nchw_shape =\n        ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths);\n    if (in_depths > 1) {\n      Tensor transformed_input;\n      OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,\n                                             nchw_shape, &transformed_input));\n      functor::NHWCToNCHW<GPUDevice, T, 4>()(\n          ctx->eigen_device<GPUDevice>(),\n          const_cast<const Tensor&>(input).tensor<T, 4>(),\n          transformed_input.tensor<T, 4>());\n      input = transformed_input;\n    } else {\n      // If depth <= 1, then just reshape.\n      CHECK(input.CopyFrom(input, nchw_shape));\n    }\n  }\n
\n\n
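The layout conversion itself is just an index permutation. Here is a plain-Python sketch of NHWC to NCHW on nested lists (illustrative only, nothing like the optimized kernel above):

```python
def nhwc_to_nchw(x):
    # x is a nested list with shape [N][H][W][C]; returns shape [N][C][H][W]
    n, h, w, c = len(x), len(x[0]), len(x[0][0]), len(x[0][0][0])
    return [[[[x[i][y][z][k] for z in range(w)]
              for y in range(h)]
             for k in range(c)]
            for i in range(n)]

# One 2x2 "image" with 3 channels, values 0..11 laid out in NHWC order
x = [[[[0, 1, 2], [3, 4, 5]],
      [[6, 7, 8], [9, 10, 11]]]]
nchw = nhwc_to_nchw(x)
print(nchw[0][0])  # channel 0 as a 2x2 plane: [[0, 3], [6, 9]]
```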

You can specify the data format you want to use for every operation, but by default TensorFlow uses NHWC rather than NCHW. That is why even the TF developers still use NHWC: it avoids having to specify the format in every operation.

\n", "system": "" }, { "instruction": "Does TensorFlow job use multiple cores by default?", "input": "", "output": "

I found an existing answer to this question: all cores are wrapped in cpu:0, i.e., TensorFlow does indeed use multiple CPU cores by default.

\n", "system": "" }, { "instruction": "What is SYCL 1.2?", "input": "", "output": "

SYCL is a C++ abstraction layer for OpenCL. TensorFlow's experimental support for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler.

\n\n

As Yaroslav pointed out in his comment, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of ./configure asks about OpenCL support:

\n\n
Do you wish to build TensorFlow with OpenCL support? [y/N]\n
\n\n

If you answer N, you will not have to supply a SYCL path.

\n", "system": "" }, { "instruction": "Tensorflow TypeError: Fetch argument None has invalid type <type 'NoneType'>?", "input": "", "output": "

You are re-assigning the train_step variable to the second element of the result of sess.run() (which happens to be None). Hence, on the second iteration, train_step is None, which leads to the error.

\n\n

The fix is fortunately simple:

\n\n
for i in xrange(1, ITERATIONS):\n\n    # ...\n\n    # Discard the second element of the result.\n    numpy_state, _ = sess.run([final_state, train_step], feed_dict={\n        initial_state: numpy_state,\n        input_sequence: batch[0],\n        output_actual: batch[1]\n        })\n
\n", "system": "" }, { "instruction": "Why tensorflow uses channel-last ordering instead of row-major?", "input": "", "output": "

Here's the explanation:

\n\n

https://www.tensorflow.org/performance/performance_guide#use_nchw_image_data_format

\n\n
\n

Image data format refers to the representation of batches of images. TensorFlow supports NHWC (TensorFlow default) and NCHW (cuDNN default). N refers to the number of images in a batch, H refers to the number of pixels in the vertical dimension, W refers to the number of pixels in the horizontal dimension, and C refers to the channels (e.g. 1 for black and white, 3 for RGB, etc.) Although cuDNN can operate on both formats, it is faster to operate in its default format.

\n \n

The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.

\n \n

The very brief history of these two formats is that TensorFlow started by using NHWC because it was a little faster on CPUs. Then the TensorFlow team discovered that NCHW performs better when using the NVIDIA cuDNN library. The current recommendation is that users support both formats in their models. In the long term, we plan to rewrite graphs to make switching between the formats transparent.

\n
\n\n

Moreover, digging into the code we can see here that when the input is in the format NHWC, tensorflow converts it for you to NCHW.

\n\n
  if (data_format == FORMAT_NHWC) {\n    // Convert the input tensor from NHWC to NCHW.\n    TensorShape nchw_shape =\n        ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths);\n    if (in_depths > 1) {\n      Tensor transformed_input;\n      OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,\n                                             nchw_shape, &transformed_input));\n      functor::NHWCToNCHW<GPUDevice, T, 4>()(\n          ctx->eigen_device<GPUDevice>(),\n          const_cast<const Tensor&>(input).tensor<T, 4>(),\n          transformed_input.tensor<T, 4>());\n      input = transformed_input;\n    } else {\n      // If depth <= 1, then just reshape.\n      CHECK(input.CopyFrom(input, nchw_shape));\n    }\n  }\n
\n\n

You can specify the data format you want to use for every operation, but by default TensorFlow uses NHWC rather than NCHW. That is why even the TF developers still use NHWC: it avoids having to specify the format in every operation.

\n", "system": "" }, { "instruction": "SavedModel file does not exist when using Tensorflow hub", "input": "", "output": "

So, just deleting that folder (the TF-Hub module cache, which lives in /tmp/tfhub_modules by default, or wherever TFHUB_CACHE_DIR points) and running the hub.load() function again solves the issue

\n", "system": "" }, { "instruction": "TensorFlow 2.0: do you need a @tf.function decorator on top of each function?", "input": "", "output": "

@tf.function converts a Python function to its graph representation.

\n\n

The pattern to follow is to define the training step function, which is the most computationally intensive one, and decorate it with @tf.function.

\n\n

Usually, the code looks like:

\n\n
#model, loss, and optimizer defined previously\n\n@tf.function\ndef train_step(features, labels):\n    with tf.GradientTape() as tape:\n        predictions = model(features)\n        loss_value = loss(labels, predictions)\n    # differentiate the scalar loss_value (not the loss function object)\n    gradients = tape.gradient(loss_value, model.trainable_variables)\n    optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n    return loss_value\n\nfor features, labels in dataset:\n    lv = train_step(features, labels)\n    print(\"loss: \", lv)\n
\n", "system": "" }, { "instruction": "logits and labels must be broadcastable error in Tensorflow RNN", "input": "", "output": "

Make sure that the number of labels in the final classification layer is equal to the number of classes you have in your dataset. The error InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[1,2] labels_size=[1,24] shown in your question suggests that you have just two classes in your final classification layer while you actually need 24.

\n

In my case, I had 7 classes in my dataset, but I mistakenly used 4 labels in the final classification layer. Therefore, I had to change from

\n

tf.keras.layers.Dense(4, activation="softmax")

\n

to

\n

tf.keras.layers.Dense(7, activation="softmax")

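A cheap guard against this mismatch is to derive the class count from the data instead of hard-coding it. A plain-Python sketch (the label values here are hypothetical):

```python
# Hypothetical training labels; in practice, collect these from your dataset
labels = ["cat", "dog", "bird", "cat", "fish", "dog", "horse", "snake", "frog"]

# Count distinct classes rather than hard-coding the layer size
num_classes = len(set(labels))
print(num_classes)  # 7

# then use it in the final layer:
# tf.keras.layers.Dense(num_classes, activation="softmax")
```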
\n", "system": "" }, { "instruction": "tf.multiply vs tf.matmul to calculate the dot product", "input": "", "output": "

tf.multiply(X, Y) or the * operator does element-wise multiplication so that:

\n
[[1 2]    [[1 3]      [[1 6]\n [3 4]] .  [2 1]]  =   [6 4]]\n
\n

whereas tf.matmul does matrix multiplication so that:

\n
[[1 0]    [[1 3]      [[1 3]\n [0 1]] .  [2 1]]  =   [2 1]]\n
\n

using tf.matmul(X, X, transpose_b=True) means that you are calculating X . X^T where ^T indicates the transposing of the matrix and . is the matrix multiplication.

\n

tf.reduce_sum(_, axis=1) takes the sum along the 1st axis (counting from 0), which means you are summing the rows:

\n
tf.reduce_sum([[a, b], [c, d]], axis=1) = [a+b, c+d]\n
\n

This means that:

\n
tf.reduce_sum(tf.multiply(X, X), axis=1) = [X[1].X[1], ..., X[n].X[n]]\n
\n

so that is the one you want if you only want the norm of each row. On the other hand:

\n
tf.matmul(X, X, transpose_b=True) = [\n                                      [ X[1].X[1], X[1].X[2], ..., X[1].X[n] ], \n                                      [ X[2].X[1], ..., X[2].X[n] ],\n                                       ...\n                                      [ X[n].X[1], ..., X[n].X[n] ]\n                                   ]\n
\n

so that is what you need if you want the similarity between all pairs of rows.

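Both quantities can be checked in plain Python, with a small dot helper standing in for the TF ops (illustrative only):

```python
def dot(u, v):
    # Stand-in for tf.reduce_sum(tf.multiply(u, v)) on two vectors
    return sum(a * b for a, b in zip(u, v))

X = [[1.0, 2.0],
     [3.0, 4.0]]

# Row-wise squared norms: tf.reduce_sum(tf.multiply(X, X), axis=1)
row_norms_sq = [dot(row, row) for row in X]

# Gram matrix X . X^T: tf.matmul(X, X, transpose_b=True)
gram = [[dot(u, v) for v in X] for u in X]

print(row_norms_sq)  # [5.0, 25.0]
print(gram)          # [[5.0, 11.0], [11.0, 25.0]]
```

Note how the diagonal of the Gram matrix is exactly the row-wise squared norms.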
\n", "system": "" }, { "instruction": "How to import an saved Tensorflow model train using tf.estimator and predict on input data", "input": "", "output": "

I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.

\n\n

That particular example doesn't actually export a model, so let's do that (not needed for use case 1):

\n\n
def serving_input_receiver_fn():\n  \"\"\"Build the serving inputs.\"\"\"\n  # The outer dimension (None) allows us to batch up inputs for\n  # efficiency. However, it also means that if we want a prediction\n  # for a single instance, we'll need to wrap it in an outer list.\n  inputs = {\"x\": tf.placeholder(shape=[None, 4], dtype=tf.float32)}\n  return tf.estimator.export.ServingInputReceiver(inputs, inputs)\n\nexport_dir = classifier.export_savedmodel(\n    export_dir_base=\"/path/to/model\",\n    serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a \"canned\" estimator (such as DNNClassifier). For a workaround, see the \"Appendix: Workaround\" section.

\n\n

The code below references export_dir (return value from the export step) to emphasize that it is not \"/path/to/model\", but rather, a subdirectory of that directory whose name is a timestamp.

\n\n
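Since the export lands in a timestamp-named subdirectory, a small helper like this one (ours, not part of the TF API) can locate the newest export when you only kept export_dir_base:

```python
import os
import tempfile

def latest_export_dir(export_dir_base):
    # export_savedmodel() writes into a subdirectory named with a Unix
    # timestamp; pick the most recent one.
    subdirs = [d for d in os.listdir(export_dir_base)
               if d.isdigit() and os.path.isdir(os.path.join(export_dir_base, d))]
    return os.path.join(export_dir_base, max(subdirs, key=int))

# Demo with fake timestamped export directories:
base = tempfile.mkdtemp()
for ts in ("1500000000", "1500000100"):
    os.mkdir(os.path.join(base, ts))
print(latest_export_dir(base))  # ends with "1500000100"
```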

Use Case 1: Perform prediction in the same process as training

\n\n

This is a scikit-learn type of experience, and is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:

\n\n
classifier.train(input_fn=train_input_fn, steps=2000)\n# [...snip...]\npredictions = list(classifier.predict(input_fn=predict_input_fn))\npredicted_classes = [p[\"classes\"] for p in predictions]\n
\n\n

Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions

\n\n

Python Client

\n\n

Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:

\n\n
from tensorflow.contrib import predictor\n\npredict_fn = predictor.from_saved_model(export_dir)\npredictions = predict_fn(\n    {\"x\": [[6.4, 3.2, 4.5, 1.5],\n           [5.8, 3.1, 5.0, 1.7]]})\nprint(predictions['scores'])\n
\n\n

Java Client

\n\n
package dummy;\n\nimport java.nio.FloatBuffer;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.tensorflow.SavedModelBundle;\nimport org.tensorflow.Session;\nimport org.tensorflow.Tensor;\n\npublic class Client {\n\n  public static void main(String[] args) {\n    Session session = SavedModelBundle.load(args[0], \"serve\").session();\n\n    Tensor x =\n        Tensor.create(\n            new long[] {2, 4},\n            FloatBuffer.wrap(\n                new float[] {\n                  6.4f, 3.2f, 4.5f, 1.5f,\n                  5.8f, 3.1f, 5.0f, 1.7f\n                }));\n\n    // Doesn't look like Java has a good way to convert the\n    // input/output name (\"x\", \"scores\") to their underlying tensor,\n    // so we hard code them (\"Placeholder:0\", ...).\n    // You can inspect them on the command-line with saved_model_cli:\n    //\n    // $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default\n    final String xName = \"Placeholder:0\";\n    final String scoresName = \"dnn/head/predictions/probabilities:0\";\n\n    List<Tensor> outputs = session.runner()\n        .feed(xName, x)\n        .fetch(scoresName)\n        .run();\n\n    // Outer dimension is batch size; inner dimension is number of classes\n    float[][] scores = new float[2][3];\n    outputs.get(0).copyTo(scores);\n    System.out.println(Arrays.deepToString(scores));\n  }\n}\n
\n\n

C++ Client

\n\n

You'll likely want to use tensorflow::LoadSavedModel with Session.

\n\n
#include <unordered_set>\n#include <utility>\n#include <vector>\n\n#include \"tensorflow/cc/saved_model/loader.h\"\n#include \"tensorflow/core/framework/tensor.h\"\n#include \"tensorflow/core/public/session.h\"\n\nnamespace tf = tensorflow;\n\nint main(int argc, char** argv) {\n  const string export_dir = argv[1];\n\n  tf::SavedModelBundle bundle;\n  tf::Status load_status = tf::LoadSavedModel(\n      tf::SessionOptions(), tf::RunOptions(), export_dir, {\"serve\"}, &bundle);\n  if (!load_status.ok()) {\n    std::cout << \"Error loading model: \" << load_status << std::endl;\n    return -1;\n  }\n\n  // We should get the signature out of MetaGraphDef, but that's a bit\n  // involved. We'll take a shortcut like we did in the Java example.\n  const string x_name = \"Placeholder:0\";\n  const string scores_name = \"dnn/head/predictions/probabilities:0\";\n\n  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));\n  auto matrix = x.matrix<float>();\n  matrix(0, 0) = 6.4;\n  matrix(0, 1) = 3.2;\n  matrix(0, 2) = 4.5;\n  matrix(0, 3) = 1.5;\n  matrix(1, 0) = 5.8;\n  matrix(1, 1) = 3.1;\n  matrix(1, 2) = 5.0;\n  matrix(1, 3) = 1.7;\n\n  std::vector<std::pair<string, tf::Tensor>> inputs = {{x_name, x}};\n  std::vector<tf::Tensor> outputs;\n\n  tf::Status run_status =\n      bundle.session->Run(inputs, {scores_name}, {}, &outputs);\n  if (!run_status.ok()) {\n    std::cout << \"Error running session: \" << run_status << std::endl;\n    return -1;\n  }\n\n  for (const auto& tensor : outputs) {\n    std::cout << tensor.matrix<float>() << std::endl;\n  }\n}\n
\n\n

Use Case 3: Serve a model using TensorFlow Serving

\n\n

Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving:

\n\n
def serving_input_receiver_fn():\n  \"\"\"Build the serving inputs.\"\"\"\n  # The outer dimension (None) allows us to batch up inputs for\n  # efficiency. However, it also means that if we want a prediction\n  # for a single instance, we'll need to wrap it in an outer list.\n  example_bytestring = tf.placeholder(\n      shape=[None],\n      dtype=tf.string,\n  )\n  features = tf.parse_example(\n      example_bytestring,\n      tf.feature_column.make_parse_example_spec(feature_columns)\n  )\n  return tf.estimator.export.ServingInputReceiver(\n      features, {'examples': example_bytestring})\n\nexport_dir = classifier.export_savedmodel(\n    export_dir_base=\"/path/to/model\",\n    serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

The reader is referred to TensorFlow Serving's documentation for more instructions on how to setup TensorFlow Serving, so I'll only provide the client code here:

\n\n
  # Omitting a bunch of connection/initialization code...\n  # But at some point we end up with a stub whose lifecycle\n  # is generally longer than that of a single request.\n  stub = create_stub(...)\n\n  # The actual values for prediction. We have two examples in this\n  # case, each consisting of a single, multi-dimensional feature `x`.\n  # This data here is the equivalent of the map passed to the \n  # `predict_fn` in use case #2.\n  examples = [\n    tf.train.Example(\n      features=tf.train.Features(\n        feature={\"x\": tf.train.Feature(\n          float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),\n    tf.train.Example(\n      features=tf.train.Features(\n        feature={\"x\": tf.train.Feature(\n          float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),\n  ]\n\n  # Build the RPC request.\n  predict_request = predict_pb2.PredictRequest()\n  predict_request.model_spec.name = \"default\"\n  predict_request.inputs[\"examples\"].CopyFrom(\n      tensor_util.make_tensor_proto(examples, tf.float32))\n\n  # Perform the actual prediction.\n  stub.Predict(request, PREDICT_DEADLINE_SECS)\n
\n\n

Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).

\n\n

Appendix: Working around Exports from Canned Models in TF 1.3

\n\n

There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for \"custom\" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:

\n\n
# Build 3 layer DNN with 10, 20, 10 units respectively.\nclass Wrapper(tf.estimator.Estimator):\n  def __init__(self, **kwargs):\n    dnn = tf.estimator.DNNClassifier(**kwargs)\n\n    def model_fn(mode, features, labels):\n      spec = dnn._call_model_fn(features, labels, mode)\n      export_outputs = None\n      if spec.export_outputs:\n        export_outputs = {\n           \"serving_default\": tf.estimator.export.PredictOutput(\n                  {\"scores\": spec.export_outputs[\"serving_default\"].scores,\n                   \"classes\": spec.export_outputs[\"serving_default\"].classes})}\n\n      # Replace the 3rd argument (export_outputs)\n      copy = list(spec)\n      copy[4] = export_outputs\n      return tf.estimator.EstimatorSpec(mode, *copy)\n\n    super(Wrapper, self).__init__(model_fn, kwargs[\"model_dir\"], dnn.config)\n\nclassifier = Wrapper(feature_columns=feature_columns,\n                     hidden_units=[10, 20, 10],\n                     n_classes=3,\n                     model_dir=\"/tmp/iris_model\")\n
\n", "system": "" }, { "instruction": "Install TensorFlow with specific version on Anaconda", "input": "", "output": "

I find the existing answers unsatisfying, as the OP asked specifically about Anaconda but the answers are just pip installs.

\n

You can list the available versions for install doing

\n
conda search tensorflow-gpu\n
\n

which should give you some output that looks like

\n
Loading channels: done\n# Name                       Version           Build  Channel             \ntensorflow-gpu                 1.4.1               0  pkgs/main           \ntensorflow-gpu                 1.5.0               0  pkgs/main           \ntensorflow-gpu                 1.6.0               0  pkgs/main           \ntensorflow-gpu                 1.7.0               0  pkgs/main           \ntensorflow-gpu                 1.8.0      h7b35bdc_0  pkgs/main           \ntensorflow-gpu                 1.9.0      hf154084_0  pkgs/main           \ntensorflow-gpu                1.10.0      hf154084_0  pkgs/main           \ntensorflow-gpu                1.11.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.12.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.13.1      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.14.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.15.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.0.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.1.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.2.0      h0d30ee6_0  pkgs/main\n
\n

If you need to specify a particular channel, the -c/--channel option is your friend, for example:

\n
conda search -c conda-forge tensorflow-gpu\n
\n

Then you can select your version by passing it to the install command, for example:

\n
conda install tensorflow-gpu==2.0.0\n
\n

If you needed the channel option in your search, you should add the same option to the conda install command. Note this will work the same for tensorflow (i.e. not the GPU version), just change the package name accordingly.

\n

YAML Configuration

\n

If you use YAML environment configuration files, you can do the same thing:

\n
# environment.yaml\nname: my_conda_env\nchannels:\n  - conda-forge\ndependencies:\n  - tensorflow-gpu=2.0.0\n
\n

Create your environment with command:

\n
conda env create -f environment.yaml\n
\n

or if you change the version of an already created environment:

\n
conda env update -f environment.yaml\n
\n", "system": "" }, { "instruction": "TensorFlow:ValueError: 'images' contains no shape", "input": "", "output": "

It's important to pass expand_animations = False as an argument:

\n\n

Try:

\n\n
tf.image.decode_image(img, expand_animations = False) \n
\n\n

to make sure you have a tensor with a 3-dimensional shape.\nThis problem is due to the GIF format: decode_gif returns a 4-D array [num_frames, height, width, 3], as opposed to the other formats, including decode_bmp, decode_jpeg, and decode_png, which return 3-D arrays [height, width, num_channels].

\n\n

For more information check the related documentation

\n", "system": "" }, { "instruction": "Difference between tf.clip_by_value and tf.clip_by_global_norm for RNN's and how to decide max value to clip on?", "input": "", "output": "

TL;DR: use tf.clip_by_global_norm for gradient clipping, with "some high value" as max value.

\n

clip_by_value

\n

tf.clip_by_value clips each value inside one tensor, regardless of the other values in the tensor. For instance,

\n
tf.clip_by_value([-1, 2, 10], 0, 3)  -> [0, 2, 3]  # Only the values below 0 or above 3 are changed\n
\n

Consequently, it can change the direction of the tensor, so it should only be used if the values in the tensor are independent of one another (which is not the case for gradient clipping), or to avoid zero/infinite values in a tensor that could lead to NaN/infinite values elsewhere (by clipping with a minimum of epsilon=1e-8 and a very large max value, for instance).

\n
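For intuition, the same element-wise behavior can be sketched in NumPy (np.clip is the analogue here; this is an illustration, not TensorFlow code):

```python
import numpy as np

t = np.array([-1.0, 2.0, 10.0])
clipped = np.clip(t, 0.0, 3.0)  # element-wise analogue of tf.clip_by_value(t, 0, 3)
print(clipped)  # [0. 2. 3.]

# Note that the clipped vector is no longer parallel to the original:
# element-wise clipping changes the direction of the tensor.
```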

clip_by_norm

\n

tf.clip_by_norm rescales one tensor, if necessary, so that its L2 norm does not exceed a certain threshold. It's typically useful to avoid an exploding gradient on one tensor, because the gradient direction is kept. For instance:

\n
tf.clip_by_norm([-2, 3, 6], 5)  -> [-2, 3, 6]*5/7  # The original L2 norm is 7, which is >5, so the final one is 5\ntf.clip_by_norm([-2, 3, 6], 9)  -> [-2, 3, 6]  # The original L2 norm is 7, which is <9, so it is left unchanged\n
\n

However, clip_by_norm works on only one gradient, so if you use it on all your gradient tensors, you'll unbalance them (some will be rescaled, others not, and not all with the same scale).

\n
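The rescaling rule can be mimicked in NumPy to see what happens in both cases (a sketch of the behavior, not the actual implementation):

```python
import numpy as np

def clip_by_norm(t, clip_norm):
    # Rescale t so that its L2 norm does not exceed clip_norm,
    # mirroring the behavior of tf.clip_by_norm.
    norm = np.linalg.norm(t)
    return t * clip_norm / norm if norm > clip_norm else t

t = np.array([-2.0, 3.0, 6.0])   # L2 norm = sqrt(4 + 9 + 36) = 7
print(clip_by_norm(t, 5.0))      # rescaled by 5/7, direction preserved
print(clip_by_norm(t, 9.0))      # unchanged, since 7 < 9
```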

Note that the first two functions work on a single tensor, while the last one is used on a list of tensors.

\n

clip_by_global_norm

\n

tf.clip_by_global_norm rescales a list of tensors so that the total norm of the vector of all their norms does not exceed a threshold. The goal is the same as clip_by_norm (avoid exploding gradient, keep the gradient directions), but it works on all the gradients at once rather than on each one separately (that is, all of them are rescaled by the same factor if necessary, or none of them are rescaled). This is better, because the balance between the different gradients is maintained.

\n

For instance:

\n
tf.clip_by_global_norm([tf.constant([-2, 3, 6]),tf.constant([-4, 6, 12])] , 14.5)\n
\n

will rescale both tensors by a factor 14.5/sqrt(49 + 196), because the first tensor has a L2 norm of 7, the second one 14, and sqrt(7^2+ 14^2)>14.5

\n
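You can verify this factor with a small NumPy sketch that mirrors what tf.clip_by_global_norm computes (illustration only):

```python
import numpy as np

tensors = [np.array([-2.0, 3.0, 6.0]), np.array([-4.0, 6.0, 12.0])]

# Global norm = L2 norm of the vector of per-tensor L2 norms:
# sqrt(7^2 + 14^2) = sqrt(245), roughly 15.65
global_norm = np.sqrt(sum(np.sum(t ** 2) for t in tensors))

clip_norm = 14.5
# All tensors are rescaled by the same factor (or none of them are).
scale = clip_norm / global_norm if global_norm > clip_norm else 1.0
clipped = [t * scale for t in tensors]
```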

This (tf.clip_by_global_norm) is the one that you should use for gradient clipping. See this for instance for more information.

\n

Choosing the value

\n

Choosing the max value is the hardest part. You should use the biggest value such that you don't have an exploding gradient (whose effects can be NaNs or infinite values appearing in your tensors, or constant loss/accuracy after a few training steps). The value should be bigger for tf.clip_by_global_norm than for the others, since the global L2 norm will be mechanically bigger than the per-tensor norms due to the number of tensors involved.

\n", "system": "" }, { "instruction": "Tensorflow: How to feed a placeholder variable with a tensor?", "input": "", "output": "

This was discussed on GitHub in 2016; please check here. Here is the key point from concretevitamin:

\n\n
\n

One key thing to note is that Tensor is simply a symbolic object. The values of your feed_dict are the actual values, e.g. a NumPy ndarray.

\n
\n\n

A tensor, as a symbolic object, flows inside the graph, while the actual values live outside of it. We can only pass actual values into the graph, and the symbolic object cannot exist outside the graph.

\n", "system": "" }, { "instruction": "tensorflow record with float numpy array", "input": "", "output": "

FloatList and BytesList expect an iterable, so you need to pass them a list of floats. Remove the extra brackets in your _floats_feature, i.e.

\n\n
def _floats_feature(value):\n  return tf.train.Feature(float_list=tf.train.FloatList(value=value))\n\nnumpy_arr = np.ones((3,)).astype(np.float)\nexample = tf.train.Example(features=tf.train.Features(feature={\"bytes\": _floats_feature(numpy_arr)}))\nprint(example)\n\nfeatures {\n  feature {\n    key: \"bytes\"\n    value {\n      float_list {\n        value: 1.0\n        value: 1.0\n        value: 1.0\n      }\n    }\n  }\n}\n
\n", "system": "" }, { "instruction": "Implementation difference between TensorFlow Variable and TensorFlow Tensor", "input": "", "output": "

Before explaining the distinction between tensors and variables, we should be precise about what the word \"tensor\" means in the context of TensorFlow:

\n\n\n\n

This distinction is a little confusing, and we might choose different names if we started over (in other language APIs, we prefer the name Output for a symbolic result and Tensor for a concrete value).

\n\n

A similar distinction exists for variables. In the Python API, a tf.Variable is the symbolic representation of a variable, which has methods for creating operations that read the current value of the variable, and assign values to it. In the C++ implementation, a tensorflow::Var object is a wrapper around a shared, mutable tensorflow::Tensor object.

\n\n

With that context out of the way, we can address your specific questions:

\n\n
    \n
  1. What is the meaning of \"in-memory buffers\"?

    \n\n

    An in-memory buffer is simply a contiguous region of memory that has been allocated with a TensorFlow allocator. tensorflow::Tensor objects contain a pointer to an in-memory buffer, which holds the values of that tensor. The buffer could be in host memory (i.e. accessible from the CPU) or device memory (e.g. accessible only from a GPU), and TensorFlow has operations to move data between these memory spaces.

  2. \n
  3. What is the meaning of a \"handle\"?

    \n\n

    In the explanation in the paper, the word \"handle\" is used in a couple of different ways, which are slightly different from how TensorFlow uses the term. The paper uses \"symbolic handle\" to refer to a tf.Tensor object, and \"persistent, mutable handle\" to refer to a tf.Variable object. The TensorFlow codebase uses \"handle\" to refer to a name for a stateful object (like a tf.FIFOQueue or tf.TensorArray) that can be passed around without copying all of the values (i.e. call-by-reference).

  4. \n
  5. Is my initial assumption about the internal of a tensor correct?

    \n\n

    Your assumption most closely matches the definition of a (C++) tensorflow::Tensor object. The (Python) tf.Tensor object is more complicated because it refers to a function for computing a value, rather than the value itself.

  6. \n
  7. What is the essential internal implementation difference between a tensor and a variable?

    \n\n

In C++, a tensorflow::Tensor and a tensorflow::Var are very similar; the only difference is that tensorflow::Var also has a mutex that can be used to lock the variable while it is being updated.

    \n\n

    In Python, the essential difference is that a tf.Tensor is implemented as a dataflow graph, and it is read-only (i.e. by calling tf.Session.run()). A tf.Variable can be both read (i.e. by evaluating its read operation) and written (e.g. by running an assign operation).

    \n\n

    Why are they declared differently and why is that difference essential to TensorFlow?

    \n\n

Tensors and variables serve different purposes. Tensors (tf.Tensor objects) can represent complex compositions of mathematical expressions, like loss functions in a neural network, or symbolic gradients. Variables represent state that is updated over time, like weight matrices and convolutional filters during training. While in principle you could represent the evolving state of a model without variables, you would end up with a very large (and repetitive) mathematical expression, so variables provide a convenient way to materialize the state of the model and, for example, share it with other machines for parallel training.

  8. \n
\n", "system": "" }, { "instruction": "Changing the scale of a tensor in tensorflow", "input": "", "output": "

You are trying to normalize the data. A classic normalization formula is this one:

\n
normalize_value = (value \u2212 min_value) / (max_value \u2212 min_value)\n
\n

The implementation in TensorFlow looks like this:

\n
tensor = tf.div(\n   tf.subtract(\n      tensor, \n      tf.reduce_min(tensor)\n   ), \n   tf.subtract(\n      tf.reduce_max(tensor), \n      tf.reduce_min(tensor)\n   )\n)\n
\n

All the values of the tensor will be between 0 and 1.

\n
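For comparison, the same min-max formula in plain NumPy (illustration only):

```python
import numpy as np

t = np.array([2.0, 5.0, 8.0, 11.0])
normalized = (t - t.min()) / (t.max() - t.min())
# the smallest value maps to 0, the largest to 1,
# and everything else falls in between
```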

IMPORTANT: make sure the tensor has float/double values, or the output tensor will contain just zeros and ones. If you have an integer tensor, call this first:

\n
tensor = tf.to_float(tensor)\n
\n

Update: as of tensorflow 2, tf.to_float() is deprecated and instead, tf.cast() should be used:

\n
tensor = tf.cast(tensor, dtype=tf.float32) # or any other tf.dtype, that is precise enough\n
\n", "system": "" }, { "instruction": "In Tensorflow, how to assign values in Tensor according to the indices?", "input": "", "output": "

There's a pending PR that should fix this:

\n\n

https://github.com/tensorflow/tensorflow/issues/1793

\n", "system": "" }, { "instruction": "What's difference between tf.sub and just minus operation in tensorflow?", "input": "", "output": "

Yes, - and + resolve to tf.sub and tf.add. If you look at the TensorFlow code you will see that these operators on tf.Variable are overloaded with the tf.* methods.

\n\n

As to why both exist: I assume the tf.* ones exist for consistency, so that sub and, say, matmul can be used in the same way, while the operator overloading exists for convenience.

\n", "system": "" }, { "instruction": "Example for Deploying a Tensorflow Model via a RESTful API", "input": "", "output": "
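The mechanism is ordinary Python operator overloading. A toy sketch (not TensorFlow's actual implementation) of how - can dispatch to a named op:

```python
class SymTensor:
    """Toy symbolic tensor whose operators record which op was used,
    the way tf.Variable's overloaded operators dispatch to
    tf.sub / tf.add under the hood."""

    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        return SymTensor("add(%s, %s)" % (self.name, other.name))

    def __sub__(self, other):
        return SymTensor("sub(%s, %s)" % (self.name, other.name))

a, b = SymTensor("a"), SymTensor("b")
print((a - b).name)  # sub(a, b)
```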

https://github.com/sugyan/tensorflow-mnist shows a simple REST API example using Flask that loads a pre-trained model (via restore).

\n\n
@app.route('/api/mnist', methods=['POST'])\ndef mnist():\n    input = ((255 - np.array(request.json, dtype=np.uint8)) / 255.0).reshape(1, 784)\n    output1 = simple(input)\n    output2 = convolutional(input)\n    return jsonify(results=[output1, output2])\n
\n\n

Also, see the online demo at https://tensorflow-mnist.herokuapp.com/. It seems the API is fast enough.

\n", "system": "" }, { "instruction": "TensorFlow object detection TF-TRT Warning: Could not find TensorRT", "input": "", "output": "

Try installing TensorRT with pip install tensorrt.

\n

You may also need to read this: How do I install Python packages in Google's Colab?

\n

Also check whether a GPU is allocated in Google Colab.

\n", "system": "" }, { "instruction": "Cannot install TensorFlow 1.x", "input": "", "output": "

What I've found on discourse:

\n

You just need to make sure you're using Python 3.5, 3.6, or 3.7; TensorFlow 1.15 does not support Python 3.8.

\n", "system": "" }, { "instruction": "UsageError: Line magic function `%tensorflow_version` not found", "input": "", "output": "

Jupyter notebook comes with a set of magic functions, but %tensorflow_version is not one of them. The magic command

\n\n
%tensorflow_version X.X\n
\n\n

is only available in Google Colab notebooks, not Jupyter notebooks.

\n", "system": "" }, { "instruction": "TF2.0: Translation model: Error when restoring the saved model: Unresolved object in checkpoint (root).optimizer.iter: attributes", "input": "", "output": "

It means you are not using all the checkpointed values you have restored.

\n\n

It happens because you are restoring a model that has training information (such as optimizer variables) but you are only using it for prediction, not training. When predicting, you don't need the saved optimizer values, which is why the program is telling you they were not used.

\n\n

If you were using this restored model for training on new data, this warning would disappear.

\n\n

You could silence these warning with model.load_weights(...).expect_partial() or tf.train.Checkpoint.restore(...).expect_partial().

\n\n

A better solution would be to only save the variables required for inference when training:

\n\n
saver = tf.train.Saver(tf.model_variables())\n
\n\n

tf.model_variables() is the subset of Variable objects that are used in the model for inference (see tensorflow doc).

\n", "system": "" }, { "instruction": "The name tf.Session is deprecated. Please use tf.compat.v1.Session instead", "input": "", "output": "

To make TensorFlow more \"Pythonic\", version 2.0 by design does not have tf.Session.

\n\n

TensorFlow 1.X requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls. It then requires users to manually compile the abstract syntax tree by passing a set of output tensors and input tensors to a session.run() call.

\n\n

TensorFlow 2.0 executes eagerly (like Python normally does) and in 2.0, graphs and sessions should feel like implementation details.

\n\n

You could use:

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n\n

However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0.

\n\n

The better solution is:

\n\n\n", "system": "" }, { "instruction": "Why is TensorFlow's `tf.data` package slowing down my code?", "input": "", "output": "

I wanted to test the dataset API, which seems really convenient for processing data. I spent a lot of time testing this API on CPU, GPU and multi-GPU setups, for small and large NNs with different types of data.

\n\n

First of all, your code seems fine to me, but I need to point out that your NN is just one simple layer.

\n\n

Now, the dataset API is not suitable for your type of NN, but for NNs with a lot more complexity. Why? For several reasons that I explain below (found in my quest to understand the dataset API).

\n\n

Firstly, on the one hand the dataset API processes data batch by batch, whereas on the other hand the data are preprocessed all at once. Therefore, if your dataset fits in RAM, you can save time by preprocessing it. Here your data are just too \"simple\". If you want to test what I am saying, try to find a really big dataset to process. Nevertheless, the dataset API can be tuned with data prefetching. You can take a look at this tutorial, which explains really well why it is good to process data with prefetching.

\n\n

Secondly, in my quest of the dataset API for multi-GPU training, I discovered that, as far as I know, the old preprocessing way is faster than the dataset API for small neural networks. You can verify that by creating a simple stackable RNN which takes a sequence as input. You can try different stack sizes (I have tested 1, 2, 10 and 20). You will see that, using the dataset API, on 1 GPU or on 4 GPUs, the time did not differ for small RNN stacks (1, 2 and 5).

\n\n

To summarize, the dataset API is suitable for neural networks whose data can't be preprocessed. Depending on your task, it may be more convenient to preprocess the data, for example if you want to tweak your NN in order to improve it. I agree that the dataset API is really cool for batching and padding, and also convenient for shuffling large amounts of data, but it's not suitable for multi-GPU training either.

\n", "system": "" }, { "instruction": "How to create mask images from COCO dataset?", "input": "", "output": "

The complete code wasn't in the answer so I post it below.

\n

Please install pycocotools first.

\n
pip install pycocotools\n
\n

Import the required modules. I'm assuming you're using a jupyter notebook.

\n
from pycocotools.coco import COCO\nimport os\nfrom PIL import Image\nimport numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n
\n

Load the annotations for the COCO dataset. Here, we specify the image with ID 74.

\n
coco = COCO('../datasets/coco/annotations/instances_train2017.json')\nimg_dir = '../datasets/coco/train2017'\nimage_id = 74\n\nimg = coco.imgs[image_id]\n# loading annotations into memory...\n# Done (t=12.70s)\n# creating index...\n# index created!\n
\n

The information of the loaded img is as follows.

\n
img\n# {'license': 2,\n#  'file_name': '000000000074.jpg',\n#  'coco_url': # 'http://images.cocodataset.org/train2017/000000000074.jpg',\n#  'height': 426,\n#  'width': 640,\n#  'date_captured': '2013-11-15 03:08:44',\n#  'flickr_url': # 'http://farm5.staticflickr.com/4087/5078192399_aaefdb5074_z.jpg# ',\n#  'id': 74}\n
\n

Display the image as follows.

\n
image = np.array(Image.open(os.path.join(img_dir, img['file_name'])))\nplt.imshow(image, interpolation='nearest')\nplt.show()\n
\n

\"enter

\n

If you want to see the overlay result:

\n
plt.imshow(image)\ncat_ids = coco.getCatIds()\nanns_ids = coco.getAnnIds(imgIds=img['id'], catIds=cat_ids, iscrowd=None)\nanns = coco.loadAnns(anns_ids)\ncoco.showAnns(anns)\n
\n

\"enter

\n

If you just want to see the mask, as Farshid Rayhan replied, do the following:

\n
mask = coco.annToMask(anns[0])\n# start from 1 so that anns[0] is not added twice\nfor i in range(1, len(anns)):\n    mask += coco.annToMask(anns[i])\n\nplt.imshow(mask)\n
\n

\"enter

\n", "system": "" }, { "instruction": "Tensorflow: When should I use or not use `feed_dict`?", "input": "", "output": "

In a tensorflow model you can define a placeholder such as x = tf.placeholder(tf.float32), then you will use x in your model.

\n\n

For example, I define a simple set of operations as:

\n\n
x = tf.placeholder(tf.float32)\ny = x * 42\n
\n\n

Now when I ask tensorflow to compute y, it's clear that y depends on x.

\n\n
with tf.Session() as sess:\n  sess.run(y)\n
\n\n

This will produce an error because I did not give it a value for x. In this case, because x is a placeholder, if it gets used in a computation you must pass it in via feed_dict. If you don't, it's an error.

\n\n

Let's fix that:

\n\n
with tf.Session() as sess:\n  sess.run(y, feed_dict={x: 2})\n
\n\n

The result this time will be 84. Great. Now let's look at a trivial case where feed_dict is not needed:

\n\n
x = tf.constant(2)\ny = x * 42\n
\n\n

Now there are no placeholders (x is a constant) and so nothing needs to be fed to the model. This works now:

\n\n
with tf.Session() as sess:\n  sess.run(y)\n
\n", "system": "" }, { "instruction": "TensorFlow - tf.data.Dataset reading large HDF5 files", "input": "", "output": "

I stumbled across this question while dealing with a similar issue. I came up with a solution based on using a Python generator, together with the TF dataset construction method from_generator. Because we use a generator, the HDF5 file should be opened for reading only once and kept open as long as there are entries to read. So it will not be opened, read, and then closed for every single call to get the next data element.

\n\n

Generator definition

\n\n

To allow the user to pass in the HDF5 filename as an argument, I generated a class that has a __call__ method since from_generator specifies that the generator has to be callable. This is the generator:

\n\n
import h5py\nimport tensorflow as tf\n\nclass generator:\n    def __init__(self, file):\n        self.file = file\n\n    def __call__(self):\n        with h5py.File(self.file, 'r') as hf:\n            for im in hf[\"train_img\"]:\n                yield im\n
\n\n

By using a generator, the code should pick up from where it left off at each call from the last time it returned a result, instead of running everything from the beginning again. In this case it is on the next iteration of the inner for loop. So this should skip opening the file again for reading, keeping it open as long as there is data to yield. For more on generators, see this excellent Q&A.

\n\n
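The callable-class pattern itself can be sketched without h5py (the names here are illustrative; the list stands in for the HDF5 file):

```python
class Generator:
    """Callable wrapper: binds an argument (the data source) before
    the generator is handed to Dataset.from_generator, which requires
    a callable rather than a generator object."""

    def __init__(self, items):
        self.items = items  # stands in for the HDF5 file

    def __call__(self):
        # In the real version, the file is opened once here; each
        # `yield` then resumes the loop where it left off.
        for item in self.items:
            yield item

gen = Generator([10, 20, 30])
print(list(gen()))  # [10, 20, 30]
```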

Of course, you will have to replace anything inside the with block to match how your dataset is constructed and what outputs you want to obtain.

\n\n

Usage example

\n\n
ds = tf.data.Dataset.from_generator(\n    generator(hdf5_path), \n    tf.uint8, \n    tf.TensorShape([427,561,3]))\n\nvalue = ds.make_one_shot_iterator().get_next()\n\n# Example on how to read elements\nwhile True:\n    try:\n        data = sess.run(value)\n        print(data.shape)\n    except tf.errors.OutOfRangeError:\n        print('done.')\n        break\n
\n\n

Again, in my case I had stored uint8 images of height 427, width 561, and 3 color channels in my dataset, so you will need to modify these in the above call to match your use case.

\n\n

Handling multiple files

\n\n

I have a proposed solution for handling multiple HDF5 files. The basic idea is to construct a Dataset from the filenames as usual, and then use the interleave method to process many input files concurrently, getting samples from each of them to form a batch, for example.

\n\n

The idea is as follows:

\n\n
ds = tf.data.Dataset.from_tensor_slices(filenames)\n# You might want to shuffle() the filenames here depending on the application\nds = ds.interleave(lambda filename: tf.data.Dataset.from_generator(\n        generator(filename), \n        tf.uint8, \n        tf.TensorShape([427,561,3])),\n       cycle_length, block_length)\n
\n\n

What this does is open cycle_length files concurrently, and produce block_length items from each before moving to the next file - see interleave documentation for details. You can set the values here to match what is appropriate for your application: e.g., do you need to process one file at a time or several concurrently, do you only want to have a single sample at a time from each file, and so on.

\n\n
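As a rough mental model of the order in which samples come out, here is a simplified pure-Python sketch of interleave (it ignores some edge cases of the real implementation):

```python
def interleave(sources, cycle_length, block_length):
    # Round-robin over up to cycle_length iterators, taking
    # block_length items from each before moving to the next.
    iterators = [iter(s) for s in sources]
    active, pending = iterators[:cycle_length], iterators[cycle_length:]
    while active:
        still_active = []
        for it in active:
            exhausted = False
            for _ in range(block_length):
                try:
                    yield next(it)
                except StopIteration:
                    exhausted = True
                    break
            if not exhausted:
                still_active.append(it)
            elif pending:  # replace an exhausted source with the next one
                still_active.append(pending.pop(0))
        active = still_active

order = list(interleave([["a1", "a2"], ["b1", "b2"]], cycle_length=2, block_length=1))
print(order)  # ['a1', 'b1', 'a2', 'b2']
```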

Edit: for a parallel version, take a look at tf.contrib.data.parallel_interleave!

\n\n

Possible caveats

\n\n

Be aware of the peculiarities of using from_generator if you decide to go with this solution. For Tensorflow 1.6.0, the documentation of from_generator mentions these two notes.

\n\n

It may be challenging to apply this across different environments or with distributed training:

\n\n
\n

NOTE: The current implementation of Dataset.from_generator() uses\n tf.py_func and inherits the same constraints. In particular, it\n requires the Dataset- and Iterator-related operations to be placed on\n a device in the same process as the Python program that called\n Dataset.from_generator(). The body of generator will not be serialized\n in a GraphDef, and you should not use this method if you need to\n serialize your model and restore it in a different environment.

\n
\n\n

Be careful if the generator depends on external state:

\n\n
\n

NOTE: If generator depends on mutable global variables or other\n external state, be aware that the runtime may invoke generator\n multiple times (in order to support repeating the Dataset) and at any\n time between the call to Dataset.from_generator() and the production\n of the first element from the generator. Mutating global variables or\n external state can cause undefined behavior, and we recommend that you\n explicitly cache any external state in generator before calling\n Dataset.from_generator().

\n
\n", "system": "" }, { "instruction": "Uninstalling TensorFlow from Anaconda environment", "input": "", "output": "

You can remove a package with the conda remove command. So for TensorFlow this would be conda remove tensorflow.

\n", "system": "" }, { "instruction": "What is right batch normalization function in Tensorflow?", "input": "", "output": "

Just to add to the list, there're several more ways to do batch-norm in tensorflow:

\n\n\n", "system": "" }, { "instruction": "DQN - Q-Loss not converging", "input": "", "output": "

Yes, the loss must converge, because the loss value represents the difference between the expected Q value and the current Q value. Only when the loss converges does the current Q value approach the optimal Q value. If it diverges, it means your approximation is becoming less and less accurate.

\n\n

Maybe you can try adjusting the update frequency of the target network, or check the gradient of each update (add gradient clipping). The addition of the target network increases the stability of Q-learning.

\n\n

In Deepmind's 2015 Nature paper, it states that:

\n\n
\n

The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the target yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets yj for the following C updates to Q. \n This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(st,at) often also increases Q(st+1, a) for all a and hence also increases the target yj, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets yj, making divergence or oscillations much more unlikely.

\n
\n\n

Human-level control through deep reinforcement\nlearning, Mnih et al., 2015

\n\n
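The target computation described in the quote can be written down concretely. A minimal sketch with a tabular Q function (the names and numbers are illustrative, not from any DQN codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.99

q = rng.normal(size=(n_states, n_actions))  # online network (here: a table)
q_target = q.copy()  # frozen clone Q', refreshed only every C updates

def td_target(reward, next_state, done):
    # y_j = r                              if the episode ended
    #     = r + gamma * max_a Q'(s', a)    otherwise
    if done:
        return reward
    return reward + gamma * q_target[next_state].max()

print(td_target(1.0, next_state=2, done=True))  # 1.0
```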

I ran an experiment for another person who asked a similar question in the CartPole environment, and an update frequency of 100 solves the problem (achieving the maximum of 200 steps).

\n\n

When C (update frequency) = 2, a plot of the average loss:\n\"C=2\"

\n\n

C = 10

\n\n

\"C=10\"

\n\n

C = 100

\n\n

\"enter

\n\n

C = 1000

\n\n

\"enter

\n\n

C = 10000

\n\n

\"enter

\n\n

If the divergence of the loss value is caused by gradient explosion, you can clip the gradient. In DeepMind's 2015 DQN, the authors clipped the gradient by limiting the value within [-1, 1]. In the other case, the authors of Prioritized Experience Replay clip the gradient by limiting the norm within 10. Here are the examples:

\n\n

DQN gradient clipping:

\n\n
    optimizer.zero_grad()\n    loss.backward()\n    for param in model.parameters():\n        param.grad.data.clamp_(-1, 1)\n    optimizer.step()\n
\n\n

PER gradient clipping:

\n\n
    optimizer.zero_grad()\n    loss.backward()\n    if self.grad_norm_clipping:\n        torch.nn.utils.clip_grad.clip_grad_norm_(self.model.parameters(), 10)\n    optimizer.step()\n
\n", "system": "" }, { "instruction": "How to accumulate gradients in tensorflow?", "input": "", "output": "

Let's walk through the code proposed in one of the answers you linked to:

\n
## Optimizer definition - nothing different from any classical example\nopt = tf.train.AdamOptimizer()\n\n## Retrieve all trainable variables you defined in your graph\ntvs = tf.trainable_variables()\n## Creation of a list of variables with the same shape as the trainable ones\n# initialized with 0s\naccum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]\nzero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]\n\n## Calls the compute_gradients function of the optimizer to obtain... the list of gradients\ngvs = opt.compute_gradients(rmse, tvs)\n\n## Adds to each element from the list you initialized earlier with zeros its gradient (works because accum_vars and gvs are in the same order)\naccum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]\n\n## Define the training step (part with variable value update)\ntrain_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])\n
\n

This first part basically adds new variables and ops to your graph which will allow you to

\n
    \n
  1. Accumulate the gradient with ops accum_ops in (the list of) variable accum_vars
  2. \n
  3. Update the model weights with ops train_step
  4. \n
\n

Then, to use it when training, you have to follow these steps (still from the answer you linked):

\n
## The while loop for training\nwhile ...:\n    # Run the zero_ops to initialize it\n    sess.run(zero_ops)\n    # Accumulate the gradients 'n_minibatches' times in accum_vars using accum_ops\n    for i in xrange(n_minibatches):\n        sess.run(accum_ops, feed_dict=dict(X: Xs[i], y: ys[i]))\n    # Run the train_step ops to update the weights based on your accumulated gradients\n    sess.run(train_step)\n
\n", "system": "" }, { "instruction": "tensorflow for poets: "The name 'import/input' refers to an Operation not in the graph."", "input": "", "output": "
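The arithmetic behind this is easy to check. A toy sketch (plain Python, not TensorFlow) showing that accumulating per-minibatch gradients and applying them once equals one step on the gradient of the summed loss:

```python
# Toy objective per minibatch: L_i(w) = 0.5 * (w - x_i)^2, so dL_i/dw = w - x_i
w = 3.0
minibatches = [1.0, 2.0, 4.0]

accum = 0.0                 # plays the role of accum_vars (zeroed by zero_ops)
for x in minibatches:       # plays the role of running accum_ops n_minibatches times
    accum += w - x          # add this minibatch's gradient

lr = 0.1
w_after = w - lr * accum    # plays the role of train_step

# Same result as a single step on the gradient of the summed loss:
full_grad = sum(w - x for x in minibatches)
print(accum == full_grad, w_after)  # True 2.8
```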

I changed ~/scripts/label_image.py line 77 and it works:

\n\n

from

\n\n
input_layer = \"input\"\n
\n\n

to

\n\n
input_layer = \"Mul\"\n
\n", "system": "" }, { "instruction": "Use LSTM tutorial code to predict next word in a sentence?", "input": "", "output": "
\n

My biggest question is how do I use the produced model to actually generate a next word suggestion, given the first few words of a sentence?

\n

I.e. I'm trying to write a function with the signature: getNextWord(model, sentencePrefix)

\n
\n

Before I explain my answer, first a remark about your suggestion to # Call static_rnn(cell) once for each word in prefix to initialize state: keep in mind that static_rnn does not return a value like a NumPy array, but a tensor. You can evaluate a tensor to a value when it is run (1) in a session (a session keeps the state of your computational graph, including the values of your model parameters) and (2) with the input that is necessary to calculate the tensor value. Input can be supplied using input readers (the approach in the tutorial) or using placeholders (what I will use below).

\n

Now follows the actual answer:\nThe model in the tutorial was designed to read input data from a file. The answer of @user3080953 already showed how to work with your own text file, but as I understand it you need more control over how the data is fed to the model. To do this you will need to define your own placeholders and feed the data to these placeholders when calling session.run().

\n

In the code below I subclassed PTBModel and made it responsible for explicitly feeding data to the model. I introduced a special PTBInteractiveInput that has an interface similar to PTBInput so you can reuse the functionality in PTBModel. To train your model you still need PTBModel.

\n
class PTBInteractiveInput(object):\n  def __init__(self, config):\n    self.batch_size = 1\n    self.num_steps = config.num_steps\n    self.input_data = tf.placeholder(dtype=tf.int32, shape=[self.batch_size, self.num_steps])\n    self.sequence_len = tf.placeholder(dtype=tf.int32, shape=[])\n    self.targets = tf.placeholder(dtype=tf.int32, shape=[self.batch_size, self.num_steps])\n\nclass InteractivePTBModel(PTBModel):\n\n  def __init__(self, config):\n    input = PTBInteractiveInput(config)\n    PTBModel.__init__(self, is_training=False, config=config, input_=input)\n    output = self.logits[:, self._input.sequence_len - 1, :]\n    self.top_word_id = tf.argmax(output, axis=2)\n\n  def get_next(self, session, prefix):\n    prefix_array, sequence_len = self._preprocess(prefix)\n    feeds = {\n      self._input.sequence_len: sequence_len,\n      self._input.input_data: prefix_array,\n    }\n    fetches = [self.top_word_id]\n    result = session.run(fetches, feeds)\n    self._postprocess(result)\n\n  def _preprocess(self, prefix):\n    num_steps = self._input.num_steps\n    seq_len = len(prefix)\n    if seq_len > num_steps:\n      raise ValueError("Prefix to large for model.")\n    prefix_ids = self._prefix_to_ids(prefix)\n    num_items_to_pad = num_steps - seq_len\n    prefix_ids.extend([0] * num_items_to_pad)\n    prefix_array = np.array([prefix_ids], dtype=np.float32)\n    return prefix_array, seq_len\n\n  def _prefix_to_ids(self, prefix):\n    # should convert your prefix to a list of ids\n    pass\n\n  def _postprocess(self, result):\n    # convert ids back to strings\n    pass\n
\n

In the __init__ function of PTBModel you need to add this line:

\n
self.logits = logits\n
\n
\n

Why use a random (uninitialized, untrained) word-embedding?

\n
\n

First note that, although the embeddings are random in the beginning, they will be trained with the rest of the network. The embeddings you obtain after training will have similar properties to the embeddings you obtain with word2vec models, e.g., the ability to answer analogy questions with vector operations (king - man + woman = queen, etc.). In tasks where you have a considerable amount of training data, like language modelling (which does not need annotated training data) or neural machine translation, it is more common to train embeddings from scratch.

\n
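The analogy property mentioned above can be illustrated with a toy example. The 2-d vectors below are invented purely for the demonstration (real embeddings are learned and typically have hundreds of dimensions):

```python
import numpy as np

# Hand-made toy vectors: dimension 0 ~ "gender", dimension 1 ~ "royalty".
# These numbers are invented for illustration only.
man   = np.array([0.9, 0.1])
woman = np.array([0.1, 0.1])
king  = np.array([0.9, 0.8])

# king - man + woman should land near a "female royal" vector
queen = king - man + woman
print(queen)  # [0.1 0.8]
```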
\n

Why use softmax?

\n
\n

Softmax is a function that normalizes a vector of similarity scores (the logits) to a probability distribution. You need a probability distribution to train your model with cross-entropy loss and to be able to sample from the model. Note that if you are only interested in the most likely words of a trained model, you don't need the softmax and you can use the logits directly.

\n
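A quick NumPy sketch of that last point: softmax only rescales the logits into probabilities, so the most likely word is the same whether you take the argmax of the logits or of the softmax output:

```python
import numpy as np

def softmax(logits):
    # shift by the max for numerical stability; result sums to 1
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(round(probs.sum(), 6))                  # 1.0
print(np.argmax(logits) == np.argmax(probs))  # True
```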
\n

Does the hidden layer have to match the dimension of the input (i.e. the dimension of the word2vec embeddings)

\n
\n

No, in principle it can be any value. However, using a hidden state with a lower dimension than your embedding dimension does not make much sense.

\n
\n

How/Can I bring in a pre-trained word2vec model, instead of that uninitialized one?

\n
\n

Here is a self-contained example of initializing an embedding with a given numpy array. If you want the embedding to remain fixed/constant during training, set trainable to False.

\n
import tensorflow as tf\nimport numpy as np\nvocab_size = 10000\nsize = 200\ntrainable=True\nembedding_matrix = np.zeros([vocab_size, size]) # replace this with code to load your pretrained embedding\nembedding = tf.get_variable("embedding",\n                            initializer=tf.constant_initializer(embedding_matrix),\n                            shape=[vocab_size, size],\n                            dtype=tf.float32,\n                            trainable=trainable)\n
\n", "system": "" }, { "instruction": "Printing extra training metrics with Tensorflow Estimator", "input": "", "output": "

From what I've read, it is not possible to change it by passing a parameter.\nYou can do it by creating a logging hook and passing it to the estimator run.

\n

In the body of model_fn function for your estimator:

\n
logging_hook = tf.train.LoggingTensorHook({"loss" : loss, \n    "accuracy" : accuracy}, every_n_iter=10)\n\n# Rest of the function\n\nreturn tf.estimator.EstimatorSpec(\n    ...params...\n    training_hooks = [logging_hook])\n
\n

EDIT:

\n

To see the output you must also set the logging verbosity high enough (unless it's your default):\ntf.logging.set_verbosity(tf.logging.INFO)

\n", "system": "" }, { "instruction": "what does arg_scope actually do?", "input": "", "output": "

When defining convolution layers, you may always use the same padding type and the same initializer, and maybe even the same convolution size. For your pooling, maybe you are also always using the same 2x2 pooling size. And so on.

\n\n

arg_scope is a way to avoid providing the same arguments to the same layer types over and over again.

\n\n
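Conceptually, arg_scope behaves like pre-filling default keyword arguments that individual calls can still override. A rough plain-Python analogue using functools.partial (the conv2d stand-in below is hypothetical, not the real layer):

```python
from functools import partial

def conv2d(inputs, num_outputs, kernel, padding='VALID'):
    # stand-in for the real layer: just report which padding was used
    return padding

# like entering an arg_scope: bake in padding='SAME' once...
conv2d_scoped = partial(conv2d, padding='SAME')

print(conv2d_scoped('x', 64, [11, 11], padding='VALID'))  # VALID (explicit wins)
print(conv2d_scoped('x', 256, [5, 5]))                    # SAME  (scope default)
```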

Examples from the source documentation:

\n\n
\n

Example of how to use tf.contrib.framework.arg_scope:

\n\n
from third_party.tensorflow.contrib.layers.python import layers\n  arg_scope = tf.contrib.framework.arg_scope\n  with arg_scope([layers.conv2d], padding='SAME',\n                 initializer=layers.variance_scaling_initializer(),\n                 regularizer=layers.l2_regularizer(0.05)):\n    net = layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')\n    net = layers.conv2d(net, 256, [5, 5], scope='conv2')\n
\n \n

The first call to conv2d will behave as follows:

\n\n
   layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID',\n                  initializer=layers.variance_scaling_initializer(),\n                  regularizer=layers.l2_regularizer(0.05), scope='conv1')\n
\n \n

The second call to conv2d will also use the arg_scope's default for padding:

\n\n
  layers.conv2d(inputs, 256, [5, 5], padding='SAME',\n                  initializer=layers.variance_scaling_initializer(),\n                  regularizer=layers.l2_regularizer(0.05), scope='conv2')\n
\n \n

Example of how to reuse an arg_scope:

\n\n
with arg_scope([layers.conv2d], padding='SAME',\n                 initializer=layers.variance_scaling_initializer(),\n                 regularizer=layers.l2_regularizer(0.05)) as sc:\n    net = layers.conv2d(net, 256, [5, 5], scope='conv1')\n    ....\n  with arg_scope(sc):\n    net = layers.conv2d(net, 256, [5, 5], scope='conv2')\n
\n
\n", "system": "" }, { "instruction": "How to add new embeddings for unknown words in Tensorflow (training & pre-set for testing)", "input": "", "output": "

The code example below adapts your embed_tensor function such that words are embedded as follows:

\n\n\n\n
import tensorflow as tf\nimport numpy as np\n\nEMB_DIM = 300\ndef load_pretrained_glove():\n    return [\"a\", \"cat\", \"sat\", \"on\", \"the\", \"mat\"], np.random.rand(6, EMB_DIM)\n\ndef get_train_vocab():\n    return [\"a\", \"dog\", \"sat\", \"on\", \"the\", \"mat\"]\n\ndef embed_tensor(string_tensor, trainable=True):\n  \"\"\"\n  Convert List of strings into list of indices then into 300d vectors\n  \"\"\"\n  # ordered lists of vocab and corresponding (by index) 300d vector\n  pretrained_vocab, pretrained_embs = load_pretrained_glove()\n  train_vocab = get_train_vocab()\n  only_in_train = list(set(train_vocab) - set(pretrained_vocab))\n  vocab = pretrained_vocab + only_in_train\n\n  # Set up tensorflow look up from string word to unique integer\n  vocab_lookup = tf.contrib.lookup.index_table_from_tensor(\n    mapping=tf.constant(vocab),\n    default_value=len(vocab))\n  string_tensor = vocab_lookup.lookup(string_tensor)\n\n  # define the word embedding\n  pretrained_embs = tf.get_variable(\n      name=\"embs_pretrained\",\n      initializer=tf.constant_initializer(np.asarray(pretrained_embs), dtype=tf.float32),\n      shape=pretrained_embs.shape,\n      trainable=trainable)\n  train_embeddings = tf.get_variable(\n      name=\"embs_only_in_train\",\n      shape=[len(only_in_train), EMB_DIM],\n      initializer=tf.random_uniform_initializer(-0.04, 0.04),\n      trainable=trainable)\n  unk_embedding = tf.get_variable(\n      name=\"unk_embedding\",\n      shape=[1, EMB_DIM],\n      initializer=tf.random_uniform_initializer(-0.04, 0.04),\n      trainable=False)\n\n  embeddings = tf.concat([pretrained_embs, train_embeddings, unk_embedding], axis=0)\n\n  return tf.nn.embedding_lookup(embeddings, string_tensor)\n
\n\n

FYI, to have a sensible, non-random representation for words that don't occur in the training data and don't have a pretrained embedding, you could consider mapping words with a low frequency in your training data to an unk token (that is not in your vocabulary) and make the unk_embedding trainable. This way you learn a prototype for words that are unseen in the training data.

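A minimal sketch of that preprocessing idea (the helper and threshold below are assumptions, not part of the original code): map words below a frequency threshold to an unk token before building the vocabulary, so the trainable unk_embedding actually sees examples during training:

```python
from collections import Counter

def replace_rare_words(tokens, min_count=2, unk='<unk>'):
    # words occurring fewer than min_count times are mapped to the unk token
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk for t in tokens]

tokens = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat', 'zebu']
print(replace_rare_words(tokens))
# 'sat', 'on', 'mat' and 'zebu' occur only once, so they become '<unk>'
```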
\n", "system": "" }, { "instruction": "TensorFlow Object Detection API Weird Behavior", "input": "", "output": "

So I think I figured out what is going on. I did some analysis on the dataset and found out that it is skewed towards objects of category 1.

\n\n

This is the frequency distribution of each category from 1 to 11 (in 0-based indexing)

\n\n
0 10440\n1 304\n2 998\n3 67\n4 412\n5 114\n6 190\n7 311\n8 195\n9 78\n10 75\n
\n\n
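For reference, a frequency table like this can be produced directly from the list of ground-truth labels (the small label list below is hypothetical):

```python
from collections import Counter

labels = [0, 0, 0, 1, 2, 2, 0]  # hypothetical ground-truth category ids
for category, count in sorted(Counter(labels).items()):
    print(category, count)
# 0 4
# 1 1
# 2 2
```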

I guess the model is hitting a local minimum where just labelling everything as category 1 is good enough.

\n\n

About the problem of not detecting some boxes : I tried training again, but this time I didn't differentiate between brands. Instead, I tried to teach the model what a cigarette box is. It still wasn't detecting all the boxes.

\n\n

Then I decided to crop the input image and provide that as an input, just to see if the results would improve, and they did!

\n\n

It turns out that the dimensions of the input image were much larger than the 600 x 1024 that is accepted by the model. So, it was scaling down these images to 600 x 1024 which meant that the cigarette boxes were losing their details :)

\n\n

So, I decided to test the original model which was trained on all classes on cropped images and it works like a charm :)

\n\n

\"Original

\n\n

This was the output of the model on the original image

\n\n

\"Top

\n\n

This is the output of the model when I crop out the top left quarter and provide it as input.

\n\n

Thanks everyone who helped! And congrats to the TensorFlow team for an amazing job for the API :) Now everybody can train object detection models!

\n", "system": "" }, { "instruction": "Running trained tensorflow model in C++", "input": "", "output": "

Instructions for using a graph in C++ can be found here.

\n\n

Here is some code to use your image as input:

\n\n
tensorflow::Tensor keep_prob = tensorflow::Tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape());\nkeep_prob.scalar<float>()() = 1.0;\n\ntensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,height,width,depth}));\nauto input_tensor_mapped = input_tensor.tensor<float, 4>();\nconst float * source_data = (float*) img.data;  // here img is an opencv image, but if it's just a float array this code is very easy to adapt\n// copying the image data into the corresponding tensor\nfor (int y = 0; y < height; ++y) {\n    const float* source_row = source_data + (y * width * depth);\n    for (int x = 0; x < width; ++x) {\n        const float* source_pixel = source_row + (x * depth);\n        for (int c = 0; c < depth; ++c) {\n            const float* source_value = source_pixel + c;\n            input_tensor_mapped(0, y, x, c) = *source_value;\n        }\n    }\n}\nstd::vector<tensorflow::Tensor> finalOutput;\n\ntensorflow::Status run_status = this->tf_session->Run({{InputName,input_tensor}, \n                                                       {dropoutPlaceHolderName, keep_prob}},\n                                                      {OutputName},\n                                                      {},\n                                                      &finalOutput);\n
\n", "system": "" }, { "instruction": "can't open tensorboard 0.0.0.0:6006 or localhost:6006", "input": "", "output": "

Refer to tensorflow issue#9701.

\n\n

Run tensorboard --logdir=YOUR_LOG_DIR --host=127.0.0.1 in command prompt,\nand type localhost:6006 in chrome, this works for me (Win10, anaconda4.3.16, python3.5.3, tensorflow1.1.0).

\n", "system": "" }, { "instruction": "TensorFlow Inference", "input": "", "output": "

Alright, this took way too much time to figure out; so here is the answer for the rest of the world.

\n\n

Quick Reminder: I needed to persist a model that can be dynamically loaded and inferred against without knowledge of the underpinnings or internals of how it works.

\n\n

Step 1: Create a model as a Class and ideally use an interface definition

\n\n
class Vgg3Model:\n\n    NUM_DENSE_NEURONS = 50\n    DENSE_RESHAPE = 32 * (CONSTANTS.IMAGE_SHAPE[0] // 2) * (CONSTANTS.IMAGE_SHAPE[1] // 2)\n\n    def inference(self, images):\n        '''\n        Portion of the compute graph that takes an input and converts it into a Y output\n        '''\n        with tf.variable_scope('Conv1') as scope:\n            C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')\n            C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')\n            P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)\n        with tf.variable_scope('Dense1') as scope:\n            P_1 = tf.reshape(P_1, (-1, self.DENSE_RESHAPE))\n            dim = P_1.get_shape()[1].value\n            D_1 = ld.mlp_layer(P_1, dim, self.NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)\n        with tf.variable_scope('Dense2') as scope:\n            D_2 = ld.mlp_layer(D_1, self.NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)\n        H = tf.nn.softmax(D_2, name='prediction')\n        return H\n\n    def loss(self, logits, labels):\n        '''\n        Adds Loss to all variables\n        '''\n        cross_entr = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)\n        cross_entr = tf.reduce_mean(cross_entr)\n        tf.summary.scalar('cost', cross_entr)\n        tf.add_to_collection('losses', cross_entr)\n        return tf.add_n(tf.get_collection('losses'), name='total_loss')\n
\n\n

Step 2: Train your network with whatever inputs you want; in my case I used Queue Runners and TF Records. Note that this step is done by a different team which iterates, builds, designs and optimizes models. This can also change over time. The output they produce must be able to be pulled from a remote location so we can dynamically load the updated models on devices (reflashing hardware is a pain especially if it is geographically distributed). In this instance; the team drops the 3 files associated with a graph saver; but also a pickle of the model used for that training session

\n\n
model = vgg3.Vgg3Model()\n\ndef create_sess_ops():\n    '''\n    Creates and returns operations needed for running\n    a tensorflow training session\n    '''\n    GRAPH = tf.Graph()\n    with GRAPH.as_default():\n        examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,\n                                          batch_size=CONSTANTS.BATCH_SIZE,\n                                          img_shape=CONSTANTS.IMAGE_SHAPE,\n                                          num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)\n        examples = tf.reshape(examples, [-1, CONSTANTS.IMAGE_SHAPE[0],\n                                     CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]], name='infer/input')\n        logits = model.inference(examples)\n        loss = model.loss(logits, labels)\n        OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)\n        gradients = OPTIMIZER.compute_gradients(loss)\n        apply_gradient_op = OPTIMIZER.apply_gradients(gradients)\n        gradients_summary(gradients)\n        summaries_op = tf.summary.merge_all()\n        return [apply_gradient_op, summaries_op, loss, logits], GRAPH\n\ndef main():\n    '''\n    Run and Train CIFAR 10\n    '''\n    print('starting...')\n    ops, GRAPH = create_sess_ops()\n    total_duration = 0.0\n    with tf.Session(graph=GRAPH) as SESSION:\n        COORDINATOR = tf.train.Coordinator()\n        THREADS = tf.train.start_queue_runners(SESSION, COORDINATOR)\n        SESSION.run(tf.global_variables_initializer())\n        SUMMARY_WRITER = tf.summary.FileWriter('Tensorboard/' + CONSTANTS.MODEL_NAME, graph=GRAPH)\n        GRAPH_SAVER = tf.train.Saver()\n\n        for EPOCH in range(CONSTANTS.EPOCHS):\n            duration = 0\n            error = 0.0\n            start_time = time.time()\n            for batch in range(CONSTANTS.MINI_BATCHES):\n                _, summaries, cost_val, prediction = SESSION.run(ops)\n                error += cost_val\n            duration += time.time() - start_time\n 
           total_duration += duration\n            SUMMARY_WRITER.add_summary(summaries, EPOCH)\n            print('Epoch %d: loss = %.2f (%.3f sec)' % (EPOCH, error, duration))\n            if EPOCH == CONSTANTS.EPOCHS - 1 or error < 0.005:\n                print(\n                'Done training for %d epochs. (%.3f sec)' % (EPOCH, total_duration)\n            )\n                break\n        GRAPH_SAVER.save(SESSION, 'models/' + CONSTANTS.MODEL_NAME + '.model')\n        with open('models/' + CONSTANTS.MODEL_NAME + '.pkl', 'wb') as output:\n            pickle.dump(model, output)\n        COORDINATOR.request_stop()\n        COORDINATOR.join(THREADS)\n
\n\n

Step 3: Run some Inference. Load your pickled model; create a new graph by piping in the new placeholder to the logits; and then call session restore. DO NOT RESTORE THE WHOLE GRAPH; JUST THE VARIABLES.

\n\n
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'\nimgs_bsdir = 'C:/data/cifar_10/train/'\n\nimages = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))\nwith open('models/vgg3.pkl', 'rb') as model_in:\n    model = pickle.load(model_in)\nlogits = model.inference(images)\n\ndef run_inference():\n    '''Runs inference against a loaded model'''\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        new_saver = tf.train.Saver()\n        new_saver.restore(sess, MODEL_PATH)\n        print(\"Starting...\")\n        for i in range(20, 30):\n            print(str(i) + '.png')\n            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0\n            img = img.reshape(1, 32, 32, 3)\n            pred = sess.run(logits, feed_dict={images : img})\n            max_node = np.argmax(pred)\n            print('predicted label: ' + str(max_node))\n        print('done')\n\nrun_inference()\n
\n\n

There are definitely ways to improve on this using interfaces and maybe by packaging everything up better; but this is working and sets the stage for how we will be moving forward.

\n\n

FINAL NOTE When we finally pushed this to production, we ended up having to ship the stupid `mymodel_model.py` file down with everything to build up the graph. So we now enforce a naming convention for all models and there is also a coding standard for production model runs so we can do this properly.

\n\n
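That final note exists because pickle serializes only a reference to the model's class (its module and name), not the class's code, so the .py file must be importable wherever the pickle is loaded. A tiny demonstration with a stand-in class:

```python
import pickle

class TinyModel:
    # stand-in for Vgg3Model; pickle records the class's module and name,
    # not this class body
    NUM_DENSE_NEURONS = 50

blob = pickle.dumps(TinyModel())  # works: the class is importable here
restored = pickle.loads(blob)     # would fail if TinyModel were not importable
print(type(restored).__name__, restored.NUM_DENSE_NEURONS)  # TinyModel 50
```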

Good Luck!

\n", "system": "" }, { "instruction": "Keras - class_weight vs sample_weights in the fit_generator", "input": "", "output": "
\n

My understanding is that the class_weight regards the weights of all\n classes for the entire dataset whereas the sample_weights regards the\n weights of all classes for each individual chunk created by the\n generator. Is that correct? If not, can someone elaborate on the\n matter?

\n
\n\n

class_weight affects the relative weight of each class in the calculation of the objective function. sample_weights, as the name suggests, allows further control of the relative weight of samples that belong to the same class.

\n\n
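One way to see the relationship: a class_weight is equivalent to giving every sample of that class the same sample_weight, while sample_weight can additionally vary within a class. A small NumPy sketch (the labels and weights are made up for illustration):

```python
import numpy as np

y = np.array([0, 0, 0, 1])       # skewed batch: class 1 is rare
class_weight = {0: 1.0, 1: 3.0}  # up-weight the rare class

# the per-sample weights this class_weight implies:
sample_weight = np.array([class_weight[label] for label in y])
print(sample_weight)  # [1. 1. 1. 3.]

# sample_weight can also differ within a class, e.g. by measurement confidence:
sample_weight_custom = sample_weight * np.array([1.0, 0.5, 1.0, 1.0])
print(sample_weight_custom)
```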
\n

Is it necessary to give both the class_weight to the fit_generator and\n then the sample_weights as an output for each chunk? If yes, then why?\n If not then which one is better to give?

\n
\n\n

It depends on your application. Class weights are useful when training on highly skewed data sets; for example, a classifier to detect fraudulent transactions. Sample weights are useful when you don't have equal confidence in the samples in your batch. A common example is performing regression on measurements with variable uncertainty.

\n\n
\n

If I should give the sample_weights for each chunk, how do I map the\n weights if some of the classes are missing from a specific chunk? Let\n me give an example. In my overall dataset, I have 7 possible classes\n (labels). Because these classes are highly imbalanced, when I create\n smaller chunks of data as an output from the fit_generator, some of\n the classes are missing from the specific chunk. How should I create\n the sample_weights for these chunks?

\n
\n\n

This is not an issue. sample_weights is defined on a per-sample basis and is independent from the class. For this reason, the documentation states that (inputs, targets, sample_weights) should be the same length.

\n\n

The function _weighted_masked_objective in engine/training.py has an example of how sample_weights are applied.

\n", "system": "" }, { "instruction": "Installation of TensorFlow on windows 7 - 'pip3' is not recognized as an internal or external command,", "input": "", "output": "

Run the following

\n\n
python -m pip install --upgrade tensorflow\n
\n\n

Assuming python is working, TensorFlow should get installed (at least the \"Validate the installation\" step is green).

\n", "system": "" }, { "instruction": "Tensorflow OOM on GPU", "input": "", "output": "

I resolved this issue by reducing the batch size to 52.\nThe only way to reduce memory use is to reduce the batch_size.

\n\n
\n

The batch size depends on your GPU graphics card, the size of its VRAM, cache memory, etc.

\n
\n\n
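A rough back-of-envelope illustration of why this works (the numbers below are assumptions, not measurements): the memory held by activations scales linearly with the batch size, so halving batch_size roughly halves that component of GPU memory use:

```python
def activation_bytes(batch_size, floats_per_sample, bytes_per_float=4):
    # float32 activations take 4 bytes each
    return batch_size * floats_per_sample * bytes_per_float

# hypothetical network with ~1M activation values per sample
print(activation_bytes(104, 10**6) / 1e9)  # 0.416 GB
print(activation_bytes(52, 10**6) / 1e9)   # 0.208 GB, half as much
```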

Please also refer to this other Stack Overflow link.

\n", "system": "" }, { "instruction": "How to extract bias weights in Keras sequential model?", "input": "", "output": "

get_weights() for a Dense layer returns a list of two elements, the first element contains the weights, and the second element contains the biases. So you can simply do:

\n\n
weights = model.layers[0].get_weights()[0]\nbiases = model.layers[0].get_weights()[1]\n
\n\n

Note that weights and biases are already numpy arrays.

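For intuition, the shapes returned mirror the underlying affine transform y = xW + b of a Dense layer; a pure-NumPy sketch with 3 inputs and 2 units (sizes chosen arbitrarily for the demo):

```python
import numpy as np

input_dim, units = 3, 2
weights = np.zeros((input_dim, units))  # what get_weights()[0] would hold
biases = np.zeros(units)                # what get_weights()[1] would hold

x = np.ones((1, input_dim))
y = x @ weights + biases
print(weights.shape, biases.shape, y.shape)  # (3, 2) (2,) (1, 2)
```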
\n", "system": "" }, { "instruction": "TensorFlow - Pad unknown size tensor to a specific size?", "input": "", "output": "

Yes. There is. Provided you do not need to change the rank of the tensor, it's very simple.

\n

tf.pad() accepts regular python lists with tensors. The format of the padding is a list of pairs of how much to pad on each side of that dimension.

\n

e.g.

\n
t = tf.constant([[1, 2], [3, 4]])\npaddings = [[0, 0], [0, 4-tf.shape(t)[0]]]\nout = tf.pad(t, paddings, 'CONSTANT', constant_values=-1)\nsess.run(out)\n# gives: \n# array([[ 1,  2, -1, -1],\n#       [ 3,  4, -1, -1]], dtype=int32)\n
\n
\n

If you want to generalise this to a useful function, you could do something like:

\n
def pad_up_to(t, max_in_dims, constant_values):\n    diff = max_in_dims - tf.shape(t)\n    paddings = tf.pad(diff[:, None], [[0, 0], [1, 0]])\n    return tf.pad(t, paddings, 'CONSTANT', constant_values=constant_values)\n# (note: see edits for the solution referred to by other answers on this question)\n
\n

where max_in_dims is essentially the desired shape of the output. Note: this function will fail if you provide a shape that is strictly smaller than t in any dimension.

\n

You can use it like:

\n
t = tf.constant([[1, 2], [3, 4]]) # shape = [2, 2]\nt_padded = pad_up_to(t, [2, 4], -1) # shape = [2, 4], padded with -1s\n
\n

or

\n
t = tf.placeholder(tf.float32, [None, None]) # shape = [?, ?]\nt_padded = pad_up_to(t, [5,5], -1) # shape = [5, 5], padded with -1s\nt_np = np.random.uniform(0, 1, [3,4]) # shape = [3,4], no padding\nt_padded_out = sess.run(t_padded, {t: t_np})\nt_np2 = np.random.uniform(0, 1, [2,1]) # shape = [2,1], no padding\nt_padded_out2 = sess.run(t_padded, {t: t_np2})\n
\n

Although the dimension sizes are calculated dynamically, the number of dimensions is not, so make sure that max_in_dims has the same number of elements as t.shape.

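The same padding logic can be checked in plain NumPy, which uses the same list-of-(before, after)-pairs format for paddings:

```python
import numpy as np

def pad_up_to_np(a, max_in_dims, constant_value):
    # one (before, after) pair per dimension, padding only at the end
    paddings = [(0, m - s) for s, m in zip(a.shape, max_in_dims)]
    return np.pad(a, paddings, mode='constant', constant_values=constant_value)

t = np.array([[1, 2], [3, 4]])
print(pad_up_to_np(t, [2, 4], -1))
# [[ 1  2 -1 -1]
#  [ 3  4 -1 -1]]
```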
\n", "system": "" }, { "instruction": "Distributed tensorflow: the difference between In-graph replication and Between-graph replication", "input": "", "output": "

First of all, for some historical context, \"in-graph replication\" is the first approach that we tried in TensorFlow, and it did not achieve the performance that many users required, so the more complicated \"between-graph\" approach is the current recommended way to perform distributed training. Higher-level libraries such as tf.learn use the \"between-graph\" approach for distributed training.

\n\n

To answer your specific questions:

\n\n
    \n
  1. \n

    Does this mean there are multiple tf.Graphs in the between-graph\n replication approach? If yes, where are the corresponding codes in the provided examples?

    \n
    \n\n

    Yes. The typical between-graph replication setup will use a separate TensorFlow process for each worker replica, and each of these will build a separate tf.Graph for the model. Usually each process uses the global default graph (accessible through tf.get_default_graph()) and it is not created explicitly.

    \n\n

    (In principle, you could use a single TensorFlow process with the same tf.Graph and multiple tf.Session objects that share the same underlying graph, as long as you configured the tf.ConfigProto.device_filters option for each session differently, but this is an uncommon setup.)

  2. \n
  3. \n

    While there is already a between-graph replication example in above link, could anyone provide an in-graph replication implementation (pseudocode is fine) and highlight its main differences from between-graph replication?

    \n
    \n\n

    For historical reasons, there are not many examples of in-graph replication (Yaroslav's gist is one exception). A program using in-graph replication will typically include a loop that creates the same graph structure for each worker (e.g. the loop on line 74 of the gist), and use variable sharing between the workers.

    \n\n

    The one place where in-graph replication persists is for using multiple devices in a single process (e.g. multiple GPUs). The CIFAR-10 example model for multiple GPUs is an example of this pattern (see the loop over GPU devices here).

  4. \n
\n\n

(In my opinion, the inconsistency between how multiple workers and multiple devices in a single worker are treated is unfortunate. In-graph replication is simpler to understand than between-graph replication, because it doesn't rely on implicit sharing between the replicas. Higher-level libraries, such as tf.learn and TF-Slim, hide some of these issues, and offer hope that we can offer a better replication scheme in the future.)

\n\n
    \n
  1. \n

    Why do we say each client builds a similar graph, but not the same graph?

    \n
    \n\n

    Because they aren't required to be identical (and there is no integrity check that enforces this). In particular, each worker might create a graph with different explicit device assignments (\"/job:worker/task:0\", \"/job:worker/task:1\", etc.). The chief worker might create additional operations that are not created on (or used by) the non-chief workers. However, in most cases, the graphs are logically (i.e. modulo device assignments) the same.

    \n\n
    \n

    Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers?

    \n
    \n\n

    Typically, each worker has a separate graph that contains a single copy of the compute-intensive part of the model. The graph for worker i does not contain the nodes for worker j (assuming i ≠ j). (An exception would be the case where you're using between-graph replication for distributed training, and in-graph replication for using multiple GPUs in each worker. In that case, the graph for a worker would typically contain N copies of the compute-intensive part of the graph, where N is the number of GPUs in that worker.)

  2. \n
  3. \n

    Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs?

    \n
    \n\n

    The example code only covers training on multiple machines, and says nothing about how to train on multiple GPUs in each machine. However, the techniques compose easily. In this part of the example:

    \n\n
    # Build model...\nloss = ...\n
    \n\n

    ...you could add a loop over the GPUs in the local machine to achieve distributed training with multiple workers, each with multiple GPUs.

  4. \n
\n", "system": "" }, { "instruction": "TensorFlow: how to log GPU memory (VRAM) utilization?", "input": "", "output": "

Update: you can use TensorFlow ops to query the allocator:

\n\n
# maximum across all sessions and .run calls so far\nsess.run(tf.contrib.memory_stats.MaxBytesInUse())\n# current usage\nsess.run(tf.contrib.memory_stats.BytesInUse())\n
\n\n

You can also get detailed information about a session.run call, including all memory allocations made during the run, by looking at RunMetadata, i.e. something like this:

\n\n
run_metadata = tf.RunMetadata()\nsess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata)\n
\n\n

Here's an end-to-end example -- take column vector, row vector and add them to get a matrix of additions:

\n\n
import tensorflow as tf\n\nno_opt = tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0,\n                             do_common_subexpression_elimination=False,\n                             do_function_inlining=False,\n                             do_constant_folding=False)\nconfig = tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=no_opt),\n                        log_device_placement=True, allow_soft_placement=False,\n                        device_count={\"CPU\": 3},\n                        inter_op_parallelism_threads=3,\n                        intra_op_parallelism_threads=1)\nsess = tf.Session(config=config)\n\nwith tf.device(\"cpu:0\"):\n    a = tf.ones((13, 1))\nwith tf.device(\"cpu:1\"):\n    b = tf.ones((1, 13))\nwith tf.device(\"cpu:2\"):\n    c = a+b\n\nsess = tf.Session(config=config)\nrun_metadata = tf.RunMetadata()\nsess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata)\nwith open(\"/tmp/run2.txt\", \"w\") as out:\n  out.write(str(run_metadata))\n
\n\n

If you open run2.txt you'll see messages like this:

\n\n
  node_name: \"ones\"\n\n      allocation_description {\n        requested_bytes: 52\n        allocator_name: \"cpu\"\n        ptr: 4322108320\n      }\n  ....\n\n  node_name: \"ones_1\"\n\n      allocation_description {\n        requested_bytes: 52\n        allocator_name: \"cpu\"\n        ptr: 4322092992\n      }\n  ...\n  node_name: \"add\"\n      allocation_description {\n        requested_bytes: 676\n        allocator_name: \"cpu\"\n        ptr: 4492163840\n
\n\n

So here you can see that a and b allocated 52 bytes each (13*4), and the result allocated 676 bytes.

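The byte counts in the trace check out against the shapes in the script (float32 values are 4 bytes each):

```python
bytes_per_float32 = 4
print(13 * 1 * bytes_per_float32)   # 52: each of the (13, 1) and (1, 13) inputs
print(13 * 13 * bytes_per_float32)  # 676: the broadcasted (13, 13) result
```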
\n", "system": "" }, { "instruction": "tf.SequenceExample with multidimensional arrays", "input": "", "output": "

I had the same problem. I think that it is entirely solvable, but you have to decide on the output format, and then figure out how you're going to use it.

\n\n

First what is your error?

\n\n

The error message is telling you that what you are trying to read doesn't fit into the feature size that you specified. So where did you specify it? Right here:

\n\n
sequence_features = {\n    \"input_characters\": tf.FixedLenSequenceFeature([], dtype=tf.int64),\n    \"output_characters\": tf.FixedLenSequenceFeature([], dtype=tf.int64)\n}\n
\n\n

This says \"my input_characters is a sequence of single values\", but this is not true; what you have is a sequence of sequences of single values and hence an error.

\n\n

Second what can you do?

\n\n

If you instead use:

\n\n
a = [[1,2,3], [2,3,1], [3,2,1]] \nsequence_features = {\n    \"input_characters\": tf.FixedLenSequenceFeature([3], dtype=tf.int64),\n    \"output_characters\": tf.FixedLenSequenceFeature([3], dtype=tf.int64)\n}\n
\n\n

You will not have an error with your code because you have specified that each element of the top level sequence is 3 elements long.

\n\n

Alternatively, if you do not have fixed length sequences, then you're going to have to use a different type of feature.

\n\n
sequence_features = {\n    \"input_characters\": tf.VarLenFeature(tf.int64),\n    \"output_characters\": tf.VarLenFeature(tf.int64)\n}\n
\n\n

The VarLenFeature tells it that the length is unknown before reading. Unfortunately this means that your input_characters can no longer be read as a dense vector in one step. Instead, it will be a SparseTensor by default. You can turn this into a dense tensor with tf.sparse_tensor_to_dense eg:

\n\n
input_densified = tf.sparse_tensor_to_dense(sequence_parsed['input_characters'])\n
\n\n

As mentioned in the article that you've been looking at, if your data does not always have the same length you will have to have a \"not_really_a_word\" word in your vocabulary, which you use as the default index. e.g. let's say you have index 0 mapping to the \"not_really_a_word\" word, then using your

\n\n
a = [[1,2,3],  [2,3],  [3,2,1]]\n
\n\n

python list will end up being a

\n\n
array((1,2,3),  (2,3,0),  (3,2,1))\n
\n\n

tensor.

\n\n
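The padding described above, sketched in plain Python (with index 0 standing in for the "not_really_a_word" entry):

```python
def pad_sequences(seqs, pad_id=0):
    # pad every sequence with pad_id up to the length of the longest one
    max_len = max(len(s) for s in seqs)
    return [s + [pad_id] * (max_len - len(s)) for s in seqs]

a = [[1, 2, 3], [2, 3], [3, 2, 1]]
print(pad_sequences(a))  # [[1, 2, 3], [2, 3, 0], [3, 2, 1]]
```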

Be warned; I'm not certain that back-propagation \"just works\" for SparseTensors, like it does for dense tensors. The wildml article talks about padding 0s per sequence masking the loss for the \"not_actually_a_word\" word (see: \"SIDE NOTE: BE CAREFUL WITH 0\u2019S IN YOUR VOCABULARY/CLASSES\" in their article). This seems to suggest that the first method will be easier to implement.

\n\n

Note that this is different to the case described here where each example is a sequence of sequences. To my understanding, the reason this kind of method is not well supported is because it is an abuse of the case that this is meant to support; loading fixed-size embeddings directly.

\n\n
\n\n

I will assume that the very next thing you want to do is to turn those numbers into word embeddings. You can turn a list of indices into a list of embeddings with tf.nn.embedding_lookup

\n", "system": "" }, { "instruction": "TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape'", "input": "", "output": "

UPDATE: Since TensorFlow 1.0, tf.Tensor now has a tf.Tensor.shape property, which returns the same value as tf.Tensor.get_shape().

\n\n
\n\n

Indeed, in versions prior to TensorFlow 1.0 tf.Tensor doesn't have a .shape property. You should use the Tensor.get_shape() method instead:

\n\n
train_data = tf.reshape(train_data, [400, 1])\nprint \"train_data.shape: \" + str(train_data.get_shape())\n
\n\n

Note that in general you might not be able to get the actual shape of the result of a TensorFlow operation. In some cases, the shape will be a computed value that depends on running the computation to find its value; and it may even vary from one run to the next (e.g. the shape of tf.unique()). In that case, the result of get_shape() for some dimensions may be None (or \"?\").

\n", "system": "" }, { "instruction": "Limit Tensorflow CPU and Memory usage", "input": "", "output": "

This will create a session that runs one op at a time, and only one thread per op

\n\n
sess = tf.Session(config=\n    tf.ConfigProto(inter_op_parallelism_threads=1,\n                   intra_op_parallelism_threads=1))\n
\n\n

I'm not sure about limiting memory; it seems to be allocated on demand. I've had TensorFlow freeze my machine when my network wanted 100 GB of RAM, so my solution was to make networks that need less RAM.

\n", "system": "" }, { "instruction": "speed benchmark for testing tensorflow install", "input": "", "output": "

Try tensorflow/tensorflow/models/image/mnist/convolutional.py, that'll print per-step timing.

\n\n

On a Tesla K40c that should take about 16 ms per step, while CPU-only on my 3-year-old machine takes about 120 ms.

\n\n
\n\n

Edit: This moved to the models repositories: https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py.

\n\n

The convolutional.py file is now at models/tutorials/image/mnist/convolutional.py

\n", "system": "" }, { "instruction": "Dynamic size for tf.zeros() (for use with placeholders with None dimensions)", "input": "", "output": "

The recommended way to make a zero tensor with the same shape as another tensor is to use the tf.zeros_like() op:

\n\n
x = tf.placeholder(tf.float32, shape=[None, 4])\ny = tf.zeros_like(x)\n
\n\n

The resulting tensor y appears to have the shape [None, None] according to Tensor.get_shape(), but at runtime it will expand to the same shape as x:

\n\n
print y.get_shape()\n# ==> TensorShape([Dimension(None), Dimension(None)])\n\nsess = tf.Session()\ny_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)})\n\nprint y_result.shape\n# ==> (4, 4)\n
\n\n

The [None, None] static shape is returned because shape inference hasn't been specialized for tf.zeros_like(). I've filed a GitHub issue for that and it should be fixed soon.

\n\n
\n\n

EDIT: In your comment, you asked how to deal with the case where the zero tensor had a shape based on, but different from the original tensor. This is also possible, using tf.shape() and tf.stack() to build the dimensions, and tf.fill() to produce the zero tensor:

\n\n
x = tf.placeholder(tf.float32, shape=[None, 4])\n\n# Use tf.shape() to get the runtime size of `x` in the 0th dimension.\nzeros_dims = tf.stack([tf.shape(x)[0], 7])\n\ny = tf.fill(zeros_dims, 0.0)\n\nsess = tf.Session()\ny_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)})\nprint y_result.shape\n# ==> (4, 7)\n
\n", "system": "" }, { "instruction": "Extract features using pre-trained (Tensorflow) CNN", "input": "", "output": "

The TensorFlow team recently released a deep CNN trained on the ImageNet dataset. You can download the script that fetches the data (including the model graph and the trained weights) from here. The associated Image Recognition tutorial has more details about the model.

\n\n

While the current model isn't specifically packaged to be used in a subsequent training step, you could explore modifying the script to reuse parts of the model and the trained weights in your own network.

\n", "system": "" }, { "instruction": "TensorFlow random_shuffle_queue is closed and has insufficient elements", "input": "", "output": "

I had a similar problem. Digging around the web, I found that if you use a num_epochs argument, you have to initialize all the local variables as well, so your code should end up looking like:

\n\n
with tf.Session() as sess:\n    sess.run(tf.local_variables_initializer())\n    sess.run(tf.global_variables_initializer())\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(coord=coord)\n\n    # do your stuff here\n\n    coord.request_stop()\n    coord.join(threads)\n
\n\n

If you post some more code, maybe I could take a deeper look into it. In the meantime, HTH.

\n", "system": "" }, { "instruction": "How can I change the shape of a variable in TensorFlow?", "input": "", "output": "

The tf.Variable class is the recommended way to create variables, but it restricts your ability to change the shape of the variable once it has been created.

\n\n

If you need to change the shape of a variable, you can do the following (e.g. for a 32-bit floating point tensor):

\n\n
var = tf.Variable(tf.placeholder(tf.float32))\n# ...\nnew_value = ...  # Tensor or numpy array.\nchange_shape_op = tf.assign(var, new_value, validate_shape=False)\n# ...\nsess.run(change_shape_op)  # Changes the shape of `var` to new_value's shape.\n
\n\n

Note that this feature is not in the documented public API, so it is subject to change. If you do find yourself needing to use this feature, let us know, and we can investigate a way to support it moving forward.

\n", "system": "" }, { "instruction": "The minimum required Cuda capability is 3.5", "input": "", "output": "

There is a section in the official installation page that guides you through enabling CUDA compute capability 3.0, but you need to build TensorFlow from source.

\n\n
$ TF_UNOFFICIAL_SETTING=1 ./configure\n\n# Same as the official settings above\n\nWARNING: You are configuring unofficial settings in TensorFlow. Because some\nexternal libraries are not backward compatible, these settings are largely\nuntested and unsupported.\n\nPlease specify a list of comma-separated Cuda compute capabilities you want to\nbuild with. You can find the compute capability of your device at:\nhttps://developer.nvidia.com/cuda-gpus.\nPlease note that each additional compute capability significantly increases\nyour build time and binary size. [Default is: \"3.5,5.2\"]: 3.0\n\nSetting up Cuda include\nSetting up Cuda lib64\nSetting up Cuda bin\nSetting up Cuda nvvm\nConfiguration finished\n
\n", "system": "" }, { "instruction": "How to install Python 3.8 along with Python 3.9 in Arch Linux?", "input": "", "output": "

Go for the package python38 in the AUR; if you have an AUR helper like yay, just use yay -S python38. Otherwise, download the PKGBUILD and install it manually with makepkg.

\n

You can also update Python with pacman -Syu (which now installs Python 3.9). The two versions will then live together, as /usr/bin/python3.x.

\n

Use a virtual environment to manage them if you like: virtualenv --python=/usr/bin/python3.x yourenvname.

\n", "system": "" }, { "instruction": "Use GPU with opencv-python", "input": "", "output": "

The problem here is that the version of opencv distributed for your system (Windows in this case) was not compiled with CUDA support. Therefore, you cannot use any cuda-related function with this build.

\n

If you want an opencv build with cuda support, you will have to either compile it yourself (which may be tedious on Windows) or find a prebuilt one somewhere. In case you want to go for the first solution, here is a link that may help you with the process: https://programming.vip/docs/compile-opencv-with-cuda-support-on-windows-10.html. Keep in mind that this will require you to install a bunch of SDKs in the process.

\n", "system": "" }, { "instruction": "using cuDNN kernel for LSTM", "input": "", "output": "

I ran into the same problem and fixed it by manually setting the options to use the cuDNN-compatible implementation as specified here.

\n

"Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation."

\n

The requirements to use the cuDNN implementation are:

\n
    \n
  1. activation == tanh
  2. recurrent_activation == sigmoid
  3. recurrent_dropout == 0
  4. unroll is False
  5. use_bias is True
  6. Inputs, if use masking, are strictly right-padded.
  7. Eager execution is enabled in the outermost context.
\n

In particular, I had to specify recurrent_activation == sigmoid. The version of Keras/TF I installed had a default of recurrent_activation == hard_sigmoid.

\n", "system": "" }, { "instruction": "Deploy python app to Heroku "Slug Size too large"", "input": "", "output": "

I have already answered this here.

\n

Turns out the Tensorflow 2.0 module is very large (more than 500MB, the limit for Heroku) because of its GPU support. Since Heroku doesn't support GPU, it doesn't make sense to install the module with GPU support.

\n

Solution:

\n

Simply replace tensorflow with tensorflow-cpu in your requirements.

\n

This worked for me, hope it works for you too!

\n", "system": "" }, { "instruction": "from_logits=True and from_logits=False get different training result for tf.losses.CategoricalCrossentropy for UNet", "input": "", "output": "

Pushing the \"softmax\" activation into the cross-entropy loss layer significantly simplifies the loss computation and makes it more numerically stable.
\nIt might be the case that in your example the numerical issues are significant enough to render the training process ineffective for the from_logits=False option.

\n\n
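To see why the fused form is more stable, here is a small pure-Python illustration (not the TF implementation) contrasting the two paths for extreme logits:

```python
import math

def softmax(logits):
    m = max(logits)                      # shift by the max for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def xent_from_probs(probs, target):
    # "from_logits=False" path: log(p) blows up when p underflows to 0
    return -math.log(probs[target])

def xent_from_logits(logits, target):
    # "from_logits=True" path: -z_t + logsumexp(z), computed stably
    m = max(logits)
    return m + math.log(sum(math.exp(z - m) for z in logits)) - logits[target]

logits = [1000.0, 0.0]
probs = softmax(logits)  # probs[1] underflows to exactly 0.0
# xent_from_probs(probs, 1) raises ValueError (log of 0),
# while xent_from_logits(logits, 1) returns 1000.0 exactly.
```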

You can find a derivation of the cross entropy loss (a special case of \"info gain\" loss) in this post. This derivation illustrates the numerical issues that are averted when combining softmax with cross entropy loss.

\n", "system": "" }, { "instruction": "How to fix \u2018RuntimeError: The Session graph is empty. Add operations to the graph before calling run().\u201d", "input": "", "output": "

TF 2.0 supports eager execution which means you don't have to explicitly create a session and run the code in it. So the simplest solution would be:

\n\n
import tensorflow as tf\nprint(tf.__version__)\n\n# Build a dataflow graph.\nc = tf.constant([[1.0, 2.0], [3.0, 4.0]])\nd = tf.constant([[1.0, 1.0], [0.0, 1.0]])\ne = tf.matmul(c, d)\n\nprint(e)\n
\n\n

which outputs

\n\n
2.0.0-beta1\ntf.Tensor(\n[[1. 3.]\n [3. 7.]], shape=(2, 2), dtype=float32)\n
\n\n

But you can use the session if you want:

\n\n
import tensorflow as tf\nprint(tf.__version__)\n\n# Construct a `Session` to execute the graph.\nwith tf.compat.v1.Session() as sess:\n\n  # Build a dataflow graph.\n  c = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n  d = tf.constant([[1.0, 1.0], [0.0, 1.0]])\n  e = tf.matmul(c, d)\n\n  # Execute the graph and store the value that `e` represents in `result`.\n  result = sess.run(e)\n  print(result)\n
\n\n

which gives

\n\n
2.0.0-beta1\n[[1. 3.]\n [3. 7.]]\n
\n", "system": "" }, { "instruction": "What has to be inside tf.distribute.Strategy.scope()?", "input": "", "output": "

According to my experiments, the only thing that needs to be declared inside the scope is model creation. If you use Keras .fit() instead of a custom training loop, then model.compile() has to be inside as well.

\n\n

You can do something like this:

\n\n
def create_model():\n    \"\"\" This can be outside of the scope\n    \"\"\"\n    ...\n    return model\n\nwith strategy.scope():\n    model = create_model()\n
\n\n

If you use tf.train.Checkpoint then make sure both its instantiation and the call of checkpoint.resume() are inside the scope.

\n", "system": "" }, { "instruction": "expected ndim=3, found ndim=2", "input": "", "output": "

The LSTM layer expects inputs with shape (batch_size, timesteps, input_dim). In Keras you pass (timesteps, input_dim) as the input_shape argument, but you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra dimension of size 1 for time. To do this you have to reshape the input dataset (X_train) and the targets (Y). However, this may be problematic, because the time resolution is then 1 and you are feeding length-one sequences; with length-one sequences as input, using an LSTM does not seem like the right choice.

\n
x_train = x_train.reshape(-1, 1, 9)\nx_test  = x_test.reshape(-1, 1, 9)\ny_train = y_train.reshape(-1, 1, 5)\ny_test = y_test.reshape(-1, 1, 5)\n\nmodel = Sequential()\nmodel.add(LSTM(100, input_shape=(1, 9), return_sequences=True))\nmodel.add(LSTM(5, input_shape=(1, 9), return_sequences=True))\nmodel.compile(loss="mean_absolute_error", optimizer="adam", metrics= ['accuracy'])\n\nhistory = model.fit(X_train,y_train,epochs=100, validation_data=(X_test,y_test))\n
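A pure-Python sketch (illustrative, no numpy) of what the reshape does to each sample: every 9-feature row becomes a sequence of one timestep:

```python
# Equivalent in spirit to x.reshape(-1, 1, 9): wrap each row in a
# length-1 list to add the time dimension.
def add_time_dim(rows):
    return [[row] for row in rows]

x = [[0.0] * 9, [1.0] * 9]       # "shape" (2, 9)
x3d = add_time_dim(x)            # "shape" (2, 1, 9)
```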
\n", "system": "" }, { "instruction": "Is there a way to stack two tensorflow datasets?", "input": "", "output": "

The tf.data.Dataset.concatenate() method is the closest analog of tf.stack() when working with datasets. If you have two datasets with the same structure (i.e. same types for each component, but possibly different shapes):

\n
dataset_1 = tf.data.Dataset.range(10, 20)\ndataset_2 = tf.data.Dataset.range(60, 70)\n
\n

then you can concatenate them as follows:

\n
combined_dataset = dataset_1.concatenate(dataset_2)\n
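Conceptually, concatenation behaves like chaining two Python iterables end to end; a minimal sketch:

```python
# Chaining two ranges mirrors what Dataset.concatenate does with the
# two example datasets above.
from itertools import chain

combined = list(chain(range(10, 20), range(60, 70)))
print(combined)
# [10, 11, ..., 19, 60, 61, ..., 69]
```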
\n", "system": "" }, { "instruction": "How to install TensorFlow-gpu with cuda8.0?", "input": "", "output": "

You'll need to install the version 1.4.1 for CUDA-8 as

\n\n
pip install tensorflow-gpu==1.4.1\n
\n\n

The latest (version 1.5) is for CUDA-9

\n", "system": "" }, { "instruction": "Why training speed does not scale with the batch size?", "input": "", "output": "

It's often wrongly claimed that batch learning is as fast as or faster than on-line training. In fact, batch learning changes the weights only once, after the complete set of data (the batch) has been presented to the network. Therefore, the weight update frequency is rather low. This explains why the processing speed behaves the way you observed.

\n\n

To get a further understanding for the training techniques, have a look at the 2003 paper The general inefficiency of batch training for gradient descent learning. It deals with the comparison of batch and on-line learning.

\n\n

Edit:

\n\n

Regarding your comment:

\n\n

I don't think there happens a model or data parallelization on one single GPU. The GPU parallelizes the vector and matrix operations that are involved in the training algorithm, but the batch learning algorithm is still computed as follows:

\n\n
loop maxEpochs times\n  for each training item\n    compute weights and bias deltas for curr item\n    accumulate the deltas\n  end for\n  adjust weights and bias deltas using accumulated deltas\nend loop\n
\n\n

As you can see, although the weight adjustment is only applied once for the whole batch, the weight and bias deltas still have to be computed for every element in the batch. Therefore there is IMHO no large performance advantage of the batch learning algorithm compared to the on-line learning.
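The pseudocode above can be made concrete with a small runnable sketch (illustrative, plain Python rather than TF, fitting a single weight by gradient descent):

```python
# Batch learning for y = w * x: one gradient per example, but only one
# weight update per epoch.
def batch_train(data, epochs, lr=0.1):
    w, updates = 0.0, 0
    for _ in range(epochs):
        delta = 0.0
        for x, y in data:              # compute delta for every item
            delta += (y - w * x) * x   # accumulate the deltas
        w += lr * delta / len(data)    # adjust weight once per batch
        updates += 1
    return w, updates

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, updates = batch_train(data, epochs=200)
# w converges to ~2.0 with only one weight update per epoch
```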

\n", "system": "" }, { "instruction": "Early stopping with tf.estimator, how?", "input": "", "output": "

Good news! tf.estimator now has early stopping support on master and it looks like it will be in 1.10.

\n\n
estimator = tf.estimator.Estimator(model_fn, model_dir)\n\nos.makedirs(estimator.eval_dir())  # TODO This should not be expected IMO.\n\nearly_stopping = tf.contrib.estimator.stop_if_no_decrease_hook(\n    estimator,\n    metric_name='loss',\n    max_steps_without_decrease=1000,\n    min_steps=100)\n\ntf.estimator.train_and_evaluate(\n    estimator,\n    train_spec=tf.estimator.TrainSpec(train_input_fn, hooks=[early_stopping]),\n    eval_spec=tf.estimator.EvalSpec(eval_input_fn))\n
\n", "system": "" }, { "instruction": "Attention Layer throwing TypeError: Permute layer does not support masking in Keras", "input": "", "output": "

I am one of the authors of the package.

\n

You should use the latest version.\nThe previous versions had some conceptual problems.

\n", "system": "" }, { "instruction": "How to optimize for inference a simple, saved TensorFlow 1.0.1 graph?", "input": "", "output": "

Here is the detailed guide on how to optimize for inference:

\n\n

The optimize_for_inference module takes a frozen binary GraphDef file as input and outputs an optimized GraphDef file which you can use for inference. To get the frozen binary GraphDef file you need to use the module freeze_graph, which takes a GraphDef proto, a SaverDef proto and a set of variables stored in a checkpoint file. The steps to achieve that are given below:

\n\n

1. Saving tensorflow graph

\n\n
# make and save a simple graph\nG = tf.Graph()\nwith G.as_default():\n    x = tf.placeholder(dtype=tf.float32, shape=(), name=\"x\")\n    a = tf.Variable(5.0, name=\"a\")\n    y = tf.add(a, x, name=\"y\")\n    saver = tf.train.Saver()\n\nwith tf.Session(graph=G) as sess:\n    sess.run(tf.global_variables_initializer())\n    out = sess.run(fetches=[y], feed_dict={x: 1.0})\n\n    # Save GraphDef\n    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')\n    # Save checkpoint\n    saver.save(sess=sess, save_path=\"test_model\")\n
\n\n

2. Freeze graph

\n\n
python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y\n
\n\n

3. Optimize for inference

\n\n
python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y\n
\n\n

4. Using Optimized graph

\n\n
with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:\n   graph_def_optimized = tf.GraphDef()\n   graph_def_optimized.ParseFromString(f.read())\n\nG = tf.Graph()\n\nwith tf.Session(graph=G) as sess:\n    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])\n    print('Operations in Optimized Graph:')\n    print([op.name for op in G.get_operations()])\n    x = G.get_tensor_by_name('import/x:0')\n    out = sess.run(y, feed_dict={x: 1.0})\n    print(out)\n\n#Output\n#Operations in Optimized Graph:\n#['import/x', 'import/a', 'import/y']\n#6.0\n
\n\n

5. For multiple output names

\n\n

If there are multiple output nodes, then specify output_node_names = 'boxes, scores, classes' and import the graph by:

\n\n
 boxes,scores,classes, = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])\n
\n", "system": "" }, { "instruction": "Loading folders of images in tensorflow", "input": "", "output": "

The tf.data API (tensorflow 1.4 onwards) is great for things like this. The pipeline will looks something like the following:

\n\n\n\n

There are a number of ways of creating your initial dataset (see here for a more in depth answer)

\n\n

TFRecords with Tensorflow Datasets

\n\n

Supporting tensorflow version 1.12 onwards, Tensorflow datasets provides a relatively straight-forward API for creating tfrecord datasets, and also handles data downloading, sharding, statistics generation and other functionality automatically.

\n\n

See e.g. this image classification dataset implementation. There's a lot of bookkeeping stuff in there (download urls, citations etc), but the technical part boils down to specifying features and writing a _generate_examples function.

\n\n
features = tfds.features.FeaturesDict({\n            \"image\": tfds.features.Image(shape=(_TILES_SIZE,) * 2 + (3,)),\n            \"label\": tfds.features.ClassLabel(\n                names=_CLASS_NAMES),\n            \"filename\": tfds.features.Text(),\n        })\n\n...\n\ndef _generate_examples(self, root_dir):\n  root_dir = os.path.join(root_dir, _TILES_SUBDIR)\n  for i, class_name in enumerate(_CLASS_NAMES):\n    class_dir = os.path.join(root_dir, _class_subdir(i, class_name))\n    fns = tf.io.gfile.listdir(class_dir)\n\n    for fn in sorted(fns):\n      image = _load_tif(os.path.join(class_dir, fn))\n      yield {\n          \"image\": image,\n          \"label\": class_name,\n          \"filename\": fn,\n      }\n
\n\n
\n\n

You can also generate the tfrecords using lower level operations.

\n\n

Load images via tf.data.Dataset.map and tf.py_func(tion)

\n\n

Alternatively you can load the image files from filenames inside tf.data.Dataset.map as below.

\n\n
image_paths, labels = load_base_data(...)\nepoch_size = len(image_paths)\nimage_paths = tf.convert_to_tensor(image_paths, dtype=tf.string)\nlabels = tf.convert_to_tensor(labels)\n\ndataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))\n\nif mode == 'train':\n    dataset = dataset.repeat().shuffle(epoch_size)\n\n\ndef map_fn(path, label):\n    # path/label represent values for a single example\n    image = tf.image.decode_jpeg(tf.read_file(path))\n\n    # some mapping to constant size - be careful with distorting aspect ratios\n    image = tf.image.resize_images(image, out_shape)\n    # color normalization - just an example\n    image = tf.to_float(image) * (2. / 255) - 1\n    return image, label\n\n\n# num_parallel_calls > 1 induces intra-batch shuffling\ndataset = dataset.map(map_fn, num_parallel_calls=8)\ndataset = dataset.batch(batch_size)\n# try one of the following\ndataset = dataset.prefetch(1)\n# dataset = dataset.apply(\n#            tf.contrib.data.prefetch_to_device('/gpu:0'))\n\nimages, labels = dataset.make_one_shot_iterator().get_next()\n
\n\n

I've never worked in a distributed environment, but I've never noticed a performance hit from using this approach over tfrecords. If you need more custom loading functions, also check out tf.py_func.

\n\n

More general information here, and notes on performance here

\n", "system": "" }, { "instruction": "Unable to Install Tensorflow (MemoryError)", "input": "", "output": "

Try installing without caching: pip install --no-cache-dir tensorflow.

\n", "system": "" }, { "instruction": "Scalable, Efficient Hierarchical Softmax in Tensorflow?", "input": "", "output": "

You mention that you want GPU-class performance:

\n\n
\n

but now keeps everything on the CPU and slows things down quite a bit

\n
\n\n

and wish to use 300-unit hidden size and 10M-word dictionaries.

\n\n

This means that (assuming float32), you'll need 4 * 300 * 10M * 2 bytes = 24 GB just to store the parameters and the gradient for the output layer.

\n\n
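The arithmetic can be checked directly (a trivial sketch of the numbers quoted above):

```python
# float32 parameters plus their gradients for a 300 x 10M output layer.
bytes_per_float = 4
hidden_size = 300
vocab_size = 10_000_000
copies = 2                      # parameters and gradients
total_bytes = bytes_per_float * hidden_size * vocab_size * copies
print(total_bytes)              # 24_000_000_000 bytes, i.e. 24 GB
```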

Hierarchical Softmax (HSM) doesn't reduce the memory requirements - it just speeds up the training.

\n\n

Realistically, you'll need a lot more GPU memory, because you'll also need to store:

\n\n\n\n

Therefore, if you want to do all computation on GPUs, you'll have no choice but to distribute this layer across multiple high-memory GPUs.

\n\n

However, you now have another problem:

\n\n

To make this concrete, let's suppose you have a 2-level HSM with 3K classes, with 3K words per class (9M words in total). You distribute the 3K classes across 8 GPUs, so that each hosts 384 classes.

\n\n

What if all target words in a batch are from the same 384 classes, i.e. they belong to the same GPU? One GPU will be doing all the work, while the other 7 wait for it.

\n\n

The problem is that even if the target words in a batch belong to different GPUs, you'll still have the same performance as in the worst-case scenario, if you want to do this computation in TensorFlow (This is because TensorFlow is a \"specify-and-run\" framework -- the computational graph is the same for the best case and the worst case)

\n\n
\n

What is the best way to do this to both be scalable to large class counts and efficient?

\n
\n\n

The above inefficiency of model parallelism (each GPU must process the whole batch) suggests that one should try to keep everything in one place.

\n\n

Let us suppose that you are either implementing everything on the host, or on 1 humongous GPU.

\n\n
    \n
  1. If you are not modeling sequences, or if you are, but there is only one output for the whole sequence, then the memory overhead from copying the parameters, to which you referred, is negligible compared to the memory requirements described above:

    \n\n

    400 == batch size << number of classes == 3K

    \n\n

    In this case, you could simply use gather or embedding_lookup (although the copying is inefficient)

  2. However, if you do model sequences of length, say, 100, with output at every time step, then the parameter copying becomes a big issue.

    \n\n

    In this case, I think you'll need to drop down to C++ / CUDA C and implement this whole layer and its gradient as a custom op.
\n", "system": "" }, { "instruction": "keras - cannot import name Conv2D", "input": "", "output": "

Try this: from keras.layers.convolutional import Conv2D

\n\n

Importing changed with the new keras. Are you sure you are not using keras >= 2?

\n\n
\n\n

NOTE:

\n\n

With tensorflow 2.0 keras is included. You can now import the layer with:

\n\n
from tensorflow.keras.layers import Conv2D\n
\n", "system": "" }, { "instruction": "Given a tensor flow model graph, how to find the input node and output node names", "input": "", "output": "

Try this:

\n\n

run python

\n\n
>>> import tensorflow as tf\n>>> gf = tf.GraphDef()\n>>> gf.ParseFromString(open('/your/path/to/graphname.pb','rb').read())\n
\n\n

and then

\n\n
>>> [n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]\n
\n\n

Then, you can get result similar to this:

\n\n
['Mul=>Placeholder', 'final_result=>Softmax']\n
\n\n

But I'm not sure it's a problem with node names, given the error messages.\nI guess you provided wrong arguments when loading the graph file, or your generated graph file is somehow wrong?

\n\n

Check this part:

\n\n
E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible \nshapes: [1,224,224,3] vs. [32,1,1,2048]\n
\n\n

UPDATE: Sorry, if you're using a (re)trained graph, then try this:

\n\n
[n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Mul')]\n
\n\n

It seems that (re)trained graph saves input/output op name as \"Mul\" and \"Softmax\", while optimized and/or quantized graph saves them as \"Placeholder\" and \"Softmax\".

\n\n

BTW, using a retrained graph in a mobile environment is not recommended according to Pete Warden's post: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ . It's better to use a quantized or memmapped graph due to performance and file-size issues. I couldn't find out how to load a memmapped graph in android though...:(\n(no problem loading an optimized / quantized graph in android)

\n", "system": "" }, { "instruction": "ValueError when executing softmax_cross_entropy_with_logits", "input": "", "output": "

Change

\n\n
tf.nn.softmax_cross_entropy_with_logits(prediction,y)\n
\n\n

to

\n\n
tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y)\n
\n", "system": "" }, { "instruction": "Replace nan values in tensorflow tensor", "input": "", "output": "

A combination of tf.where and tf.is_nan should work:

\n\n
import tensorflow as tf\nwith tf.Session():\n    has_nans = tf.constant([float('NaN'), 1.])\n    print(tf.where(tf.is_nan(has_nans), tf.zeros_like(has_nans), has_nans).eval())\n
\n\n

Prints (using TensorFlow 0.12.1):

\n\n
[ 0.  1.]\n
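The same where(is_nan, zeros, x) pattern can be sketched in plain Python (illustrative, not the TF op):

```python
import math

# Replace every NaN with a fill value, keeping other entries unchanged.
def replace_nan(xs, value=0.0):
    return [value if math.isnan(x) else x for x in xs]

print(replace_nan([float('NaN'), 1.0]))
# [0.0, 1.0]
```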
\n", "system": "" }, { "instruction": "How to run Keras on multiple cores?", "input": "", "output": "

Tensorflow automatically runs the computations on as many cores as are available on a single machine.

\n\n

If you have a distributed cluster, be sure you follow the instructions at https://www.tensorflow.org/how_tos/distributed/ to configure the cluster. (e.g. create the tf.ClusterSpec correctly, etc.)

\n\n

To help debug, you can use the log_device_placement configuration options on the session to have Tensorflow print out where the computations are actually placed. (Note: this works for both GPUs as well as distributed Tensorflow.)

\n\n
# Creates a session with log_device_placement set to True.\nsess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n
\n\n

Note that while Tensorflow's computation placement algorithm works fine for small computational graphs, you might be able to get better performance on large computational graphs by manually placing the computations in specific devices. (e.g. using with tf.device(...): blocks.)

\n", "system": "" }, { "instruction": "Tensorflow AttributeError: 'NoneType' object has no attribute 'TF_DeleteStatus'", "input": "", "output": "

You can run import gc; gc.collect() in the end of your code.

\n", "system": "" }, { "instruction": "Is Intel based graphic card compatible with tensorflow/GPU?", "input": "", "output": "

At the moment no. Only Nvidia GPUs and (intel/amd) CPU versions are available.

\n\n

They are working on an openCL compatible version of tensorflow that would be compatible and you can follow its progress here

\n", "system": "" }, { "instruction": "TensorFlow REST Frontend but not TensorFlow Serving", "input": "", "output": "

There are different ways to do this. Using pure TensorFlow is not very flexible, but it is relatively straightforward. The downside of this approach is that you have to rebuild the graph and initialize the variables in the code where you restore the model. There is a more elegant way shown in tensorflow skflow/contrib learn, however it doesn't seem to be functional at the moment and the documentation is out of date.

\n\n

I put a short example together on github here that shows how you would pass named GET or POST parameters to a flask REST-deployed tensorflow model.

\n\n

The main code is then in a function that takes a dictionary based on the POST/GET data:

\n\n
@app.route('/model', methods=['GET', 'POST'])\n@parse_postget\ndef apply_model(d):\n    tf.reset_default_graph()\n    with tf.Session() as session:\n        n = 1\n        x = tf.placeholder(tf.float32, [n], name='x')\n        y = tf.placeholder(tf.float32, [n], name='y')\n        m = tf.Variable([1.0], name='m')\n        b = tf.Variable([1.0], name='b')\n        y = tf.add(tf.mul(m, x), b) # fit y_i = m * x_i + b\n        y_act = tf.placeholder(tf.float32, [n], name='y_')\n        error = tf.sqrt((y - y_act) * (y - y_act))\n        train_step = tf.train.AdamOptimizer(0.05).minimize(error)\n\n        feed_dict = {x: np.array([float(d['x_in'])]), y_act: np.array([float(d['y_star'])])}\n        saver = tf.train.Saver()\n        saver.restore(session, 'linear.chk')\n        y_i, _, _ = session.run([y, m, b], feed_dict)\n    return jsonify(output=float(y_i))\n
\n", "system": "" }, { "instruction": "How to permutate tranposition in tensorflow?", "input": "", "output": "

I think perm is permuting the dimensions. For example perm=[0,2,1] is short for dim_0 -> dim_0, dim_1 -> dim_2, dim_2 -> dim_1. So for a 2D tensor, perm=[1,0] is just matrix transpose. Does this answer your question?
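A pure-Python sketch of the 2-D case (perm=[1,0], i.e. matrix transpose; illustrative only, not the TF op):

```python
# Output axis i takes its data from input axis perm[i]; for a 2-D
# nested list, perm=[1, 0] is just the transpose.
def transpose_2d(matrix):
    return [list(row) for row in zip(*matrix)]

m = [[1, 2, 3], [4, 5, 6]]            # shape (2, 3)
print(transpose_2d(m))                # shape (3, 2)
# [[1, 4], [2, 5], [3, 6]]
```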

\n", "system": "" }, { "instruction": "Convert Keras model to TensorFlow protobuf", "input": "", "output": "

In case you don't need to utilize a GPU in the environment you are deploying to, you could also use my library, called frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep

\n\n

frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.

\n", "system": "" }, { "instruction": "Convert Keras model to TensorFlow protobuf", "input": "", "output": "

In case you don't need to utilize a GPU in the environment you are deploying to, you could also use my library, called frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep

\n\n

frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.

\n", "system": "" }, { "instruction": "TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead", "input": "", "output": "

Try changing

\n
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)\n
\n

to

\n
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())\n
\n", "system": "" }, { "instruction": "using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function", "input": "", "output": "

I have stumbled over this as well, so I am leaving my solution here to help anyone facing the same problem.

\n

There is a catch in eager execution mode since TF upgraded to 2.x: if you are using the Keras API losses and metrics, you must instantiate them before compiling.
\nSee the example below:

\n
model.compile(optimizer="...", \n              loss=keras.losses.AnyLoss, \n              metrics=[keras.metrics.AnyMetric])\n
\n

The code above will raise OperatorNotAllowedInGraphError. To fix it, instantiate the loss and metrics first:

\n
my_loss = keras.losses.AnyLoss(*args, **kwargs)\nmy_metric = keras.metrics.AnyMetric(*args, **kwargs)\n\nmodel.compile(optimizer,\n              loss=my_loss,\n              metrics=[my_metric])\n
\n

That should do the trick.

\n", "system": "" }, { "instruction": "Tensorflow CUDA - CUPTI error: CUPTI could not be loaded or symbol could not be found", "input": "", "output": "

Add this to your PATH on Windows:

\n\n
C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\extras\\CUPTI\\libx64\n
\n", "system": "" }, { "instruction": "module 'tensorflow' has no attribute 'GPUOptions'", "input": "", "output": "

Tensorflow 2.x has undergone major changes from 1.x.

\n

As per official communication,

\n
\n

tf.contrib will be removed from the core TensorFlow repository and build process. TensorFlow\u2019s contrib module has grown beyond what can be maintained and supported in a single repository. Larger projects are better maintained separately, while smaller extensions will graduate to the core TensorFlow code. A special interest group (SIG) has been formed to maintain and further develop some of the more important contrib projects going forward. Please engage with this RFC if you are interested in contributing.

\n
\n

If you want to use the tensorflow 1.x functions/methods, there is a compatibility module kept in tensorflow 2.x.

\n
tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.333)\n
\n", "system": "" }, { "instruction": "tensorflow: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`", "input": "", "output": "

This deprecation warning is due to the Dropout layer in tf.keras.layers.Dropout.
To avoid this warning, you need to specify rate= explicitly in Dropout, as: Dropout(rate=0.2).\n

The argument was previously called keep_prob; it is now deprecated in favor of rate, where rate = 1 - keep_prob. \n
For more, you can check out this tensorflow documentation.
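As a quick sanity check, converting legacy keep_prob settings to the new rate argument is just plain arithmetic (the values below are made up):

```python
# hypothetical legacy keep_prob values and their new-style rate equivalents
for keep_prob in (0.5, 0.8, 0.9):
    rate = round(1 - keep_prob, 2)  # round away float noise for display
    print(f"keep_prob={keep_prob} -> Dropout(rate={rate})")
# keep_prob=0.5 -> Dropout(rate=0.5)
# keep_prob=0.8 -> Dropout(rate=0.2)
# keep_prob=0.9 -> Dropout(rate=0.1)
```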

\n", "system": "" }, { "instruction": "Neural network for square (x^2) approximation", "input": "", "output": "

You are making two very basic mistakes:

\n\n\n\n

Neural networks certainly need to be of some complexity if they are to solve problems even as \"simple\" as x*x; and where they really shine is when fed with large training datasets.

\n\n

The methodology when trying to solve such function approximations is not to just list the (few possible) inputs and then feed them to the model along with the desired outputs; remember, NNs learn through examples, not through symbolic reasoning, and the more examples the better. What we usually do in similar cases is generate a large number of examples, which we subsequently feed to the model for training.

\n\n

Having said that, here is a rather simple demonstration of a 3-layer neural network in Keras for approximating the function x*x, using as input 10,000 random numbers generated in [-50, 50]:

\n\n
import numpy as np\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.optimizers import Adam\nfrom keras import regularizers\nimport matplotlib.pyplot as plt\n\nmodel = Sequential()\nmodel.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001), input_shape = (1,)))\nmodel.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001)))\nmodel.add(Dense(1))\n\nmodel.compile(optimizer=Adam(),loss='mse')\n\n# generate 10,000 random numbers in [-50, 50], along with their squares\nx = np.random.random((10000,1))*100-50\ny = x**2\n\n# fit the model, keeping 2,000 samples as validation set\nhist = model.fit(x,y,validation_split=0.2,\n             epochs= 15000,\n             batch_size=256)\n\n# check some predictions:\nprint(model.predict([4, -4, 11, 20, 8, -5]))\n# result:\n[[ 16.633354]\n [ 15.031291]\n [121.26833 ]\n [397.78638 ]\n [ 65.70035 ]\n [ 27.040245]]\n
\n\n

Well, not that bad! Remember that NNs are function approximators: we should expect them neither to exactly reproduce the functional relationship nor to \"know\" that the results for 4 and -4 should be identical.

\n\n

Let's generate some new random data in [-50,50] (remember, for all practical purposes, these are unseen data for the model) and plot them, along with the original ones, to get a more general picture:

\n\n
plt.figure(figsize=(14,5))\nplt.subplot(1,2,1)\np = np.random.random((1000,1))*100-50 # new random data in [-50, 50]\nplt.plot(p,model.predict(p), '.')\nplt.xlabel('x')\nplt.ylabel('prediction')\nplt.title('Predictions on NEW data in [-50,50]')\n\nplt.subplot(1,2,2)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.plot(x,y,'.')\nplt.title('Original data')\n
\n\n

Result:

\n\n

\"enter

\n\n

Well, it arguably does look like a good approximation indeed...

\n\n

You could also take a look at this thread for a sine approximation.

\n\n

The last thing to keep in mind is that, although we did get a decent approximation even with our relatively simple model, what we should not expect is extrapolation, i.e. good performance outside [-50, 50]; for details, see my answer in Is deep learning bad at fitting simple non linear functions outside training scope?

\n", "system": "" }, { "instruction": "Understanding COCO evaluation "maximum detections"", "input": "", "output": "

You can change the maxDets parameter and define a new summarize() instance method.

\n\n

Let's create a COCOeval object:

\n\n
cocoEval = COCOeval(cocoGt,cocoDt,annType)\ncocoEval.params.maxDets = [200]\ncocoEval.params.imgIds  = imgIdsDt\ncocoEval.evaluate()\ncocoEval.accumulate()\ncocoEval.summarize_2() # instead of calling cocoEval.summarize()\n
\n\n

Now, define summarize_2() method in cocoeval.py module in the following way:

\n\n
def summarize_2(self):\n    # Copy everything from `summarize` method here except\n    # the function `_summarizeDets()`.\n    def _summarizeDets():\n        stats = np.zeros((12,))\n        stats[0] = _summarize(1, maxDets=self.params.maxDets[0])\n        stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[0])\n        stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[0])\n        stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[0])\n        stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[0])\n        stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[0])\n        stats[6] = _summarize(0, maxDets=self.params.maxDets[0])\n        stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[0])\n        stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[0])\n        stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[0])\n        return stats\n    # Copy other things which are left from `summarize()` here.\n
\n\n

If you run the above method over your dataset, you will get an output similar to this:

\n\n
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=200 ] = 0.507\n Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=200 ] = 0.699\n Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=200 ] = 0.575\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.586\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.519\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.501\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=200 ] = 0.598\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.640\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.566\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.564\n
\n", "system": "" }, { "instruction": "Tensorflow: ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory", "input": "", "output": "

You are setting LD_LIBRARY_PATH the wrong way; I would recommend doing it this way (which is the standard approach):

\n\n
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH\nexport LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH\n
\n", "system": "" }, { "instruction": "Tensorflow install fails with "compiletime version 3.5 of module does not match runtime version 3.6"", "input": "", "output": "
\n

RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6

\n
\n\n

This is a known issue, which has been prioritized and is likely to be fixed soon. Right now the workaround is to use Python 3.5.

\n\n

UPDATE:

\n\n

The issue has been fixed in the nightly tensorflow builds: \"tf-nightly and tf-nightly-gpu now has a python3.6 binary built from scratch for Linux.\"

\n\n

I.e., the following command should work with python 3.6:

\n\n
# tf-nightly or tf-nightly-gpu\npip3 install tf-nightly\n
\n\n
\n

Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX

\n
\n\n

This warning comes from the fact that the default tensorflow distributions are compiled without CPU extensions support (more on this here). If you want to get a CPU optimized tensorflow package, your only option is to build it yourself. It's a bit tedious, but absolutely doable. The build will produce the wheel file, which you can install with just

\n\n
pip3 install /path/to/the/tensorflow.whl\n
\n\n

But if you just want to suppress the warning, this will do:

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\n
\n", "system": "" }, { "instruction": "How to get a tensorflow op by name?", "input": "", "output": "

You can use the tf.Graph.get_operation_by_name() method to get a tf.Operation by name. For example, to get an operation called \"enqueue\" from the default graph:

\n\n
op = tf.get_default_graph().get_operation_by_name(\"enqueue\")\n
\n", "system": "" }, { "instruction": "What does `tf.strided_slice()` do?", "input": "", "output": "

I experimented a bit with this method, which gave me some insights that I think might be of use. Let's say we have a tensor:

\n\n\n\n
a = np.array([[[1, 1.2, 1.3], [2, 2.2, 2.3], [7, 7.2, 7.3]],\n              [[3, 3.2, 3.3], [4, 4.2, 4.3], [8, 8.2, 8.3]],\n              [[5, 5.2, 5.3], [6, 6.2, 6.3], [9, 9.2, 9.3]]]) \n# a.shape = (3, 3, 3)\n
\n\n

strided_slice() requires 4 arguments, input_, begin, end and strides, and we pass our a as the input_ argument.\n Unlike the tf.slice() method, where begin is zero-based but the size argument is shape-based, here both begin and end are zero-based, as the docs state.

\n\n

The functionality of the method is quite simple:
\nIt works like a loop, where begin is the location of the element in the tensor at which iteration starts, and end is where it stops (exclusive).

\n\n
tf.strided_slice(a, [0, 0, 0], [3, 3, 3], [1, 1, 1])\n\n# output =  the tensor itself\n\ntf.strided_slice(a, [0, 0, 0], [3, 3, 3], [2, 2, 2])\n\n# output = [[[ 1.   1.3]\n#            [ 7.   7.3]]\n#           [[ 5.   5.3]\n#            [ 9.   9.3]]]\n
\n\n

strides are the steps by which the loop advances; here [2,2,2] makes the method produce values starting at indices (0,0,0), (0,0,2), (0,2,0), (0,2,2), (2,0,0), (2,0,2) ... of the a tensor.

\n\n
tf.strided_slice(a, [1, 1, 0], [2, -1, 3], [1, 1, 1]) \n
\n\n

will produce the same output as tf.strided_slice(a, [1, 1, 0], [2, 2, 3], [1, 1, 1]), because the tensor a has shape (3, 3, 3), so an end index of -1 is equivalent to 2 here.

\n", "system": "" }, { "instruction": "TensorBoard Embedding Example?", "input": "", "output": "

I've used FastText's pre-trained word vectors with TensorBoard.

\n\n
import os\nimport tensorflow as tf\nimport numpy as np\nimport fasttext\nfrom tensorflow.contrib.tensorboard.plugins import projector\n\n# load model\nword2vec = fasttext.load_model('wiki.en.bin')\n\n# create a list of vectors\nembedding = np.empty((len(word2vec.words), word2vec.dim), dtype=np.float32)\nfor i, word in enumerate(word2vec.words):\n    embedding[i] = word2vec[word]\n\n# setup a TensorFlow session\ntf.reset_default_graph()\nsess = tf.InteractiveSession()\nX = tf.Variable([0.0], name='embedding')\nplace = tf.placeholder(tf.float32, shape=embedding.shape)\nset_x = tf.assign(X, place, validate_shape=False)\nsess.run(tf.global_variables_initializer())\nsess.run(set_x, feed_dict={place: embedding})\n\n# write labels\nwith open('log/metadata.tsv', 'w') as f:\n    for word in word2vec.words:\n        f.write(word + '\\n')\n\n# create a TensorFlow summary writer\nsummary_writer = tf.summary.FileWriter('log', sess.graph)\nconfig = projector.ProjectorConfig()\nembedding_conf = config.embeddings.add()\nembedding_conf.tensor_name = 'embedding:0'\nembedding_conf.metadata_path = os.path.join('log', 'metadata.tsv')\nprojector.visualize_embeddings(summary_writer, config)\n\n# save the model\nsaver = tf.train.Saver()\nsaver.save(sess, os.path.join('log', \"model.ckpt\"))\n
\n\n

Then run this command in your terminal:

\n\n
tensorboard --logdir=log\n
\n", "system": "" }, { "instruction": "Non-smooth and non-differentiable customized loss function tensorflow", "input": "", "output": "

The problem is not with the loss being piece-wise or non-smooth. The problem is that we need a loss function that can send back a non-zero gradient to the network parameters (dloss/dparameter) when there is an error between the output and the expected output. This applies to almost any function used inside the model (e.g. loss functions, activation functions, attention functions).

\n\n

For example, perceptrons use a unit step H(x) as an activation function (H(x) = 1 if x > 0, else 0). Since the derivative of H(x) is always zero (undefined at x=0), no gradient coming from the loss will pass through it back to the weights (chain rule), so no weights before that function in the network can be updated using gradient descent. Based on that, gradient descent can't be used for perceptrons, but it can be used for conventional neurons that use the sigmoid activation function (since the gradient is not zero for all x).

\n\n

For ReLU, the derivative is 1 for x > 0 and 0 otherwise. While the derivative is undefined at x=0, we can still back-propagate the loss gradient through it when x > 0. That's why it can be used.

\n\n

That is why we need a loss function that has a non-zero gradient. Functions like accuracy and F1 have zero gradients everywhere (or undefined at some values of x), so they can't be used, while functions like cross-entropy, L2 and L1 have non-zero gradients, so they can be used. (note that L1 \"absolute difference\" is piece-wise and not smooth at x=0 but still can be used)

\n\n

In case you must use a function that doesn't meet the above criteria, try reinforcement learning methods instead (e.g. Policy gradient).

\n", "system": "" }, { "instruction": "Where is gen_math_ops script in tensorflow?", "input": "", "output": "

It's automatically generated by tf_gen_op_wrapper_* rules here.

\n\n

You can also use ?? in your IPython notebook to find the location:

\n\n

\"example

\n", "system": "" }, { "instruction": "No variable to save error in Tensorflow", "input": "", "output": "

The error here is quite subtle. In In[8] you create a tf.Graph called graph and set it as default for the with graph.as_default(): block. This means that all of the variables are created in graph, and if you print graph.all_variables() you should see a list of your variables.

\n\n

However, you exit the with block before creating (i) the tf.Session, and (ii) the tf.train.Saver. This means that the session and saver are created in a different graph (the global default tf.Graph that is used when you don't explicitly create one and set it as default), which doesn't contain any variables—or any nodes at all.

\n\n

There are at least two solutions:

\n\n
    \n
  1. As Yaroslav suggests, you can write your program without using the with graph.as_default(): block, which avoids the confusion with multiple graphs. However, this can lead to name collisions between different cells in your IPython notebook, which is awkward when using the tf.train.Saver, since it uses the name property of a tf.Variable as the key in the checkpoint file.

  2. \n
  3. You can create the saver inside the with graph.as_default(): block, and create the tf.Session with an explicit graph, as follows:

    \n\n
with graph.as_default():\n    # [Variable and model creation goes here.]\n\n    saver = tf.train.Saver()  # Gets all variables in `graph`.\n\nwith tf.Session(graph=graph) as sess:\n    saver.restore(sess, checkpoint_path)  # `checkpoint_path` (hypothetical) is the path to your saved checkpoint\n    # Do some work with the model....\n
    \n\n

    Alternatively, you can create the tf.Session inside the with graph.as_default(): block, in which case it will use graph for all of its operations.

  4. \n
\n", "system": "" }, { "instruction": "How do I check Bazel version?", "input": "", "output": "

See Bazel users manual

\n\n

From the command line:

\n\n
$ bazel version \nBuild label: 0.1.1\n
\n", "system": "" }, { "instruction": "TensorFlow for binary classification", "input": "", "output": "

The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, etc. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_.

\n

If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op would compute a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples.

\n
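You can verify this degenerate behaviour with a tiny NumPy sketch (illustrative only): with a single logit, softmax always outputs 1.0, so the cross-entropy -log(p) is always zero:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a vector of logits
    e = np.exp(z - np.max(z))
    return e / e.sum()

for logit in (-5.0, 0.0, 3.0):
    p = softmax(np.array([logit]))  # "distribution" over a single class
    ce = -np.log(p[0])              # cross-entropy against the only class
    print(p[0], ce)                 # p is always 1.0, ce is always zero
```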

There are (at least) two approaches you could try for binary classification:

\n
    \n
  1. The simplest would be to set NLABELS = 2 for the two possible classes, and encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that.

    \n
  2. \n
  3. You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer.

    \n
  4. \n
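For the first approach, the encoding itself is a one-liner in NumPy (the labels below are made up):

```python
import numpy as np

NLABELS = 2
labels = np.array([0, 1, 1, 0])    # hypothetical binary labels
one_hot = np.eye(NLABELS)[labels]  # [1 0] for label 0, [0 1] for label 1
print(one_hot)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```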
\n", "system": "" }, { "instruction": "Display image of graph in TensorFlow?", "input": "", "output": "

This is exactly what tensorboard was created for. You need to slightly modify your code to store the information about your graph.

\n\n
import tensorflow as tf\nC_1 = tf.constant(5.0)\nC_2 = tf.constant(1.0)\nC_3 = tf.constant(2.0)\n\ngolden_ratio = (tf.sqrt(C_1) + C_2)/C_3\n\nwith tf.Session() as sess:\n    writer = tf.summary.FileWriter('logs', sess.graph)\n    print(sess.run(golden_ratio))\n    writer.close()\n
\n\n

This will create a logs folder with event files in your working directory. After this you should run tensorboard from your command line tensorboard --logdir=\"logs\" and navigate to the url it gives you (http://127.0.0.1:6006). In your browser go to GRAPHS tab and enjoy your graph.

\n\n

You will use TB a lot if you are going to do anything with TF. So it makes sense to learn about it more from official tutorials and from this video.

\n", "system": "" }, { "instruction": "The meaning of 'Start cannot spawn child process: No such file or directory' upon running Tensorflow", "input": "", "output": "

Execute

\n
$ export PATH="${PATH}:/usr/local/nvidia/bin:/usr/local/cuda/bin"\n
\n

before starting your IPython notebook kernel / interpreter.

\n", "system": "" }, { "instruction": "Colab: (0) UNIMPLEMENTED: DNN library is not found", "input": "", "output": "

This worked for me (Colab):

\n
# Check libcudnn8 version\n!apt-cache policy libcudnn8\n\n# Install latest version\n!apt install --allow-change-held-packages libcudnn8=8.4.1.50-1+cuda11.6\n\n# Export env variables\n!export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}\n!export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH\n!export LD_LIBRARY_PATH=/usr/local/cuda-11.4/include:$LD_LIBRARY_PATH\n!export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64\n\n# Install tensorflow\n!pip install tflite-model-maker==0.4.0\n!pip uninstall -y tensorflow && pip install -q tensorflow==2.9.1\n!pip install pycocotools==2.0.4\n!pip install opencv-python-headless==4.6.0.66\n
\n", "system": "" }, { "instruction": "Unable to (manually) load cifar10 dataset", "input": "", "output": "

I was having a similar CERTIFICATE_VERIFY_FAILED error downloading CIFAR-10. Putting this in my python file worked:

\n
import ssl\nssl._create_default_https_context = ssl._create_unverified_context\n
\n

Reference: https://programmerah.com/python-error-certificate-verify-failed-certificate-has-expired-40374/

\n", "system": "" }, { "instruction": "What is the proper use of Tensorflow dataset prefetch and cache options?", "input": "", "output": "

I found this great explanation for Andrew Ng from Stanford: https://cs230.stanford.edu/blog/datapipeline/#best-practices

\n

"When the GPU is working on forward / backward propagation on the current batch, we want the CPU to process the next batch of data so that it is immediately ready. As the most expensive part of the computer, we want the GPU to be fully used all the time during training. We call this consumer/producer overlap, where the consumer is the GPU and the producer is the CPU.

\n

With tf.data, you can do this with a simple call to dataset.prefetch(1) at the end of the pipeline (after batching). This will always prefetch one batch of data and make sure that there is always one ready.

\n

In some cases, it can be useful to prefetch more than one batch. For instance, if the duration of the preprocessing varies a lot, prefetching 10 batches would average out the processing time over 10 batches, instead of sometimes waiting for longer batches.

\n

To give a concrete example, suppose that 10% of the batches take 10s to compute, and 90% take 1s. If the GPU takes 2s to train on one batch, by prefetching multiple batches you make sure that we never wait for these rare longer batches."

\n
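A back-of-the-envelope check of the numbers in that example (plain Python; the times come from the quote above):

```python
# expected CPU preprocessing time per batch: 10% take 10 s, 90% take 1 s
cpu_avg = 0.1 * 10 + 0.9 * 1  # 1.9 s on average
gpu_time = 2.0                # GPU training time per batch

# Because the *average* CPU time is below the GPU time, a large enough
# prefetch buffer lets preprocessing hide entirely behind training.
print(cpu_avg, cpu_avg < gpu_time)  # 1.9 True
```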

I'm not quite sure how to determine the processing time of each batch, but that's the next step. If your batches take roughly the same amount of time to process, then I believe prefetch(buffer_size=1) should suffice, as your GPU wouldn't be waiting for the CPU to finish processing a computationally expensive batch.

\n", "system": "" }, { "instruction": "How to add attention layer to a Bi-LSTM", "input": "", "output": "

Here is a possible custom solution, with a custom layer that computes attention over the positional/temporal dimension:

\n
from tensorflow.keras.layers import Layer\nfrom tensorflow.keras import backend as K\n\nclass Attention(Layer):\n    \n    def __init__(self, return_sequences=True):\n        self.return_sequences = return_sequences\n        super(Attention,self).__init__()\n        \n    def build(self, input_shape):\n        \n        self.W=self.add_weight(name="att_weight", shape=(input_shape[-1],1),\n                               initializer="normal")\n        self.b=self.add_weight(name="att_bias", shape=(input_shape[1],1),\n                               initializer="zeros")\n        \n        super(Attention,self).build(input_shape)\n        \n    def call(self, x):\n        \n        e = K.tanh(K.dot(x,self.W)+self.b)\n        a = K.softmax(e, axis=1)\n        output = x*a\n        \n        if self.return_sequences:\n            return output\n        \n        return K.sum(output, axis=1)\n
\n

It's built to receive 3D tensors and to output 3D tensors (return_sequences=True) or 2D tensors (return_sequences=False). Below is a dummy example:

\n
# dummy data creation\n\nmax_len = 100\nmax_words = 333\nemb_dim = 126\n\nn_sample = 5\nX = np.random.randint(0,max_words, (n_sample,max_len))\nY = np.random.randint(0,2, n_sample)\n
\n

with return_sequences=True

\n
model = Sequential()\nmodel.add(Embedding(max_words, emb_dim, input_length=max_len))\nmodel.add(Bidirectional(LSTM(32, return_sequences=True)))\nmodel.add(Attention(return_sequences=True)) # receive 3D and output 3D\nmodel.add(LSTM(32))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.summary()\n\nmodel.compile('adam', 'binary_crossentropy')\nmodel.fit(X,Y, epochs=3)\n
\n

with return_sequences=False

\n
model = Sequential()\nmodel.add(Embedding(max_words, emb_dim, input_length=max_len))\nmodel.add(Bidirectional(LSTM(32, return_sequences=True)))\nmodel.add(Attention(return_sequences=False)) # receive 3D and output 2D\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.summary()\n\nmodel.compile('adam', 'binary_crossentropy')\nmodel.fit(X,Y, epochs=3)\n
\n

You can integrate it into your networks easily.

\n

Here is the running notebook.

\n", "system": "" }, { "instruction": "TypeError: '>' not supported between instances of 'NoneType' and 'float'", "input": "", "output": "

Tensorflow 2.0

\n\n
DESIRED_ACCURACY = 0.979\n\nclass myCallback(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epochs, logs={}):\n        if logs.get('acc') is not None and logs.get('acc') >= DESIRED_ACCURACY:\n            print('\\nReached the desired accuracy, so cancelling training!')\n            self.model.stop_training = True\n\ncallbacks = myCallback()\n
\n", "system": "" }, { "instruction": "What are symbolic tensors in TensorFlow and Keras?", "input": "", "output": "

According to blog.tensorflow.org, a symbolic tensor differs from other tensors in that it does not specifically hold a value.

\n\n

Let's consider a simple example.

\n\n
>>> a = tf.Variable(5, name=\"a\")\n>>> b = tf.Variable(7, name=\"b\")\n>>> c = (b**2 - a**3)**5\n>>> print(c)\n
\n\n

The output is as follows:

\n\n
tf.Tensor(1759441920, shape=(), dtype=int32)\n
\n\n

For the above, the values are explicitly defined as tf.Variables, and the output is a concrete Tensor holding a value; a (non-symbolic) tensor must contain a value to be considered as such.

\n\n

Symbolic tensors are different in that no explicit values are required to define the tensor, and this has implications in terms of building neural networks with TensorFlow 2.0, which now uses Keras as the default API.

\n\n

Here is an example of a Sequential neural network that is used to build a classification model for predicting hotel cancellation incidences (full Jupyter Notebook here if interested):

\n\n
from tensorflow.keras import models\nfrom tensorflow.keras import layers\n\nmodel = models.Sequential()\nmodel.add(layers.Dense(8, activation='relu', input_shape=(4,)))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n
\n\n

This is a symbolically defined model, as no values are explicitly being defined in the network. Rather, a framework is created for the input variables to be read by the network, and then generate predictions.

\n\n

In this regard, Keras has become quite popular given that it allows for building of graphs using symbolic tensors, while at the same time maintaining an imperative layout.

\n", "system": "" }, { "instruction": "How to fix "ResourceExhaustedError: OOM when allocating tensor"", "input": "", "output": "

OOM stands for "out of memory". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do:

\n\n

There is more useful information about this error:

\n
OOM when allocating tensor with shape[800000,32,30,62]\n
\n

This is a weird shape. If you're working with images, you should normally have 1 or 3 channels. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.
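Here is a minimal sketch of the batching idea (plain Python/NumPy, with a small dummy array standing in for the real dataset):

```python
import numpy as np

# small dummy stand-in for the dataset; the real one would be far larger
data = np.zeros((1000, 32, 32, 3), dtype=np.float32)

def batches(x, batch_size):
    # yield successive mini-batches instead of one giant tensor
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size]

sizes = [b.shape[0] for b in batches(data, 128)]
print(sizes)  # [128, 128, 128, 128, 128, 128, 128, 104]
```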

\n", "system": "" }, { "instruction": "ERROR: tensorboard 2.0.2 has requirement setuptools>=41.0.0, but you'll have setuptools 40.6.2 which is incompatible", "input": "", "output": "

I just did a pip install setuptools --upgrade

\n\n

then

\n\n

pip install tensorflow

\n", "system": "" }, { "instruction": "tf.data vs keras.utils.sequence performance", "input": "", "output": "

Both approaches overlap input data preprocessing with model training. keras.utils.sequence does this by running multiple Python processes, while tf.data does this by running multiple C++ threads.

\n\n

If your preprocessing is being done by a non-TensorFlow Python library such as PIL, keras.utils.sequence may work better for you since multiple processes are needed to avoid contention on Python's global interpreter lock.

\n\n

If you can express your preprocessing using TensorFlow operations, I would expect tf.data to give better performance.

\n\n

Some other things to consider:

\n\n\n", "system": "" }, { "instruction": "How exactly does LSTMCell from TensorFlow operates?", "input": "", "output": "

Tensorflow uses glorot_uniform() function to initialize the lstm kernel, which samples weights from a random uniform distribution. We need to fix a value for the kernel to get reproducible results:

\n\n
import tensorflow as tf\nimport numpy as np\n\nnp.random.seed(0)\ntimesteps = 7\nnum_input = 4\nx_val = np.random.normal(size = (1, timesteps, num_input))\n\nnum_units = 3\n\ndef glorot_uniform(shape):\n    limit = np.sqrt(6.0 / (shape[0] + shape[1]))\n    return np.random.uniform(low=-limit, high=limit, size=shape)\n\nkernel_init = glorot_uniform((num_input + num_units, 4 * num_units))\n
\n\n

My implementation of the LSTMCell (well, actually it's just slightly rewritten tensorflow's code):

\n\n
def sigmoid(x):\n    return 1. / (1 + np.exp(-x))\n\nclass LSTMCell():\n    \"\"\"Long short-term memory unit (LSTM) recurrent network cell.\n    \"\"\"\n    def __init__(self, num_units, initializer=glorot_uniform,\n               forget_bias=1.0, activation=np.tanh):\n        \"\"\"Initialize the parameters for an LSTM cell.\n        Args:\n          num_units: int, The number of units in the LSTM cell.\n          initializer: The initializer to use for the kernel matrix. Default: glorot_uniform\n          forget_bias: Biases of the forget gate are initialized by default to 1\n            in order to reduce the scale of forgetting at the beginning of\n            the training. \n          activation: Activation function of the inner states.  Default: np.tanh.\n        \"\"\"\n        # Inputs must be 2-dimensional.\n        self._num_units = num_units\n        self._forget_bias = forget_bias\n        self._activation = activation\n        self._initializer = initializer\n\n    def build(self, inputs_shape):\n        input_depth = inputs_shape[-1]\n        h_depth = self._num_units\n        self._kernel = self._initializer(shape=(input_depth + h_depth, 4 * self._num_units))\n        self._bias = np.zeros(shape=(4 * self._num_units))\n\n    def call(self, inputs, state):\n        \"\"\"Run one step of LSTM.\n        Args:\n          inputs: input numpy array, must be 2-D, `[batch, input_size]`.\n          state:  a tuple of numpy arrays, both `2-D`, with column sizes `c_state` and\n            `m_state`.\n        Returns:\n          A tuple containing:\n          - A `2-D, [batch, output_dim]`, numpy array representing the output of the\n            LSTM after reading `inputs` when previous state was `state`.\n            Here output_dim is equal to num_units.\n          - Numpy array(s) representing the new state of LSTM after reading `inputs` when\n            the previous state was `state`.  
Same type and shape(s) as `state`.\n        \"\"\"\n        num_proj = self._num_units\n        (c_prev, m_prev) = state\n\n        input_size = inputs.shape[-1]\n\n        # i = input_gate, j = new_input, f = forget_gate, o = output_gate\n        lstm_matrix = np.hstack([inputs, m_prev]).dot(self._kernel)\n        lstm_matrix += self._bias\n\n        i, j, f, o = np.split(lstm_matrix, indices_or_sections=4, axis=0)\n        # Diagonal connections\n        c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *\n               self._activation(j))\n\n        m = sigmoid(o) * self._activation(c)\n\n        new_state = (c, m)\n        return m, new_state\n\nX = x_val.reshape(x_val.shape[1:])\n\ncell = LSTMCell(num_units, initializer=lambda shape: kernel_init)\ncell.build(X.shape)\n\nstate = (np.zeros(num_units), np.zeros(num_units))\nfor i in range(timesteps):\n    x = X[i,:]\n    output, state = cell.call(x, state)\n    print(output)\n
\n\n

Produces output:

\n\n
[-0.21386017 -0.08401277 -0.25431477]\n[-0.22243588 -0.25817422 -0.1612211 ]\n[-0.2282134  -0.14207162 -0.35017249]\n[-0.23286737 -0.17129192 -0.2706512 ]\n[-0.11768674 -0.20717363 -0.13339118]\n[-0.0599215  -0.17756104 -0.2028935 ]\n[ 0.11437953 -0.19484555  0.05371994]\n
\n\n

While your Tensorflow code, if you replace the second line with

\n\n
lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units, initializer = tf.constant_initializer(kernel_init))\n
\n\n

returns:

\n\n
[[-0.2138602  -0.08401276 -0.25431478]]\n[[-0.22243595 -0.25817424 -0.16122109]]\n[[-0.22821338 -0.1420716  -0.35017252]]\n[[-0.23286738 -0.1712919  -0.27065122]]\n[[-0.1176867  -0.2071736  -0.13339119]]\n[[-0.05992149 -0.177561   -0.2028935 ]]\n[[ 0.11437953 -0.19484554  0.05371996]]\n
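The tiny discrepancies between the two outputs are just float32 round-off; for the first timestep, for example:

```python
import numpy as np

# First-timestep outputs copied from the two runs above
numpy_out = np.array([-0.21386017, -0.08401277, -0.25431477])
tf_out = np.array([[-0.2138602, -0.08401276, -0.25431478]])

# Shapes differ (TF keeps the batch dimension) but the values agree
# to float32 precision.
print(np.allclose(numpy_out, tf_out.ravel(), atol=1e-6))  # True
```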
\n", "system": "" }, { "instruction": "How to install Keras with gpu support?", "input": "", "output": "

Adding to the answer below, which is correct in recommending the Anaconda package manager but out of date in that there is now a keras-gpu package on Anaconda Cloud.

\n

So once you have Anaconda installed, you simply need to create a new environment where you want to install keras-gpu and execute the command:

\n

conda install -c anaconda keras-gpu

\n

This will install Keras along with both the tensorflow and tensorflow-gpu libraries as the backend. (There is also no need to separately install the CUDA runtime and cuDNN libraries, as they are included in the package - tested on Windows 10 and working.)

\n", "system": "" }, { "instruction": "WARNING from Tensorflow when creating VGG16", "input": "", "output": "

It looks like there's an open git issue to clean this up in the keras code:

\n\n

https://github.com/tensorflow/minigo/issues/740

\n\n

You should be safe to ignore the warning; I don't believe you can change it without modifying the TF repo. You can disable warnings as mentioned here:

\n\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n", "system": "" }, { "instruction": "CuDNNLSTM: Failed to call ThenRnnForward", "input": "", "output": "

You are probably running out of memory on the GPU. Your network is very large, with 11 million trainable parameters. Do you really need a 512*2 output from your recurrent layer?

\n\n

Furthermore, your embedding_dim is quite large while your vocabulary is quite small, with only 5k words. Your network is probably too complex for the problem. I would suggest trying an embedding size of 32 and an LSTM size of 32 as a start. If your accuracy is still bad, you can increase the complexity.

\n\n
EMBEDDING_DIM = 32\nBidirectional(LSTM(32, return_sequences=False))(embedding)\n
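As a rough sanity check you can compute the parameter counts by hand (the vocabulary of 5,000, embedding size of 512 and bidirectional LSTM of 512 units below are assumptions for illustration, not necessarily the asker's exact model):

```python
# Back-of-the-envelope parameter counts for an Embedding + BiLSTM stack.
# All sizes here are illustrative assumptions.

def lstm_params(input_dim, units):
    # 4 gates, each with an input kernel, a recurrent kernel and a bias
    return 4 * units * (input_dim + units + 1)

def embedding_params(vocab_size, embedding_dim):
    return vocab_size * embedding_dim

# Large setup, similar in spirit to the question
big = embedding_params(5000, 512) + 2 * lstm_params(512, 512)  # x2 for bidirectional
# The smaller setup suggested above
small = embedding_params(5000, 32) + 2 * lstm_params(32, 32)

print(big, small)  # 6758400 176640
```

Even with these rough numbers, shrinking the embedding and LSTM sizes cuts the parameter count by well over an order of magnitude, which also shrinks the activation memory the GPU has to hold.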
\n", "system": "" }, { "instruction": "Resolving differences between Keras and scikit-learn for simple fully-connected neural network", "input": "", "output": "

Importing necessary libraries:

\n
import numpy as np\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.utils import to_categorical\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import accuracy_score\n\n# Load the Iris dataset\ndata = load_iris()\nX = data.data\ny = data.target\n\n# Train set and test set partitioning\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# Standardize the data\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n# Keras model\nmodel_keras = Sequential()\nmodel_keras.add(Dense(10, input_dim=4, activation='relu'))\nmodel_keras.add(Dense(10, activation='relu'))\nmodel_keras.add(Dense(3, activation='softmax'))\nmodel_keras.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nmodel_keras.fit(X_train_scaled, to_categorical(y_train), epochs=50, batch_size=5, verbose=0)\ny_pred_keras = model_keras.predict(X_test_scaled)\ny_pred_keras_classes = np.argmax(y_pred_keras, axis=1)\naccuracy_keras = accuracy_score(y_test, y_pred_keras_classes)\n\n# scikit-learn model\nmodel_sklearn = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=500, random_state=42)\nmodel_sklearn.fit(X_train_scaled, y_train)\ny_pred_sklearn = model_sklearn.predict(X_test_scaled)\naccuracy_sklearn = accuracy_score(y_test, y_pred_sklearn)\n\nprint('Keras model achieved an accuracy of', accuracy_keras)\nprint('scikit-learn model achieved an accuracy of', accuracy_sklearn)\n
\n", "system": "" }, { "instruction": "Does TensorFlow 1.9 support Python 3.7", "input": "", "output": "

I was able to install Tensorflow 1.12.0 with Python 3.7 on MacOS, with the following command.

\n\n
sudo python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl\n
\n", "system": "" }, { "instruction": "Keras model.fit() with tf.dataset API + validation_data", "input": "", "output": "

I solved the problem by using fit_generator. I found the solution here. I applied @Dat-Nguyen's solution.

\n\n

You simply need to create two iterators, one for training and one for validation, and then create your own generator where you extract batches from the dataset and provide the data in the form of (batch_data, batch_labels). Finally, in model.fit_generator you pass the train_generator and validation_generator.

\n", "system": "" }, { "instruction": "Warning: Please use alternatives such as official/mnist/dataset.py from tensorflow/models", "input": "", "output": "

tensorflow.examples.tutorials is now deprecated and it is recommended to use tensorflow.keras.datasets as follows:

\n\n
import tensorflow as tf\nmnist = tf.keras.datasets.mnist\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n
\n\n

https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist/load_data

\n", "system": "" }, { "instruction": "How do I print inside the loss function during training in Keras?", "input": "", "output": "

What you can do is avoid Python's print function and instead use, for example, TensorFlow's tf.Print op, which is part of the computational graph. The documentation says the op does nothing except print a message you can specify each time it is evaluated.

\n\n

You just need to be careful to place it correctly in the graph, something like:

\n\n
def loss(y_true, y_pred):\n    d = y_true - y_pred\n    d = tf.Print(d, [d], \"Inside loss function\")\n    return tf.reduce_mean(tf.square(d))\n
\n\n

A better option to look inside what is going on internally is to use the tensorflow debugger.

\n", "system": "" }, { "instruction": "Keras Binary Classification - Sigmoid activation function", "input": "", "output": "

You can assign the threshold explicitly in compile() by using

\n
tf.keras.metrics.BinaryAccuracy(\n    name="binary_accuracy", dtype=None, threshold=0.5\n)\n
\n

like following:

\n
model.compile(optimizer='sgd',\n              loss='mse',\n              metrics=[tf.keras.metrics.BinaryAccuracy()])\n
\n", "system": "" }, { "instruction": "Generating MNIST numbers using LSTM-CGAN in TensorFlow", "input": "", "output": "

There are a few things you can do to improve your network architecture and training phase.

\n\n
    \n
  1. Remove the tf.nn.sigmoid(logit) from both the generator and discriminator. Return just the pred.
  2. \n
  3. Use a numerically stable function to calculate your loss functions and fix the loss functions:

    \n\n

    D_loss = -tf.reduce_mean(tf.log(D_real) + tf.log(1. - D_fake))\nG_loss = -tf.reduce_mean(tf.log(D_fake))

  4. \n
\n\n

should be:

\n\n
D_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_real,\n              labels=tf.ones_like(D_real))\nD_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_fake,\n              labels=tf.zeros_like(D_fake))\n\nD_loss = tf.reduce_mean(D_loss_real + D_loss_fake)\nG_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_fake,\n              labels=tf.ones_like(D_fake)))\n
\n\n

Once you've fixed the losses and switched to a numerically stable formulation, things will go better. Also, as a rule of thumb, if there's too much noise in the loss, reduce the learning rate (the default learning rate of Adam is usually too high when training GANs).\nHope it helps

\n", "system": "" }, { "instruction": "Parallelism isn't reducing the time in dataset map", "input": "", "output": "

The problem here is that the only operation in the Dataset.map() function is a tf.py_func() op. This op calls back into the local Python interpreter to run a function in the same process. Increasing num_parallel_calls will increase the number of TensorFlow threads that attempt to call back into Python concurrently. However, Python has something called the \"Global Interpreter Lock\" that prevents more than one thread from executing code at once. As a result, all but one of these multiple parallel calls will be blocked waiting to acquire the Global Interpreter Lock, and there will be almost no parallel speedup (and perhaps even a slight slowdown).

\n\n
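The GIL effect is easy to demonstrate with a plain Python thread pool: a CPU-bound pure-Python function still produces correct results with several workers, but gets no real parallel speedup (this is a standalone sketch, and squarer here is only a guess at what the original function does):

```python
from concurrent.futures import ThreadPoolExecutor

def squarer(x):
    # CPU-bound pure-Python work: only one thread at a time can execute
    # this bytecode because of the Global Interpreter Lock.
    return x * x

data = list(range(10))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(squarer, data))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] - correct, but no speedup
```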

Your code example didn't include the definition of the squarer() function, but it might be possible to replace tf.py_func() with pure TensorFlow ops, which are implemented in C++, and can execute in parallel. For example—and just guessing by the name—you could replace it with an invocation of tf.square(x), and you might then enjoy some parallel speedup.

\n\n

Note however that if the amount of work in the function is small, like squaring a single integer, the speedup might not be very large. Parallel Dataset.map() is more useful for heavier operations, like parsing a TFRecord with tf.parse_single_example() or performing some image distortions as part of a data augmentation pipeline.

\n", "system": "" }, { "instruction": "Parallel threads with TensorFlow Dataset API and flat_map", "input": "", "output": "

To the best of my knowledge, at the moment flat_map does not offer parallelism options.\nGiven that the bulk of the computation is done in pre_processing_func, what you might use as a workaround is a parallel map call followed by some buffering, and then using a flat_map call with an identity lambda function that takes care of flattening the output.

\n\n\n\n

In code:

\n\n
NUM_THREADS = 5\nBUFFER_SIZE = 1000\n\ndef pre_processing_func(data_):\n    # data-augmentation here\n    # generate new samples starting from the sample `data_`\n    artificial_samples = generate_from_sample(data_)\n    return artificial_samples\n\ndataset_source = (tf.data.Dataset.from_tensor_slices(input_tensors).\n                  map(pre_processing_func, num_parallel_calls=NUM_THREADS).\n                  prefetch(BUFFER_SIZE).\n                  flat_map(lambda *x : tf.data.Dataset.from_tensor_slices(x)).\n                  shuffle(BUFFER_SIZE)) # my addition, probably necessary though\n
\n\n

Note (to myself and whoever will try to understand the pipeline):

\n\n

Since pre_processing_func generates an arbitrary number of new samples starting from the initial sample (organised in matrices of shape (?, 512)), the flat_map call is necessary to turn all the generated matrices into Datasets containing single samples (hence the tf.data.Dataset.from_tensor_slices(x) in the lambda) and then flatten all these datasets into one big Dataset containing individual samples.

\n\n
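The map-then-flatten behaviour can be mimicked in plain Python, which may make it clearer why the flat_map call is needed (the generate_from_sample below is a stand-in for the real augmentation function):

```python
from itertools import chain

def generate_from_sample(sample):
    # Stand-in augmentation: each input sample yields several new samples.
    return [sample, sample + 0.5, sample - 0.5]

dataset_source = [1.0, 2.0]

# map: one list of generated samples per input element
mapped = [generate_from_sample(s) for s in dataset_source]
# flat_map: flatten those lists into one stream of individual samples
flattened = list(chain.from_iterable(mapped))

print(flattened)  # [1.0, 1.5, 0.5, 2.0, 2.5, 1.5]
```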

It's probably a good idea to .shuffle() that dataset, or generated samples will be packed together.

\n", "system": "" }, { "instruction": "How to implement Tensorflow batch normalization in LSTM", "input": "", "output": "

If you want to use batch norm for RNN (LSTM or GRU), you can check out this implementation , or read the full description from blog post.

\n\n

However, layer normalization has advantages over batch norm for sequence data. Specifically, "the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent networks" (from the paper Ba, et al. Layer normalization).

\n\n
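For intuition, here is a minimal NumPy sketch of layer normalization: each sample is normalized over its own features, with no dependence on the batch:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each row (sample) over its features -- no batch statistics.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
y = layer_norm(x)

# Each row now has mean ~0 and std ~1, regardless of the other rows
# or the batch size.
print(y.mean(axis=-1), y.std(axis=-1))
```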

For layer normalization, it normalizes the summed inputs within each layer. You can check out the implementation of layer-normalization for GRU cell:

\n", "system": "" }, { "instruction": "TensorFlow: How to handle void labeled data in image segmentation?", "input": "", "output": "

I'm not 100% familiar with TF. However, have you considered using the weights parameter of the loss?
\nLooking at tf.losses.sparse_softmax_cross_entropy, it has a weights parameter

\n\n
\n

weights: Coefficients for the loss. This must be scalar or of same rank as labels

\n
\n\n

You can set the weight of \"void\" pixels to zero, thus making the loss ignore them.

\n\n
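A small NumPy sketch of the idea, with made-up per-pixel losses and a void mask:

```python
import numpy as np

# Per-pixel negative log-likelihood for 4 pixels; pixel 2 is labelled void.
per_pixel_loss = np.array([0.3, 1.2, 5.0, 0.7])
void_label = np.array([False, False, True, False])

# Weight 0 for void pixels, 1 elsewhere
weights = np.where(void_label, 0.0, 1.0)

# Weighted mean over the non-void pixels only: the 5.0 term never
# contributes to the loss (or to its gradient).
loss = np.sum(per_pixel_loss * weights) / np.sum(weights)
print(loss)  # (0.3 + 1.2 + 0.7) / 3
```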

You can also remove the reduction from tf.nn.sparse_softmax_cross_entropy_with_logits and use tf.losses.compute_weighted_loss to perform the weighting.

\n", "system": "" }, { "instruction": "How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape", "input": "", "output": "

I managed to solve the issue by using tf.dynamic_partition instead of tf.gather. I replaced the above code like this:

\n\n
# Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth\nall_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)\n# Indices to last element of each sequence.\n# Index to first element is the sequence order number times max sequence length.\n# Index to last element is the index to first element plus sequence length.\nrow_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)\n# Creating a vector of 0s and 1s that will specify what timesteps to choose.\npartitions = tf.reduce_sum(tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0)\n# Selecting the elements we want to choose.\nlast_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2)  # (batch_size, n_dim)\nlast_timesteps = last_timesteps[1]\n
\n", "system": "" }, { "instruction": "why tensorflow just outputs killed", "input": "", "output": "

When I run your code I get the same behavior, after typing dmesg you'll see a trace like, which confirms what gdelab was hinting at:

\n\n
[38607.234089] python3 invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_score_adj=0\n[38607.234090] python3 cpuset=/ mems_allowed=0\n[38607.234094] CPU: 3 PID: 1420 Comm: python3 Tainted: G           O    4.9.0-3-amd64 #1 Debian 4.9.30-2+deb9u2\n[38607.234094] Hardware name: Dell Inc. XPS 15 9560/05FFDN, BIOS 1.2.4 03/29/2017\n[38607.234096]  0000000000000000 ffffffffa9f28414 ffffa50090317cf8 ffff940effa5f040\n[38607.234097]  ffffffffa9dfe050 0000000000000000 0000000000000000 0101ffffa9d82dd0\n[38607.234098]  e09c7db7f06d0ac2 00000000ffffffff 0000000000000000 0000000000000000\n[38607.234100] Call Trace:\n[38607.234104]  [<ffffffffa9f28414>] ? dump_stack+0x5c/0x78\n[38607.234106]  [<ffffffffa9dfe050>] ? dump_header+0x78/0x1fd\n[38607.234108]  [<ffffffffa9d8047a>] ? oom_kill_process+0x21a/0x3e0\n[38607.234109]  [<ffffffffa9d800fd>] ? oom_badness+0xed/0x170\n[38607.234110]  [<ffffffffa9d80911>] ? out_of_memory+0x111/0x470\n[38607.234111]  [<ffffffffa9d85b4f>] ? __alloc_pages_slowpath+0xb7f/0xbc0\n[38607.234112]  [<ffffffffa9d85d8e>] ? __alloc_pages_nodemask+0x1fe/0x260\n[38607.234113]  [<ffffffffa9dd7c3e>] ? alloc_pages_vma+0xae/0x260\n[38607.234115]  [<ffffffffa9db39ba>] ? handle_mm_fault+0x111a/0x1350\n[38607.234117]  [<ffffffffa9c5fd84>] ? __do_page_fault+0x2a4/0x510\n[38607.234118]  [<ffffffffaa207658>] ? page_fault+0x28/0x30\n...\n[38607.234158] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name\n...\n[38607.234332] [ 1396]  1000  1396  4810969  3464995    6959      21        0             0 python3\n[38607.234332] Out of memory: Kill process 1396 (python3) score 568 or sacrifice child\n[38607.234357] Killed process 1396 (python3) total-vm:19243876kB, anon-rss:13859980kB, file-rss:0kB, shmem-rss:0kB\n[38607.720757] oom_reaper: reaped process 1396 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB\n
\n\n

Which basically means Python started to consume too much memory and the kernel decided to kill the process. If you add some prints to your code you'll see that mnist_classifier.train() is the function that is active when this happens. However, some simple tests (such as removing the logging and lowering the number of steps) did not seem to help here.

\n", "system": "" }, { "instruction": "how to load and use a saved model on tensorflow?", "input": "", "output": "

What was missing was the signature

\n\n
# Saving\nbuilder = tf.saved_model.builder.SavedModelBuilder(export_dir)\nbuilder.add_meta_graph_and_variables(sess, [\"tag\"], signature_def_map= {\n        \"model\": tf.saved_model.signature_def_utils.predict_signature_def(\n            inputs= {\"x\": x},\n            outputs= {\"finalnode\": model})\n        })\nbuilder.save()\n\n# loading\nwith tf.Session(graph=tf.Graph()) as sess:\n    tf.saved_model.loader.load(sess, [\"tag\"], export_dir)\n    graph = tf.get_default_graph()\n    x = graph.get_tensor_by_name(\"x:0\")\n    model = graph.get_tensor_by_name(\"finalnode:0\")\n    print(sess.run(model, {x: [5, 6, 7, 8]}))\n
\n", "system": "" }, { "instruction": "Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'", "input": "", "output": "

By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is \"channels-last\". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format = 'channels_first' when declaring the Convolution2D layer.

\n\n
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))\n
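Alternatively, you could transpose the data itself from channels-first to channels-last and keep Keras's default data_format; a NumPy sketch (the batch size of 10 is arbitrary):

```python
import numpy as np

# Fake batch in channels-first layout: (samples, channels, rows, cols)
x = np.zeros((10, 1, 28, 28), dtype=np.float32)

# Move the channel axis to the end: (samples, rows, cols, channels)
x_channels_last = np.transpose(x, (0, 2, 3, 1))
print(x_channels_last.shape)  # (10, 28, 28, 1)
```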
\n", "system": "" }, { "instruction": "CNN Image Recognition with Regression Output on Tensorflow", "input": "", "output": "

Check out the Udacity self-driving-car models which take an input image from a dash cam and predict a steering angle (i.e. continuous scalar) to stay on the road...usually using a regression output after one or more fully connected layers on top of the CNN layers.

\n

https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models

\n

Here is a typical model:

\n

https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models/autumn

\n

...it uses tf.atan() or you can use tf.tanh() or just linear to get your final output y.

\n

Use MSE for your loss function.

\n
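MSE itself is straightforward; here is a NumPy sketch for a batch of predicted vs. true steering angles (made-up numbers):

```python
import numpy as np

y_true = np.array([0.10, -0.05, 0.00])   # true steering angles (radians)
y_pred = np.array([0.12, -0.01, 0.03])   # network outputs

# Mean squared error over the batch
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # ~0.000967
```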

Here is another example in keras...

\n
model = models.Sequential()\nmodel.add(convolutional.Convolution2D(16, 3, 3, input_shape=(32, 128, 3), activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(convolutional.Convolution2D(32, 3, 3, activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(convolutional.Convolution2D(64, 3, 3, activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(core.Flatten())\nmodel.add(core.Dense(500, activation='relu'))\nmodel.add(core.Dropout(.5))\nmodel.add(core.Dense(100, activation='relu'))\nmodel.add(core.Dropout(.25))\nmodel.add(core.Dense(20, activation='relu'))\nmodel.add(core.Dense(1))\nmodel.compile(optimizer=optimizers.Adam(lr=1e-04), loss='mean_squared_error')\n
\n

The key difference from the MNIST examples is that instead of funneling down to a N-dim vector of logits into softmax w/ cross entropy loss, for your regression output you take it down to a 1-dim vector w/ MSE loss. (you can also have a mix of multiple classification and regression outputs in the final layer...like in YOLO object detection)

\n", "system": "" }, { "instruction": "How to get weights in tf.layers.dense?", "input": "", "output": "

The weights are added as a variable named kernel, so you could use

\n\n
import os\n\nx = tf.layers.dense(...)\nweights = tf.get_default_graph().get_tensor_by_name(\n  os.path.split(x.name)[0] + '/kernel:0')\n
\n\n

You can obviously replace tf.get_default_graph() by any other graph you are working in.  

\n", "system": "" }, { "instruction": "libcublas.so.8.0 error with tensorflow", "input": "", "output": "

You need to install Cuda 8.0 and configure the environment as below:

\n\n
export PATH=\"$PATH:/usr/local/cuda-8.0/bin\"\nexport LD_LIBRARY_PATH=\"/usr/local/cuda-8.0/lib64\"\n
\n", "system": "" }, { "instruction": "TensorFlow TypeError: Value passed to parameter input has DataType uint8 not in list of allowed values: float16, float32", "input": "", "output": "

The image from your input pipeline is of type uint8; you need to cast it to float32. You can do this after the image JPEG decoder:

\n\n
image = tf.image.decode_jpeg(...\nimage = tf.cast(image, tf.float32)\n
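If you preprocess outside the graph instead, the equivalent NumPy cast looks like this (the rescale to [0, 1] is a common extra step, not something the error requires):

```python
import numpy as np

image_uint8 = np.array([[0, 128, 255]], dtype=np.uint8)

# Cast to float32, then (optionally) rescale pixel values into [0, 1]
image_float = image_uint8.astype(np.float32)
image_scaled = image_float / 255.0

print(image_float.dtype, image_scaled.max())  # float32 1.0
```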
\n", "system": "" }, { "instruction": "Replace Validation Monitors with tf.train.SessionRunHook when using Estimators", "input": "", "output": "

There's an undocumented utility called monitors.replace_monitors_with_hooks() which converts monitors to hooks. The method accepts (i) a list which may contain both monitors and hooks and (ii) the Estimator for which the hooks will be used, and then returns a list of hooks by wrapping a SessionRunHook around each Monitor.

\n\n
from tensorflow.contrib.learn.python.learn import monitors as monitor_lib\n\nclf = tf.estimator.Estimator(...)\n\nlist_of_monitors_and_hooks = [tf.contrib.learn.monitors.ValidationMonitor(...)]\nhooks = monitor_lib.replace_monitors_with_hooks(list_of_monitors_and_hooks, clf)\n
\n\n

This isn't really a true solution to the problem of fully replacing the ValidationMonitor—we're just wrapping it up with a non-deprecated function instead. However, I can say this has worked for me so far in that it maintained all the functionality I need from the ValidationMonitor (i.e. evaluating every n steps, early stopping using a metric, etc.)

\n\n

One more thing—to use this hook you'll need to update from a tf.contrib.learn.Estimator (which only accepts monitors) to the more full-fledged and official tf.estimator.Estimator (which only accepts hooks). So, you should instantiate your classifier as a tf.estimator.DNNClassifier, and train using its method train() instead (which is just a re-naming of fit()):

\n\n
clf = tf.estimator.Estimator(...)\n\n...\n\nclf.train(\n    input_fn=...\n    ...\n    hooks=hooks)\n
\n", "system": "" }, { "instruction": "No Module Named '_pywrap_tensorflow_internal'", "input": "", "output": "

I came across the same issue today, please switch to cuDNN v5.1 Library for Windows instead as @mickdelaney suggested and then try to

\n\n
    \n
  1. Check environment settings of CUDA, normally all the settings of CUDA had been added to Windows environment

  2. \n
  3. Copy files in bin, lib and include of cuDNN to bin, lib and include of CUDA respectively. Normally the directory is C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA

  4. \n
\n\n

And then you can import tensorflow and run your code. Good luck!

\n", "system": "" }, { "instruction": "Keras - is it possible to view the weights and biases of models in Tensorboard", "input": "", "output": "

You can get the weights and biases per layer and for the entire model with .get_weights().

\n\n

For example if the first layer of your model is the dense layer for which you would like to have your weights and biases, you can get them with:

\n\n
weights, biases = model.layers[0].get_weights()\n
\n", "system": "" }, { "instruction": "Looping over a tensor", "input": "", "output": "

To loop over a tensor you could try tf.unstack

\n\n
\n

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

\n
\n\n

So adding 1 to each tensor would look something like:

\n\n
import numpy as np\nimport tensorflow as tf\n\n# tf.unstack needs a statically known size along the unpacked axis\nx = tf.placeholder(tf.float32, shape=(5, 10))\nx_unpacked = tf.unstack(x) # defaults to axis 0, returns a list of tensors\n\nprocessed = [] # this will be the list of processed tensors\nfor t in x_unpacked:\n    # do whatever\n    result_tensor = t + 1\n    processed.append(result_tensor)\n\noutput = tf.concat(processed, 0)\n\nwith tf.Session() as sess:\n    print(sess.run([output], feed_dict={x: np.zeros((5, 10))}))\n
\n\n

Obviously you can further unpack each tensor from the list to process it, down to single elements. To avoid lots of nested unpacking though, you could maybe try flattening x with tf.reshape(x, [-1]) first, and then loop over it like

\n\n
flattened_unpacked = tf.unstack(tf.reshape(x, [-1]))\nfor elem in flattened_unpacked:\n    process(elem)\n
\n\n

In this case elem is a scalar.

\n", "system": "" }, { "instruction": "Keras error "You must feed a value for placeholder tensor 'bidirectional_1/keras_learning_phase' with dtype bool"", "input": "", "output": "

Try to import K and set learning phase before your model.

\n\n
from keras import backend as K\n\nK.set_learning_phase(1) #set learning phase\n
\n\n

From this issue

\n", "system": "" }, { "instruction": "Tensorflow equivalent to numpy.diff", "input": "", "output": "

Try this:

\n\n
def tf_diff_axis_0(a):\n    return a[1:]-a[:-1]\n\ndef tf_diff_axis_1(a):\n    return a[:,1:]-a[:,:-1]\n
\n\n

To check:

\n\n
import numpy as np\nimport tensorflow as tf\n\nx0=np.arange(5)+np.zeros((5,5))\nsess = tf.Session()\nnp.diff(x0, axis=0) == sess.run(tf_diff_axis_0(tf.constant(x0)))\nnp.diff(x0, axis=1) == sess.run(tf_diff_axis_1(tf.constant(x0)))\n
\n", "system": "" }, { "instruction": "How to set the input of a Keras layer with a Tensorflow tensor?", "input": "", "output": "

After you are done with pre-processing, you can add the tensor as an input layer by using the tensor param of Input.

\n\n

So in your case:

\n\n
tf_embedding_input = ...    # pre-processing output tensor\n\n# Keras model\nmodel = Sequential()\nmodel.add(Input(tensor=tf_embedding_input)) \nmodel.add(Embedding(max_features, 128, input_length=maxlen))\n
\n", "system": "" }, { "instruction": "TensorBoard Distributions and Histograms with Keras and fit_generator", "input": "", "output": "

There is no easy way to just plug it in with one line of code, you have to write your summaries by hand.

\n

The good news is that it's not difficult and you can use the TensorBoard callback code in Keras as a reference.\n(There is also a version 2 ready for TensorFlow 2.x.)

\n

Basically, write a function e.g. write_summaries(model) and call it whenever you want to write your summaries (e.g. just after your fit_generator())

\n

Inside your write_summaries(model) function, use tf.summary.histogram and the other tf.summary functions to log the data you want to see on TensorBoard.

\n

If you don't know exactly how, check the official tutorial\nand this great example of MNIST with summaries.

\n", "system": "" }, { "instruction": "Loss suddenly increases with Adam Optimizer in Tensorflow", "input": "", "output": "

My experience over the last months is the following:\nAdam is very easy to use because you don't have to play with the initial learning rate very much, and it almost always works. However, when approaching convergence Adam does not really settle on a solution but jiggles around at higher iterations, while SGD gives an almost perfectly shaped loss plot and seems to converge much better at higher iterations.\nBut changing little parts of the setup requires adjusting the SGD parameters, or you will end up with NaNs... For experiments on architectures and general approaches I favor Adam, but if you want to get the best version of one chosen architecture you should use SGD and at least compare the solutions.

\n\n

I also noticed that a good initial SGD setup (learning rate, weight decay etc.) converges as fast as using Adam, at least for my setup.\nHope this may help some of you!

\n\n

EDIT: Please note that the effects in my initial question are NOT normal even with Adam. Seems like I had a bug but I can't really remember the issue there.

\n", "system": "" }, { "instruction": "How do you make TensorFlow + Keras fast with a TFRecord dataset?", "input": "", "output": "

I don't use tfrecord dataset format so won't argue on the pros and cons, but I got interested in extending Keras to support the same.

\n\n

github.com/indraforyou/keras_tfrecord is the repository. Will briefly explain the main changes.

\n\n
\n

Dataset creation and loading

\n
\n\n

data_to_tfrecord and read_and_decode here take care of creating the tfrecord dataset and loading it back. Special care must be taken when implementing read_and_decode, otherwise you will face cryptic errors during training.

\n\n
\n

Initialization and Keras model

\n
\n\n

Now both tf.train.shuffle_batch and the Keras Input layer return tensors. But the one returned by tf.train.shuffle_batch doesn't have the metadata needed by Keras internally. As it turns out, any tensor can easily be turned into a tensor with Keras metadata by calling the Input layer with the tensor param.

\n\n

So this takes care of initialization:

\n\n
x_train_, y_train_ = ktfr.read_and_decode('train.mnist.tfrecord', one_hot=True, n_class=nb_classes, is_train=True)\n\nx_train_batch, y_train_batch = K.tf.train.shuffle_batch([x_train_, y_train_],\n                                                batch_size=batch_size,\n                                                capacity=2000,\n                                                min_after_dequeue=1000,\n                                                num_threads=32) # set the number of threads here\n\nx_train_inp = Input(tensor=x_train_batch)\n
\n\n

Now with x_train_inp any keras model can be developed.

\n\n
\n

Training (simple)

\n
\n\n

Let's say train_out is the output tensor of your Keras model. You can easily write a custom training loop along the lines of:

\n\n
loss = tf.reduce_mean(categorical_crossentropy(y_train_batch, train_out))\ntrain_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)\n\n\n# sess.run(tf.global_variables_initializer())\nsess.run(tf.initialize_all_variables())\n\nwith sess.as_default():\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(sess=sess, coord=coord)\n\n    try:\n      step = 0\n      while not coord.should_stop():\n        start_time = time.time()\n\n        _, loss_value = sess.run([train_op, loss], feed_dict={K.learning_phase(): 0})\n\n        duration = time.time() - start_time\n\n        if step % 100 == 0:\n          print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value,\n                                                     duration))\n        step += 1\n    except tf.errors.OutOfRangeError:\n      print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))\n    finally:\n      coord.request_stop()\n\n    coord.join(threads)\n    sess.close()\n
\n\n
\n

Training (keras style)

\n
\n\n

One of the features of Keras that makes it so attractive is its generalized training mechanism with callback functions.

\n\n

But to support tfrecords-type training, there are several changes that are needed in the fit function

\n\n\n\n

But all this can easily be supported by another flag parameter. What makes things messy are the Keras features sample_weight and class_weight, which are used to weight each sample and each class. For these, in compile() Keras creates placeholders (here), and placeholders are also implicitly created for the targets (here), which is not needed in our case since the labels are already fed in by the tfrecord readers. These placeholders need to be fed in during the session run, which is unnecessary in our case.

\n\n
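As a rough illustration of what sample_weight amounts to (toy numbers, not Keras's actual implementation), the loss reduction becomes a weighted mean over samples:

```python
import numpy as np

# Toy per-sample losses and weights; the third sample is ignored entirely.
losses = np.array([1.0, 2.0, 3.0])
sample_weight = np.array([1.0, 0.5, 0.0])

# Weighted mean of the per-sample losses.
weighted = (losses * sample_weight).sum() / sample_weight.sum()
print(weighted)  # 1.333...
```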

So, taking these changes into account, compile_tfrecord (here) and fit_tfrecord (here) are extensions of compile and fit and share, say, 95% of the code.

\n\n

They can be used in the following way:

\n\n
import keras_tfrecord as ktfr\n\ntrain_model = Model(input=x_train_inp, output=train_out)\nktfr.compile_tfrecord(train_model, optimizer='rmsprop', loss='categorical_crossentropy', out_tensor_lst=[y_train_batch], metrics=['accuracy'])\n\ntrain_model.summary()\n\nktfr.fit_tfrecord(train_model, X_train.shape[0], batch_size, nb_epoch=3)\ntrain_model.save_weights('saved_wt.h5')\n
\n\n

You are welcome to improve on the code and submit pull requests.

\n", "system": "" }, { "instruction": "How does data normalization work in keras during prediction?", "input": "", "output": "

Yes - this is a really huge downside of Keras's ImageDataGenerator: you cannot provide the standardization statistics on your own. But there is an easy way to overcome this issue.

\n\n

Assuming that you have a function normalize(x) which normalizes an image batch (remember that the generator provides not a single image but an array of images - a batch with shape (nr_of_examples_in_batch, image_dims ..)), you can make your own generator with normalization by using:

\n\n
def gen_with_norm(gen, normalize):\n    for x, y in gen:\n        yield normalize(x), y\n
\n\n

Then you might simply use gen_with_norm(datagen.flow, normalize) instead of datagen.flow.

\n\n

Moreover - you might recover the mean and std computed by a fit method by getting it from appropriate fields in datagen (e.g. datagen.mean and datagen.std).
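For instance, a hand-rolled normalize built from those recovered statistics might look like this (the concrete mean/std values here are made-up stand-ins for datagen.mean and datagen.std, which in practice come from datagen.fit(x_train)):

```python
import numpy as np

# Stand-ins for the statistics recovered from a fitted ImageDataGenerator
# (datagen.mean and datagen.std).
mean, std = 0.5, 0.25

def normalize(x):
    return (x - mean) / std

batch = np.full((2, 4, 4, 3), 0.75)   # a toy batch of two 4x4 RGB images
normed = normalize(batch)
print(normed[0, 0, 0, 0])  # (0.75 - 0.5) / 0.25 = 1.0
```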

\n", "system": "" }, { "instruction": "Why was Eigen chosen for TensorFlow?", "input": "", "output": "

I think that one of the key features that drove the use of Eigen in the first place is that Eigen features its own highly optimized matrix product kernels, whereas all other competitors have to be linked to some BLAS library. Moreover, the code of Eigen's product kernel is C++ with easy access to low-level internal kernels, so it was 'easy' for them to tweak and extend it to match their needs. This way Google has been able to develop the Tensor module with high CPU performance in a pure header-only fashion. The support for CUDA, and now OpenCL via SyCL, came later; those are not intrinsic features of Eigen that drove the initial choice.

\n", "system": "" }, { "instruction": "Scipy sparse CSR matrix to TensorFlow SparseTensor - Mini-Batch gradient descent", "input": "", "output": "

I can answer the first part of your question.

\n\n
def convert_sparse_matrix_to_sparse_tensor(X):\n    coo = X.tocoo()\n    indices = np.mat([coo.row, coo.col]).transpose()\n    return tf.SparseTensor(indices, coo.data, coo.shape)\n
\n\n

First you convert the matrix to COO format. Then you extract the indices, values, and shape and pass those directly to the SparseTensor constructor.
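To see what the indices array looks like, here is the same construction on a tiny hand-made COO triple (plain numpy, no scipy or TensorFlow needed; the values are arbitrary):

```python
import numpy as np

# COO representation of a 3x4 matrix with three nonzeros,
# mimicking coo.row, coo.col and coo.data from scipy.
row = np.array([0, 1, 2])
col = np.array([1, 3, 0])
data = np.array([4.0, 5.0, 6.0])

# One [row, col] pair per nonzero -- exactly what SparseTensor expects.
indices = np.array([row, col]).transpose()
print(indices.tolist())  # [[0, 1], [1, 3], [2, 0]]
```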

\n", "system": "" }, { "instruction": "EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set", "input": "", "output": "

This happens because, for some reason, you have an incomplete download of the MNIST dataset.

\n\n

You will have to manually delete the downloaded folder which usually resides in ~/.keras/datasets or any path specified by you relative to this path, in your case MNIST_data.

\n\n

Perform the following steps in the terminal (ctrl + alt + t):

\n\n
    \n
  1. cd ~/.keras/datasets/
  2. rm -rf \"dataset name\"
\n\n

You should be good to go!

\n", "system": "" }, { "instruction": "What's the difference between Variable and ResourceVariable in Tensorflow", "input": "", "output": "

ResourceVariable is the replacement for Variable, that aims to clean up some of the messier aspects of the semantics of Variable.

\n

ResourceVariable is the default in TF 2.0 and you very likely don't care about the differences between the two unless you are working on details deep inside the Tensorflow implementation. When eager execution is enabled tf.Variable also creates resource variables.

\n

So just use tf.Variable for now; it's almost certainly what you want. If you experience issues that look like race conditions or bugs from inconsistent variable values, you can try enabling resource variables (by either passing use_resource=True to your variable-creating code or calling tf.enable_resource_variables() in TF 1.x).

\n", "system": "" }, { "instruction": "Tensorflow. Converting unknown dimension size of a tensor to int", "input": "", "output": "

You have to use a Graph operation:

\n\n
a = tf.placeholder(tf.float32, shape=(None, 3072))\nb = tf.shape(a)[0]\n
\n\n

returns

\n\n
<tf.Tensor 'strided_slice:0' shape=() dtype=int32>\n
\n\n

while b = a.get_shape()[0]\nreturns

\n\n
Dimension(None)\n
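The distinction can be sketched without TensorFlow: the declared shape carries an unknown (None) batch dimension, while the runtime shape of actual data is a plain int (which is what tf.shape(a)[0] evaluates to):

```python
import numpy as np

# The placeholder's declared shape: batch size unknown at graph-build time.
declared_shape = (None, 3072)

# At run time, a concrete batch has an ordinary integer first dimension.
batch = np.zeros((16, 3072))
print(declared_shape[0], batch.shape[0])  # None 16
```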
\n", "system": "" }, { "instruction": "Tensorflow: why 'pip uninstall tensorflow' cannot find tensorflow", "input": "", "output": "

It could be because you didn't install Tensorflow using pip, but using python setup.py develop instead as your link shows.

\n\n

pip uninstall is likely to fail if the package is installed using python setup.py install as they do not leave behind metadata to determine what files were installed.

\n\n

Therefore, you should be able to uninstall Tensorflow with the option -u or --uninstall of develop

\n\n
cd /home/AIJ/tensorflow/_python_build\npython setup.py develop --uninstall\n
\n\n

To answer the second (interesting) question about the two dist-packages created under /usr/lib/python2.7 and /usr/local/lib/python2.7: there already exists a great Stack Overflow answer on the topic.

\n\n

PS: Tensorflow is a good library; you should consider not uninstalling it :)

\n", "system": "" }, { "instruction": "Configuring Tensorflow to use all CPU's", "input": "", "output": "

CPUs are used via a \"device\" which is just a threadpool. You can control the number of threads if you feel like you need more:

\n\n
sess = tf.Session(config=tf.ConfigProto(\n  intra_op_parallelism_threads=NUM_THREADS))\n
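A sensible value to plug in for NUM_THREADS (my suggestion, not from the original answer) is the machine's logical core count:

```python
import multiprocessing

# Use the number of logical cores as the thread count for the CPU device.
NUM_THREADS = multiprocessing.cpu_count()
print(NUM_THREADS)
```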
\n", "system": "" }, { "instruction": "what does x = tf.placeholder(tf.float32, [None, 784]) means?", "input": "", "output": "

From the tutorial: Deep MNIST for Experts\n

\n\n
\n

Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size.

\n
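A quick numpy sketch of what that shape admits (toy batches, for illustration):

```python
import numpy as np

# Any batch size fits [None, 784], as long as each row is a flattened
# 28x28 image with 784 values.
batch_of_32 = np.zeros((32, 28 * 28))
batch_of_1 = np.zeros((1, 28 * 28))
print(batch_of_32.shape, batch_of_1.shape)  # (32, 784) (1, 784)
```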
\n", "system": "" }, { "instruction": "TensorFlow strings: what they are and how to work with them", "input": "", "output": "

Unlike Python, where a string can be treated as a list of characters for the purposes of slicing and such, TensorFlow's tf.strings are indivisible values. For instance, x below is a Tensor with shape (2,), each element of which is a variable-length string.

\n\n
x = tf.constant([\"This is a string\", \"This is another string\"])\n
\n\n

However, to achieve what you want, TensorFlow provides the tf.decode_raw operator. It takes a tf.string tensor as input, but can decode the string into any other primitive data type. For instance, to interpret the string as a tensor of characters, you can do the following :

\n\n
x = tf.constant(\"This is string\")\nx = tf.decode_raw(x, tf.uint8)\ny = x[:4]\nsess = tf.InteractiveSession()\nprint(y.eval())\n# prints [ 84 104 105 115]\n
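What tf.decode_raw does to the string can be mimicked in plain Python by reading the character codes directly:

```python
# Reinterpret the string's characters as unsigned 8-bit integers,
# mirroring tf.decode_raw(x, tf.uint8).
s = 'This is string'
codes = [ord(c) for c in s]
print(codes[:4])  # [84, 104, 105, 115] -- 'T', 'h', 'i', 's'
```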
\n", "system": "" }, { "instruction": "What is tf.nn.max_pool's ksize parameter used for?", "input": "", "output": "

The documentation states:

\n\n
\n

ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.

\n
\n\n

In general for images, your input is of shape [batch_size, 64, 64, 3] for an RGB image of 64x64 pixels.

\n\n

The kernel size ksize will typically be [1, 2, 2, 1] if you have a 2x2 window over which you take the maximum. On the batch size dimension and the channels dimension, ksize is 1 because we don't want to take the maximum over multiple examples or over multiple channels.
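A minimal numpy sketch of that 2x2 window (single image, single channel, stride 2 assumed for illustration):

```python
import numpy as np

# A 4x4 single-channel 'image'; 2x2 max pooling keeps the largest
# value of each non-overlapping 2x2 block.
img = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 8, 3, 2],
                [7, 6, 1, 0]])
pooled = img.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled.tolist())  # [[4, 8], [9, 3]]
```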

\n", "system": "" }, { "instruction": "What is a good explanation of how to read the histogram feature of TensorBoard?", "input": "", "output": "

The lines they are talking about are described below:\n[image: TensorBoard histogram with the percentile lines annotated]

\n\n

As for the meaning of percentile, check out the Wikipedia article.\nBasically, the 93rd percentile line means that 93% of the values are situated below it.
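A quick numpy check of that statement:

```python
import numpy as np

# With values 1..100, 93% of them lie at or below the 93rd percentile.
values = np.arange(1, 101)
p93 = np.percentile(values, 93)
frac_below = (values <= p93).mean()
print(p93, frac_below)  # ~93.07, 0.93
```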

\n", "system": "" }, { "instruction": "Keras : How should I prepare input data for RNN?", "input": "", "output": "

If you only want to predict the output using the most recent 5 inputs, there is no need to ever provide the full 600 time steps of any training sample. My suggestion would be to pass the training data in the following manner:

\n\n
             t=0  t=1  t=2  t=3  t=4  t=5  ...  t=598  t=599\nsample0      |---------------------|\nsample0           |---------------------|\nsample0                |-----------------\n...\nsample0                                         ----|\nsample0                                         ----------|\nsample1      |---------------------|\nsample1           |---------------------|\nsample1                |-----------------\n....\n....\nsample6751                                      ----|\nsample6751                                      ----------|\n
\n\n

The total number of training sequences will sum up to

\n\n
(600 - 4) * 6752 = 4024192    # (nb_timesteps - discarded_tailing_timesteps) * nb_samples\n
\n\n

Each training sequence consists of 5 time steps. At each time step of every sequence you pass all 13 elements of the feature vector. Subsequently, the shape of the training data will be (4024192, 5, 13).

\n\n

This loop can reshape your data:

\n\n\n\n
input = np.random.rand(6752,600,13)\nnb_timesteps = 5\n\nflag = 0\n\nfor sample in range(input.shape[0]):\n    tmp = np.array([input[sample,i:i+nb_timesteps,:] for i in range(input.shape[1] - nb_timesteps + 1)])\n\n    if flag==0:\n        new_input = tmp\n        flag = 1\n\n    else:\n        new_input = np.concatenate((new_input,tmp))\n
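A down-scaled sanity check of the same windowing logic (2 samples, 6 time steps, 3 features, window length 5):

```python
import numpy as np

inp = np.arange(2 * 6 * 3, dtype=float).reshape(2, 6, 3)
nb_timesteps = 5

# (6 - 5 + 1) windows per sample, 2 samples -> 4 sequences of shape (5, 3),
# matching the (nb_timesteps - discarded_tailing_timesteps) * nb_samples count.
windows = np.concatenate([
    np.array([inp[s, i:i + nb_timesteps, :]
              for i in range(inp.shape[1] - nb_timesteps + 1)])
    for s in range(inp.shape[0])
])
print(windows.shape)  # (4, 5, 3)
```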
\n", "system": "" }, { "instruction": "How to train a RNN with LSTM cells for time series prediction", "input": "", "output": "

I'm just about to learn LSTMs in TensorFlow, and have tried to implement an example which (luckily) tries to predict some time series / number series generated by a simple math function.

\n\n

But I'm using a different way to structure the data for training, motivated by Unsupervised Learning of Video Representations using LSTMs:

\n\n

LSTM Future Predictor Model

\n\n

Option 5:

\n\n
input data               label     \n1,2,3,4                  5,6,7,8\n2,3,4,5                  6,7,8,9\n3,4,5,6                  7,8,9,10\n...\n
\n\n

Besides this paper, I (tried to) take inspiration from the given TensorFlow RNN examples. My current complete solution looks like this:

\n\n
import math\nimport random\nimport numpy as np\nimport tensorflow as tf\n\nLSTM_SIZE = 64\nLSTM_LAYERS = 2\nBATCH_SIZE = 16\nNUM_T_STEPS = 4\nMAX_STEPS = 1000\nLAMBDA_REG = 5e-4\n\n\ndef ground_truth_func(i, j, t):\n    return i * math.pow(t, 2) + j\n\n\ndef get_batch(batch_size):\n    seq = np.zeros([batch_size, NUM_T_STEPS, 1], dtype=np.float32)\n    tgt = np.zeros([batch_size, NUM_T_STEPS], dtype=np.float32)\n\n    for b in xrange(batch_size):\n        i = float(random.randint(-25, 25))\n        j = float(random.randint(-100, 100))\n        for t in xrange(NUM_T_STEPS):\n            value = ground_truth_func(i, j, t)\n            seq[b, t, 0] = value\n\n        for t in xrange(NUM_T_STEPS):\n            tgt[b, t] = ground_truth_func(i, j, t + NUM_T_STEPS)\n    return seq, tgt\n\n\n# Placeholder for the inputs in a given iteration\nsequence = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS, 1])\ntarget = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS])\n\nfc1_weight = tf.get_variable('w1', [LSTM_SIZE, 1], initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0))\nfc1_bias = tf.get_variable('b1', [1], initializer=tf.constant_initializer(0.1))\n\n# ENCODER\nwith tf.variable_scope('ENC_LSTM'):\n    lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)\n    multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)\n    initial_state = multi_lstm.zero_state(BATCH_SIZE, tf.float32)\n    state = initial_state\n    for t_step in xrange(NUM_T_STEPS):\n        if t_step > 0:\n            tf.get_variable_scope().reuse_variables()\n\n        # state value is updated after processing each batch of sequences\n        output, state = multi_lstm(sequence[:, t_step, :], state)\n\nlearned_representation = state\n\n# DECODER\nwith tf.variable_scope('DEC_LSTM'):\n    lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)\n    multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)\n    state = learned_representation\n    logits_stacked = None\n    loss = 0.0\n    for t_step in 
xrange(NUM_T_STEPS):\n        if t_step > 0:\n            tf.get_variable_scope().reuse_variables()\n\n        # state value is updated after processing each batch of sequences\n        output, state = multi_lstm(sequence[:, t_step, :], state)\n        # output can be used to make next number prediction\n        logits = tf.matmul(output, fc1_weight) + fc1_bias\n\n        if logits_stacked is None:\n            logits_stacked = logits\n        else:\n            logits_stacked = tf.concat(1, [logits_stacked, logits])\n\n        loss += tf.reduce_sum(tf.square(logits - target[:, t_step])) / BATCH_SIZE\n\nreg_loss = loss + LAMBDA_REG * (tf.nn.l2_loss(fc1_weight) + tf.nn.l2_loss(fc1_bias))\n\ntrain = tf.train.AdamOptimizer().minimize(reg_loss)\n\nwith tf.Session() as sess:\n    sess.run(tf.initialize_all_variables())\n\n    total_loss = 0.0\n    for step in xrange(MAX_STEPS):\n        seq_batch, target_batch = get_batch(BATCH_SIZE)\n\n        feed = {sequence: seq_batch, target: target_batch}\n        _, current_loss = sess.run([train, reg_loss], feed)\n        if step % 10 == 0:\n            print(\"@{}: {}\".format(step, current_loss))\n        total_loss += current_loss\n\n    print('Total loss:', total_loss)\n\n    print('### SIMPLE EVAL: ###')\n    seq_batch, target_batch = get_batch(BATCH_SIZE)\n    feed = {sequence: seq_batch, target: target_batch}\n    prediction = sess.run([logits_stacked], feed)\n    for b in xrange(BATCH_SIZE):\n        print(\"{} -> {})\".format(str(seq_batch[b, :, 0]), target_batch[b, :]))\n        print(\" `-> Prediction: {}\".format(prediction[0][b]))\n
\n\n

Sample output of this looks like this:

\n\n
### SIMPLE EVAL: ###\n# [input seq] -> [target prediction]\n#  `-> Prediction: [model prediction]  \n[  33.   53.  113.  213.] -> [  353.   533.   753.  1013.])\n `-> Prediction: [ 19.74548721  28.3149128   33.11489105  35.06603241]\n[ -17.  -32.  -77. -152.] -> [-257. -392. -557. -752.])\n `-> Prediction: [-16.38951683 -24.3657589  -29.49801064 -31.58583832]\n[ -7.  -4.   5.  20.] -> [  41.   68.  101.  140.])\n `-> Prediction: [ 14.14126873  22.74848557  31.29668617  36.73633194]\n...\n
\n\n

The model is an LSTM autoencoder with 2 layers each in the encoder and the decoder.

\n\n

Unfortunately, as you can see in the results, this model does not learn the sequence properly. It might be the case that I'm just making a bad mistake somewhere, or that 1000-10000 training steps are just way too few for an LSTM. As I said, I'm also just starting to understand/use LSTMs properly.\nBut hopefully this can give you some inspiration regarding the implementation.

\n", "system": "" }, { "instruction": "TensorFlow: cast a float64 tensor to float32", "input": "", "output": "

The short answer is that you can convert a tensor from tf.float64 to tf.float32 using the tf.cast() op:

\n\n
loss = tf.cast(loss, tf.float32)\n
\n\n

The longer answer is that this will not solve all of your problems with the optimizers. (The lack of support for tf.float64 is a known issue.) The optimizers require that all of the tf.Variable objects that you are trying to optimize must also have type tf.float32.

\n", "system": "" }, { "instruction": "Initializing tensorflow Variable with an array larger than 2GB", "input": "", "output": "

\nIt seems like the only option is to use a placeholder. The cleanest way I can find is to initialize to a placeholder directly:

\n\n
X_init = tf.placeholder(tf.float32, shape=(3000000, 300))\nX = tf.Variable(X_init)\n# The rest of the setup...\nsess.run(tf.initialize_all_variables(), feed_dict={X_init: model.syn0})\n
\n", "system": "" }, { "instruction": "Running TensorFlow on a Slurm Cluster?", "input": "", "output": "

It's relatively simple.

\n\n

Under the simplifying assumption that you request one process per host, slurm will provide you with all the information you need in environment variables, specifically SLURM_PROCID, SLURM_NPROCS and SLURM_NODELIST.

\n\n

For example, you can initialize your task index, the number of tasks and the nodelist as follows:

\n\n
from hostlist import expand_hostlist\ntask_index  = int( os.environ['SLURM_PROCID'] )\nn_tasks     = int( os.environ['SLURM_NPROCS'] )\ntf_hostlist = [ (\"%s:22222\" % host) for host in\n                expand_hostlist( os.environ['SLURM_NODELIST']) ]  \n
\n\n

Note that slurm gives you a host list in its compressed format (e.g., \"myhost[11-99]\") that you need to expand. I do that with the hostlist module by Kent Engstr\u00f6m, available here: https://pypi.python.org/pypi/python-hostlist

\n\n
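As a rough illustration of what that expansion does, here is a hand-rolled sketch for the simple 'prefix[start-end]' case (the real hostlist module handles zero-padding, comma-separated ranges, and much more):

```python
import re

# Hand-rolled expansion of a compressed hostlist like 'myhost[11-13]';
# a sketch only -- the real hostlist module handles padding, commas, etc.
def expand_simple(hosts):
    m = re.fullmatch(r'(\w+)\[(\d+)-(\d+)\]', hosts)
    if not m:
        return [hosts]
    prefix, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
    return ['%s%d' % (prefix, n) for n in range(lo, hi + 1)]

print(expand_simple('myhost[11-13]'))  # ['myhost11', 'myhost12', 'myhost13']
```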

At that point, you can go right ahead and create your TensorFlow cluster specification and server with the information you have available, e.g.:

\n\n
cluster = tf.train.ClusterSpec( {\"your_taskname\" : tf_hostlist } )\nserver  = tf.train.Server( cluster.as_cluster_def(),\n                           job_name   = \"your_taskname\",\n                           task_index = task_index )\n
\n\n

And you're set! You can now perform TensorFlow node placement on a specific host of your allocation with the usual syntax:

\n\n
for idx in range(n_tasks):\n   with tf.device(\"/job:your_taskname/task:%d\" % idx ):\n       ...\n
\n\n

A flaw with the code reported above is that all your jobs will instruct Tensorflow to start servers listening on the fixed port 22222. If multiple such jobs happen to be scheduled to the same node, the second one will fail to listen on 22222.

\n\n

A better solution is to let slurm reserve ports for each job. You need to bring your slurm administrator on board and ask them to configure slurm so that it allows you to request ports with the --resv-ports option. In practice, this requires asking them to add a line like the following to their slurm.conf:

\n\n
MpiParams=ports=15000-19999\n
\n\n

Before you bug your slurm admin, check what options are already configured, e.g., with:

\n\n
scontrol show config | grep MpiParams\n
\n\n

If your site already uses an old version of OpenMPI, there's a chance an option like this is already in place.

\n\n

Then, amend my first snippet of code as follows:

\n\n
from hostlist import expand_hostlist\ntask_index  = int( os.environ['SLURM_PROCID'] )\nn_tasks     = int( os.environ['SLURM_NPROCS'] )\nport        = int( os.environ['SLURM_STEP_RESV_PORTS'].split('-')[0] )\ntf_hostlist = [ (\"%s:%s\" % (host,port)) for host in\n                expand_hostlist( os.environ['SLURM_NODELIST']) ]  \n
\n\n

Good luck!

\n", "system": "" } ]