diff --git "a/data.json" "b/data.json" new file mode 100644--- /dev/null +++ "b/data.json" @@ -0,0 +1,1658 @@ +[ + { + "instruction": "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2", + "input": "", + "output": "

What is this warning about?

\n

Modern CPUs provide many low-level instructions beyond the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:

\n
\n

Advanced Vector Extensions (AVX) are extensions to the x86 instruction\nset architecture for microprocessors from Intel and AMD proposed by\nIntel in March 2008 and first supported by Intel with the Sandy\nBridge processor shipping in Q1 2011 and later on by AMD with the\nBulldozer processor shipping in Q3 2011. AVX provides new features,\nnew instructions and a new coding scheme.

\n
\n

In particular, AVX introduces fused multiply-accumulate (FMA) operations, which speed up linear algebra computations, namely dot products, matrix multiplication, convolution, etc. Almost every kind of machine-learning training involves a great deal of these operations, and hence it will be faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!).
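The speed-up is easiest to see on the dot product itself. Here is a pure-Python sketch of the multiply-accumulate pattern that FMA executes as a single hardware instruction (illustrative only; Python itself will not emit FMA instructions):

```python
def dot(xs, ys):
    """Dot product as a chain of multiply-accumulate steps.

    On an AVX/FMA-capable CPU, each `acc + x * y` step maps to one fused
    instruction, and wide vector registers process several elements at once.
    """
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = acc + x * y  # one fused multiply-add per element
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```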

\n

I'd like to stress here: it's all about CPU only.

\n

Why isn't it used then?

\n

Because the default TensorFlow distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.

\n

What should you do?

\n

If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by

\n
# Just disables the warning, doesn't take advantage of AVX/FMA to run faster\nimport os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\n
\n

... or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. TensorFlow will work fine either way, but you won't see these annoying warnings.

\n
\n

If you don't have a GPU and want to utilize the CPU as much as possible, you should build TensorFlow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled if your CPU supports them. It's been discussed in this question and also this GitHub issue. TensorFlow uses Bazel as its build system, and building from source is not trivial, but it is certainly doable. After this, not only will the warning disappear; TensorFlow performance should also improve.

\n", + "system": "" + }, + { + "instruction": "TensorFlow not found using pip", + "input": "", + "output": "

I found this to finally work.

\n\n
python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl\n
\n\n

Edit 1: This was tested on Windows (8, 8.1, 10), Mac and Linux. Change python3 to python according to your configuration. Change py3 to py2 in the url if you are using Python 2.x.

\n\n

Edit 2: A list of different versions, in case someone needs it: https://storage.googleapis.com/tensorflow

\n\n

Edit 3: A list of urls for the available wheel packages is available here:\nhttps://www.tensorflow.org/install/pip#package-location

\n", + "system": "" + }, + { + "instruction": "How to save/restore a model after training?", + "input": "", + "output": "

In (and after) Tensorflow version 0.11:

\n

Save the model:

\n
import tensorflow as tf\n\n#Prepare to feed input, i.e. feed_dict and placeholders\nw1 = tf.placeholder("float", name="w1")\nw2 = tf.placeholder("float", name="w2")\nb1 = tf.Variable(2.0, name="bias")\nfeed_dict = {w1: 4, w2: 8}\n\n#Define a test operation that we will restore\nw3 = tf.add(w1, w2)\nw4 = tf.multiply(w3, b1, name="op_to_restore")\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n#Create a saver object which will save all the variables\nsaver = tf.train.Saver()\n\n#Run the operation by feeding input\nprint(sess.run(w4, feed_dict))\n#Prints 24, which is (w1+w2)*b1 = (4+8)*2\n\n#Now, save the graph\nsaver.save(sess, 'my_test_model', global_step=1000)\n
\n

Restore the model:

\n
import tensorflow as tf\n\nsess = tf.Session()\n#First let's load the meta graph and restore the weights\nsaver = tf.train.import_meta_graph('my_test_model-1000.meta')\nsaver.restore(sess, tf.train.latest_checkpoint('./'))\n\n\n# Access saved Variables directly\nprint(sess.run('bias:0'))\n# This will print 2, which is the value of bias that we saved\n\n\n# Now, let's access the placeholders and create a\n# feed_dict to feed new data\n\ngraph = tf.get_default_graph()\nw1 = graph.get_tensor_by_name("w1:0")\nw2 = graph.get_tensor_by_name("w2:0")\nfeed_dict = {w1: 13.0, w2: 17.0}\n\n#Now, access the op that you want to run\nop_to_restore = graph.get_tensor_by_name("op_to_restore:0")\n\nprint(sess.run(op_to_restore, feed_dict))\n#This will print 60, which is (13+17)*2\n
\n

This and some more advanced use-cases have been explained very well here.

\n

A quick complete tutorial to save and restore Tensorflow models

\n", + "system": "" + }, + { + "instruction": "What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?", + "input": "", + "output": "

Having softmax and logits together in the name simply means that the function operates on the unscaled output of earlier layers and that the relative scale of the values is linear. It means, in particular, that the sum of the inputs may not equal 1 and the values are not probabilities (you might have an input of 5). Internally, it first applies softmax to the unscaled output, and then computes the cross entropy of those values vs. what they "should" be as defined by the labels.

\n

tf.nn.softmax produces the result of applying the softmax function to an input tensor. The softmax "squishes" the inputs so that sum(input) = 1, and it does the mapping by interpreting the inputs as log-probabilities (logits) and then converting them back into raw probabilities between 0 and 1. The shape of output of a softmax is the same as the input:

\n
import numpy as np\n\na = tf.constant(np.array([[.1, .3, .5, .9]]))\nprint(sess.run(tf.nn.softmax(a)))\n# [[ 0.16838508  0.205666    0.25120102  0.37474789]]\n
\n
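For reference, the numbers above can be reproduced without TensorFlow; a minimal pure-Python softmax:

```python
import math

def softmax(xs):
    # exponentiate each input, then normalize so the outputs sum to 1
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([0.1, 0.3, 0.5, 0.9])
print(probs)       # ~[0.1684, 0.2057, 0.2512, 0.3747], matching the TF output above
print(sum(probs))  # ~1.0
```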

See this answer for more about why softmax is used extensively in DNNs.

\n

tf.nn.softmax_cross_entropy_with_logits combines the softmax step with the cross-entropy loss calculation, doing it all together in a more mathematically careful way. The result is similar to:

\n
sm = tf.nn.softmax(x)\nce = cross_entropy(sm)\n
\n

The cross entropy is a summary metric: it sums across the elements. The output of tf.nn.softmax_cross_entropy_with_logits on a shape [2,5] tensor has shape [2]: one loss value per row (the first dimension is treated as the batch).

\n

If you want to do optimization to minimize the cross entropy AND you're softmaxing after your last layer, you should use tf.nn.softmax_cross_entropy_with_logits instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there.
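The corner case is easy to reproduce. Here is a pure-Python sketch (illustrative only, not TF's actual kernel) of why the naive two-step computation fails for large logits, while the log-sum-exp rewrite does not:

```python
import math

def naive_log_softmax(xs, i):
    exps = [math.exp(x) for x in xs]      # math.exp overflows once x is near 710
    return math.log(exps[i] / sum(exps))

def stable_log_softmax(xs, i):
    m = max(xs)                           # shift by the max so exp() stays in range
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return xs[i] - lse

print(stable_log_softmax([1000.0, 0.0], 0))  # 0.0 -- no trouble
try:
    naive_log_softmax([1000.0, 0.0], 0)
except OverflowError as e:
    print("naive version fails:", e)          # math range error
```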

\n

Edited 2016-02-07:\nIf you have single-class labels, where an object can only belong to one class, you might now consider using tf.nn.sparse_softmax_cross_entropy_with_logits so that you don't have to convert your labels to a dense one-hot array. This function was added after release 0.6.0.
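The conversion the sparse variant spares you is straightforward but wasteful for many classes; a pure-Python sketch of building dense one-hot rows from integer labels:

```python
def to_one_hot(labels, num_classes):
    # each integer class index becomes a row with a single 1.0 in that column
    return [[1.0 if c == label else 0.0 for c in range(num_classes)]
            for label in labels]

dense = to_one_hot([2, 0], num_classes=3)
print(dense)  # [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]
# the sparse variant accepts the indices [2, 0] directly instead
```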

\n", + "system": "" + }, + { + "instruction": "What is the meaning of the word logits in TensorFlow?", + "input": "", + "output": "

Logits is an overloaded term which can mean many different things:

\n\n
\n\n

In Math, the logit is a function that maps probabilities ((0, 1)) to the whole real line ((-inf, inf))

\n\n

(figure: graph of the logit function, mapping (0, 1) to (-inf, inf))

\n\n

A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive logits to probabilities greater than 0.5.
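A small pure-Python sketch of the mapping in both directions:

```python
import math

def sigmoid(x):
    # R -> (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    # (0, 1) -> R, the inverse of the sigmoid
    return math.log(p / (1.0 - p))

print(logit(0.5))                        # 0.0
print(logit(0.9) > 0, logit(0.1) < 0)    # True True
print(sigmoid(logit(0.73)))              # round-trips back to ~0.73
```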

\n\n

In ML, it can be

\n\n
\n

the vector of raw (non-normalized) predictions that a classification\n model generates, which is ordinarily then passed to a normalization\n function. If the model is solving a multi-class classification\n problem, logits typically become an input to the softmax function. The\n softmax function then generates a vector of (normalized) probabilities\n with one value for each possible class.

\n
\n\n

Logits also sometimes refer to the element-wise inverse of the sigmoid function.

\n", + "system": "" + }, + { + "instruction": "How to tell if tensorflow is using gpu acceleration from inside python shell?", + "input": "", + "output": "

No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices.

\n

When using tensorflow2:

\n
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))\n
\n

For tensorflow1, to find out which device is used, you can enable log device placement like this:

\n
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n
\n

Check your console for this type of output.

\n

(screenshot: console output showing device placement logs)

\n", + "system": "" + }, + { + "instruction": "What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?", + "input": "", + "output": "

If you like ascii art:

\n\n\n\n

In this example:

\n\n\n\n
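The two paddings produce different output sizes; a pure-Python sketch of the standard formulas (as an illustration, take input width 13, filter width 6, stride 5):

```python
import math

def out_size_valid(n, f, s):
    # 'VALID': no padding; the window must fit entirely inside the input,
    # so rightmost columns that don't fit are dropped
    return math.ceil((n - f + 1) / s)

def out_size_same(n, f, s):
    # 'SAME': the input is zero-padded so the output size depends only on the stride
    return math.ceil(n / s)

print(out_size_valid(13, 6, 5))  # 2
print(out_size_same(13, 6, 5))   # 3
```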

Notes:

\n\n\n\n

Edit:

\n\n

About the name:

\n\n\n", + "system": "" + }, + { + "instruction": "How to find which version of TensorFlow is installed in my system?", + "input": "", + "output": "

This depends on how you installed TensorFlow. I am going to use the same headings used by TensorFlow's installation instructions to structure this answer.

\n\n
\n\n

Pip installation

\n\n

Run:

\n\n
python -c 'import tensorflow as tf; print(tf.__version__)'  # for Python 2\npython3 -c 'import tensorflow as tf; print(tf.__version__)'  # for Python 3\n
\n\n

Note that python is symlinked to /usr/bin/python3 in some Linux distributions, so use python instead of python3 in these cases.

\n\n

pip list | grep tensorflow for Python 2 or pip3 list | grep tensorflow for Python 3 will also show the version of Tensorflow installed.

\n\n
\n\n

Virtualenv installation

\n\n

Run:

\n\n
python -c 'import tensorflow as tf; print(tf.__version__)'  # for both Python 2 and Python 3\n
\n\n

pip list | grep tensorflow will also show the version of Tensorflow installed.

\n\n

For example, I have installed TensorFlow 0.9.0 in a virtualenv for Python 3. So, I get:

\n\n
$ python -c 'import tensorflow as tf; print(tf.__version__)'\n0.9.0\n\n$ pip list | grep tensorflow\ntensorflow (0.9.0)\n
\n", + "system": "" + }, + { + "instruction": "Could not find a version that satisfies the requirement tensorflow", + "input": "", + "output": "

The latest requirements for running TensorFlow are documented in the installation documentation.

\n\n

So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version.

\n", + "system": "" + }, + { + "instruction": "How to prevent tensorflow from allocating the totality of a GPU memory?", + "input": "", + "output": "

You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

\n\n
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)\n\nsess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))\n
\n\n

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.
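The arithmetic behind the ~4GB comment in the snippet above, as a quick sketch:

```python
total_gb = 12.0              # example card size from the snippet above
target_gb = 4.0

fraction = target_gb / total_gb
print(round(fraction, 3))                # 0.333, the value passed to tf.GPUOptions
print(round(total_gb * fraction, 6))     # ~4.0 GB reserved per GPU
```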

\n", + "system": "" + }, + { + "instruction": "Disable Tensorflow debugging information", + "input": "", + "output": "

You can disable all debugging logs using os.environ :

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \nimport tensorflow as tf\n
\n\n

Tested on tf 0.12 and 1.0

\n\n

In details,

\n\n
0 = all messages are logged (default behavior)\n1 = INFO messages are not printed\n2 = INFO and WARNING messages are not printed\n3 = INFO, WARNING, and ERROR messages are not printed\n
\n", + "system": "" + }, + { + "instruction": "Convert a tensor to numpy array in Tensorflow?", + "input": "", + "output": "

TensorFlow 2.x

\n

Eager Execution is enabled by default, so just call .numpy() on the Tensor object.

\n
import tensorflow as tf\n\na = tf.constant([[1, 2], [3, 4]])                 \nb = tf.add(a, 1)\n\na.numpy()\n# array([[1, 2],\n#        [3, 4]], dtype=int32)\n\nb.numpy()\n# array([[2, 3],\n#        [4, 5]], dtype=int32)\n\ntf.multiply(a, b).numpy()\n# array([[ 2,  6],\n#        [12, 20]], dtype=int32)\n
\n

See NumPy Compatibility for more. It is worth noting (from the docs),

\n
\n

NumPy array may share memory with the Tensor object. Any changes to one may be reflected in the other.

\n
\n

Bold emphasis mine. A copy may or may not be returned, and this is an implementation detail based on whether the data is in CPU or GPU (in the latter case, a copy has to be made from GPU to host memory).

\n

But why am I getting the AttributeError: 'Tensor' object has no attribute 'numpy'?
\nA lot of folks have commented on this issue; there are a couple of possible reasons:

\n\n
\n

If Eager Execution is disabled, you can build a graph and then run it through tf.compat.v1.Session:

\n
a = tf.constant([[1, 2], [3, 4]])                 \nb = tf.add(a, 1)\nout = tf.multiply(a, b)\n\nout.eval(session=tf.compat.v1.Session())    \n# array([[ 2,  6],\n#        [12, 20]], dtype=int32)
\n

See also TF 2.0 Symbols Map for a mapping of the old API to the new one.

\n", + "system": "" + }, + { + "instruction": "Which TensorFlow and CUDA version combinations are compatible?", + "input": "", + "output": "

TL;DR: See this table: https://www.tensorflow.org/install/source#gpu

\n

Generally:

\n

Check the CUDA version:

\n
cat /usr/local/cuda/version.txt\n
\n

and cuDNN version:

\n
grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h\n
\n

and install a combination as given below in the images or here.

\n

The following images and the link provide an overview of the officially supported/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows:

\n

Minor configurations:

\n

Since the given specifications below in some cases might be too broad, here is one specific configuration that works:

\n\n

The corresponding cudnn can be downloaded here.

\n

Tested build configurations

\n

Please refer to https://www.tensorflow.org/install/source#gpu for an up-to-date compatibility chart (for official TF wheels).

\n

(figures updated May 20, 2020)

\n

Linux GPU

\n

(table image: tested TensorFlow/CUDA/cuDNN build configurations for Linux GPU)

\n

Linux CPU

\n

(table image: tested TensorFlow build configurations for Linux CPU)

\n

macOS GPU

\n

(table image: tested TensorFlow/CUDA/cuDNN build configurations for macOS GPU)

\n

macOS CPU

\n

(table image: tested TensorFlow build configurations for macOS CPU)

\n

Windows GPU

\n

(table image: tested TensorFlow/CUDA/cuDNN build configurations for Windows GPU)

\n

Windows CPU

\n

(table image: tested TensorFlow build configurations for Windows CPU)

\n

Updated as of Dec 5 2020: For the updated information please refer Link for Linux and Link for Windows.

\n", + "system": "" + }, + { + "instruction": "How to compile Tensorflow with SSE4.2 and AVX instructions?", + "input": "", + "output": "

I just ran into this same problem. It seems like Yaroslav Bulatov's suggestion doesn't cover SSE4.2 support; adding --copt=-msse4.2 fixes that. In the end, I successfully built with

\n\n
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package\n
\n\n

without getting any warning or errors.

\n\n

Probably the best choice for any system is:

\n\n
bazel build -c opt --copt=-march=native --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package\n
\n\n

(Update: the build scripts may be eating -march=native, possibly because it contains an =.)

\n\n

-mfpmath=both only works with gcc, not clang. -mfpmath=sse is probably just as good, if not better, and is the default for x86-64. 32-bit builds default to -mfpmath=387, so changing that will help for 32-bit. (But if you want high-performance for number crunching, you should build 64-bit binaries.)

\n\n

I'm not sure whether TensorFlow defaults to -O2 or -O3. gcc -O3 enables full optimization, including auto-vectorization, but that can sometimes make code slower.

\n\n
\n\n

What this does: --copt for bazel build passes an option directly to gcc for compiling C and C++ files (but not linking, so you need a different option for cross-file link-time optimization).

\n\n

x86-64 gcc defaults to using only SSE2 or older SIMD instructions, so you can run the binaries on any x86-64 system. (See https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html). That's not what you want. You want to make a binary that takes advantage of all the instructions your CPU can run, because you're only running this binary on the system where you built it.

\n\n

-march=native enables all the options your CPU supports, so it makes -mavx512f -mavx2 -mavx -mfma -msse4.2 redundant. (Also, -mavx2 already enables -mavx and -msse4.2, so Yaroslav's command should have been fine). Also if you're using a CPU that doesn't support one of these options (like FMA), using -mfma would make a binary that faults with illegal instructions.

\n\n

TensorFlow's ./configure defaults to enabling -march=native, so using that should avoid needing to specify compiler options manually.

\n\n

-march=native enables -mtune=native, so it optimizes for your CPU for things like which sequence of AVX instructions is best for unaligned loads.

\n\n

This all applies to gcc, clang, or ICC. (For ICC, you can use -xHOST instead of -march=native.)

\n", + "system": "" + }, + { + "instruction": "What's the difference between tf.placeholder and tf.Variable?", + "input": "", + "output": "

In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.

\n\n
weights = tf.Variable(\n    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],\n                    stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))), name='weights')\n\nbiases = tf.Variable(tf.zeros([hidden1_units]), name='biases')\n
\n\n

tf.placeholder is used to feed actual training examples.

\n\n
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))\nlabels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))\n
\n\n

This is how you feed the training examples during the training:

\n\n
for step in xrange(FLAGS.max_steps):\n    feed_dict = {\n       images_placeholder: images_feed,\n       labels_placeholder: labels_feed,\n     }\n    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)\n
\n\n

Your tf.Variables will be trained (modified) as a result of this training.

\n\n

See more at https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html. (Examples are taken from the web page.)

\n", + "system": "" + }, + { + "instruction": "How to print the value of a Tensor object in TensorFlow?", + "input": "", + "output": "

The easiest[A] way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general[B], you cannot print the value of a tensor without running some code in a session.

\n\n

If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. For example, the following works in a Jupyter notebook:

\n\n
sess = tf.InteractiveSession()\na = tf.constant([1.0, 2.0])\nb = tf.constant([3.0, 4.0])\nproduct = tf.multiply(a, b)\nprint(product.eval())   # elementwise product: 3.0 and 8.0\nsess.close()\n
\n\n

This might seem silly for such a small expression, but one of the key ideas in Tensorflow 1.x is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs).

\n\n
\n\n

[A]: To print the value of a tensor without returning it to your Python program, you can use the tf.print() operator, as Andrzej suggests in another answer. According to the official documentation:

\n\n
\n

To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session's run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]), which is printed to standard output.

\n
\n\n

Also note that:

\n\n
\n

In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs.

\n
\n\n

[B]: You might be able to use the tf.get_static_value() function to get the constant value of the given tensor if its value is efficiently calculable.

\n", + "system": "" + }, + { + "instruction": "Could not install packages due to an EnvironmentError: [WinError 5] Access is denied:", + "input": "", + "output": "

Just run the command with the --user flag if you don't want to change permissions:

\n\n
pip3 install --upgrade tensorflow-gpu --user\n
\n", + "system": "" + }, + { + "instruction": "What's the difference of name scope and a variable scope in tensorflow?", + "input": "", + "output": "

Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.

\n\n

The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).

\n\n

It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.

\n\n

As a result, we end up having two different types of scopes:

\n\n- name scope, created using tf.name_scope, and\n- variable scope, created using tf.variable_scope.\n\n

Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.

\n\n

However, name scope is ignored by tf.get_variable. We can see that in the following example:

\n\n
with tf.name_scope(\"my_scope\"):\n    v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32)\n    v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32)\n    a = tf.add(v1, v2)\n\nprint(v1.name)  # var1:0\nprint(v2.name)  # my_scope/var2:0\nprint(a.name)   # my_scope/Add:0\n
\n\n

The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:

\n\n
with tf.variable_scope(\"my_scope\"):\n    v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32)\n    v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32)\n    a = tf.add(v1, v2)\n\nprint(v1.name)  # my_scope/var1:0\nprint(v2.name)  # my_scope/var2:0\nprint(a.name)   # my_scope/Add:0\n
\n\n

This allows us to easily share variables across different parts of the program, even within different name scopes:

\n\n
with tf.name_scope(\"foo\"):\n    with tf.variable_scope(\"var_scope\"):\n        v = tf.get_variable(\"var\", [1])\nwith tf.name_scope(\"bar\"):\n    with tf.variable_scope(\"var_scope\", reuse=True):\n        v1 = tf.get_variable(\"var\", [1])\nassert v1 == v\nprint(v.name)   # var_scope/var:0\nprint(v1.name)  # var_scope/var:0\n
\n\n
\n\n

UPDATE

\n\n

As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.

\n", + "system": "" + }, + { + "instruction": "Ordering of batch normalization and dropout?", + "input": "", + "output": "

In Ioffe and Szegedy (2015), the authors state that \"we would like to ensure that for any parameter values, the network always produces activations with the desired distribution\". So the batch normalization layer is actually inserted right after a conv layer/fully connected layer, but before feeding into ReLU (or any other kind of) activation. See this video at around 53 min for more details.

\n\n

As far as dropout goes, I believe dropout is applied after the activation layer. In the dropout paper, figure 3b, the dropout mask r(l) for hidden layer l is applied to y(l), where y(l) is the layer output after applying the activation function f.

\n\n

So in summary, the order of using batch normalization and dropout is:

\n\n

-> CONV/FC -> BatchNorm -> ReLu(or other activation) -> Dropout -> CONV/FC ->
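A pure-Python sketch of that ordering on a single activation vector (illustrative only: real layers learn scale/shift parameters and use running statistics at inference time):

```python
import math
import random

def batch_norm(xs, eps=1e-5):
    # normalize to zero mean / unit variance (learned gamma and beta omitted)
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def relu(xs):
    return [max(0.0, x) for x in xs]

def dropout(xs, p, rng):
    # inverted dropout: zero each unit with probability p, rescale the survivors
    return [0.0 if rng.random() < p else x / (1.0 - p) for x in xs]

x = [1.0, -2.0, 3.0, 0.5]                       # pretend CONV/FC output
out = dropout(relu(batch_norm(x)), p=0.5, rng=random.Random(0))
print(out)                                       # normalized, rectified, then masked
```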

\n", + "system": "" + }, + { + "instruction": "How to get current available GPUs in tensorflow?", + "input": "", + "output": "

There is an undocumented method called device_lib.list_local_devices() that enables you to list the devices available in the local process. (N.B. As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of DeviceAttributes protocol buffer objects. You can extract a list of string device names for the GPU devices as follows:

\n\n
from tensorflow.python.client import device_lib\n\ndef get_available_gpus():\n    local_device_protos = device_lib.list_local_devices()\n    return [x.name for x in local_device_protos if x.device_type == 'GPU']\n
\n\n

Note that (at least up to TensorFlow 1.4), calling device_lib.list_local_devices() will run some initialization code that, by default, will allocate all of the GPU memory on all of the devices (GitHub issue). To avoid this, first create a session with an explicitly small per_process_gpu_memory_fraction, or with allow_growth=True, to prevent all of the memory being allocated. See this question for more details.

\n", + "system": "" + }, + { + "instruction": "Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'", + "input": "", + "output": "

According to the TF 1.x to 2.x symbols map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session().

\n\n

https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0

\n\n

To get TF 1.x like behaviour in TF 2.0 one can run

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n\n

but then one cannot benefit from the many improvements made in TF 2.0. For more details, please refer to the migration guide:\nhttps://www.tensorflow.org/guide/migrate

\n", + "system": "" + }, + { + "instruction": "Keras, How to get the output of each layer?", + "input": "", + "output": "

You can easily get the outputs of any layer by using: model.layers[index].output

\n\n

For all layers use this:

\n\n
from keras import backend as K\nimport numpy as np\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = [func([test, 1.]) for func in functors]\nprint(layer_outs)\n
\n\n

Note: To simulate Dropout, use learning_phase as 1. in layer_outs; otherwise use 0.

\n\n

Edit: (based on comments)

\n\n

K.function creates Theano/TensorFlow tensor functions which are later used to get the output from the symbolic graph given the input.

\n\n

Now K.learning_phase() is required as an input because many Keras layers like Dropout/BatchNormalization depend on it to change behavior between training and test time.

\n\n

So if you remove the dropout layer in your code you can simply use:

\n\n
from keras import backend as K\nimport numpy as np\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctors = [K.function([inp], [out]) for out in outputs]    # evaluation functions\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = [func([test]) for func in functors]\nprint(layer_outs)\n
\n\n

Edit 2: More optimized

\n\n

I just realized that the previous answer is not that optimized, as for each function evaluation the data will be transferred CPU->GPU, and the tensor calculations for the lower layers are done over and over.

\n\n

Instead, this is a much better way, as you don't need multiple functions but a single function giving you the list of all outputs:

\n\n
from keras import backend as K\nimport numpy as np\n\ninp = model.input                                           # input placeholder\noutputs = [layer.output for layer in model.layers]          # all layer outputs\nfunctor = K.function([inp, K.learning_phase()], outputs)    # evaluation function\n\n# Testing\ntest = np.random.random(input_shape)[np.newaxis,...]\nlayer_outs = functor([test, 1.])\nprint(layer_outs)\n
\n", + "system": "" + }, + { + "instruction": "In TensorFlow, what is the difference between Session.run() and Tensor.eval()?", + "input": "", + "output": "

If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t).

\n\n

You can make a session the default as follows:

\n\n
t = tf.constant(42.0)\nsess = tf.Session()\nwith sess.as_default():   # or `with sess:` to close on exit\n    assert sess is tf.get_default_session()\n    assert t.eval() == sess.run(t)\n
\n\n

The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step:

\n\n
t = tf.constant(42.0)\nu = tf.constant(37.0)\ntu = tf.multiply(t, u)\nut = tf.multiply(u, t)\nwith sess.as_default():\n   tu.eval()  # runs one step\n   ut.eval()  # runs one step\n   sess.run([tu, ut])  # evaluates both tensors in a single step\n
\n\n

Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.

\n", + "system": "" + }, + { + "instruction": "How can I run Tensorboard on a remote server?", + "input": "", + "output": "

Here is what I do to avoid the issue of making the remote server accept your local external IP:

\n\n1. On your local machine, run: ssh -L 16006:127.0.0.1:6006 user@remote_server\n2. On the remote server, start TensorBoard as usual (tensorboard --logdir=YOUR_LOG_DIR)\n3. On your local machine, go to http://127.0.0.1:16006\n\n

What this does: everything on port 6006 of the server (127.0.0.1:6006) is forwarded to port 16006 on my machine.

\n\n
\n\n\n", + "system": "" + }, + { + "instruction": "How to build and use Google TensorFlow C++ api", + "input": "", + "output": "

To get started, you should download the source code from Github, by following the instructions here (you'll need Bazel and a recent version of GCC).

\n\n

The C++ API (and the backend of the system) is in tensorflow/core. Right now, only the C++ Session interface, and the C API are being supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a GraphDef protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not quite as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that builds a small graph in C++ here.

\n\n

The second part of the C++ API is the API for adding a new OpKernel, which is the class containing implementations of numerical kernels for CPU and GPU. There are numerous examples of how to build these in tensorflow/core/kernels, as well as a tutorial for adding a new op in C++.

\n", + "system": "" + }, + { + "instruction": "TypeError: Descriptors cannot not be created directly", + "input": "", + "output": "

Sometimes the protobuf package gets installed or upgraded without your involvement (e.g. as a dependency of another package). There are two solutions you can apply; try one of them and it should work.

\n

Solution 1:

\n

You can downgrade the protobuf plugin,

\n
pip install protobuf==3.20.*\n
\n

Or you can add it to the requirements.txt file as the last package, because being last means it will override the previously installed protobuf package.

\n
...\nprotobuf==3.20.*\n
\n

Solution 2:

\n

You can set the following environment variable.

\n
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python\n
\n
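If you prefer not to touch the shell environment, the same variable can also be set from Python, as long as it happens before protobuf is first imported anywhere in the process (a sketch; the variable name comes from the error message itself):

```python
import os

# Must run before google.protobuf is imported anywhere in this process.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

print(os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"])
```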

According to the error description, this might impact your program's performance.

\n
\n

but this will use pure-Python parsing and will be much slower

\n
\n
\n

References:

\n\n", + "system": "" + }, + { + "instruction": "What is the difference between steps and epochs in TensorFlow?", + "input": "", + "output": "

A training step is one gradient update. In one step batch_size examples are processed.

\n

An epoch consists of one full cycle through the training data. This is usually many steps. As an example, if you have 2,000 images and use a batch size of 10 an epoch consists of:

\n
2,000 images / (10 images / step) = 200 steps.\n
\n
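When the dataset size isn't an exact multiple of the batch size, the final partial batch still counts as a step, so the division above becomes a ceiling division; a small sketch:

```python
import math

def steps_per_epoch(num_examples, batch_size):
    # One step processes batch_size examples; a final partial batch is still a step.
    return math.ceil(num_examples / batch_size)

print(steps_per_epoch(2000, 10))   # 200, as in the example above
print(steps_per_epoch(2000, 32))   # 63 (62 full batches + 1 partial batch)
```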

If you choose your training image randomly (and independently) in each step, you normally do not call it epoch. [This is where my answer differs from the previous one. Also see my comment.]

\n", + "system": "" + }, + { + "instruction": "How to run Tensorflow on CPU", + "input": "", + "output": "

You can also set the environment variable to

\n\n
CUDA_VISIBLE_DEVICES=\"\"\n
\n\n

without having to modify the source code.

\n", + "system": "" + }, + { + "instruction": "Why is TensorFlow 2 much slower than TensorFlow 1?", + "input": "", + "output": "

UPDATE 8/30/2020: TF 2.3 has finally done it: all cases run as fast, or notably faster, than any previous version.

\n

Further, my previous update was unfair to TF; my GPU was to blame: it has been overheating lately. If you see a rising stem plot of iteration times, it's a reliable symptom. Lastly, see a dev's note on Eager vs Graph.

\n

This might be my last update on this answer. The true stats on your model's speed can only be found by you, on your device.

\n
\n

UPDATE 5/19/2020: TF 2.2, using same tests: only a minor improvement in Eager speed. Plots for Large-Large Numpy train_on_batch case below, x-axis is successive fit iterations; my GPU isn't near its full capacity, so doubt it's throttling, but iterations do get slower over time.

\n

\"enter

\n

Per above, Graph and Eager are 1.56x and 1.97x slower than their TF1 counterparts, respectively. Unsure I'll debug this further, as I'm considering switching to Pytorch per TensorFlow's poor support for custom / low-level functionality. I did, however, open an Issue to get devs' feedback.

\n
\n

UPDATE 2/18/2020: I've benched 2.1 and 2.1-nightly; the results are mixed. All but one config (model & data size) are as fast as or much faster than the best of TF2 & TF1. The one that's slower, and dramatically slower, is Large-Large - esp. in Graph execution (1.6x to 2.5x slower).

\n

Furthermore, there are extreme reproducibility differences between Graph and Eager for a large model I tested - one not explainable via randomness/compute-parallelism. I can't currently present reproducible code for these claims per time constraints, so instead I strongly recommend testing this for your own models.

\n

Haven't opened a Git issue on these yet, but I did comment on the original - no response yet. I'll update the answer(s) once progress is made.

\n
\n

VERDICT: it isn't, IF you know what you're doing. But if you don't, it could cost you, lots - by a few GPU upgrades on average, and by multiple GPUs worst-case.

\n
\n

THIS ANSWER: aims to provide a high-level description of the issue, as well as guidelines for how to decide on the training configuration specific to your needs. For a detailed, low-level description, which includes all benchmarking results + code used, see my other answer.

\n

I'll be updating my answer(s) w/ more info if I learn any - can bookmark / "star" this question for reference.

\n
\n

ISSUE SUMMARY: as confirmed by a TensorFlow developer, Q. Scott Zhu, TF2 focused development on Eager execution & tight integration w/ Keras, which involved sweeping changes in TF source - including at graph-level. Benefits: greatly expanded processing, distribution, debug, and deployment capabilities. The cost of some of these, however, is speed.

\n

The matter, however, is rather more complex. It isn't just TF1 vs. TF2 - factors yielding significant differences in train speed include:

\n
    \n
  1. TF2 vs. TF1
  2. \n
  3. Eager vs. Graph mode
  4. \n
  5. keras vs. tf.keras
  6. \n
  7. numpy vs. tf.data.Dataset vs. ...
  8. \n
  9. train_on_batch() vs. fit()
  10. \n
  11. GPU vs. CPU
  12. \n
  13. model(x) vs. model.predict(x) vs. ...
  14. \n
\n

Unfortunately, almost none of the above are independent of the other, and each can at least double execution time relative to another. Fortunately, you can determine what'll work best systematically, and with a few shortcuts - as I'll be showing.

\n
\n

WHAT SHOULD I DO? Currently, the only way is - experiment for your specific model, data, and hardware. No single configuration will always work best - but there are do's and don't's to simplify your search:

\n

>> DO:

\n\n

>> DON'T:

\n\n

Refer to code at bottom of my other answer for an example benchmarking setup. The list above is based mainly on the "BENCHMARKS" tables in the other answer.

\n
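If you benchmark your own models as recommended, a minimal timing harness of the kind used for such comparisons looks roughly like this (a generic sketch, not the answer's actual benchmark code; `train_step` stands in for whatever call you are measuring):

```python
import time

def bench(fn, n_iters=10):
    # Time n_iters calls of fn; return the per-iteration wall times in seconds.
    # A rising trend across iterations can hint at throttling (see the GPU note above).
    times = []
    for _ in range(n_iters):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return times

train_step = lambda: sum(i * i for i in range(10000))  # placeholder workload
times = bench(train_step)
print(len(times), min(times) >= 0.0)
```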
\n

LIMITATIONS of the above DO's & DON'T's:

\n\n
\n

Why did TF2 sacrifice the most practical quality, speed, for eager execution? It hasn't, clearly - graph is still available. But if the question is "why eager at all":

\n\n
\n

HOW TO ENABLE/DISABLE EAGER?

\n
tf.enable_eager_execution()  # TF1; must be done before any model/tensor creation\ntf.compat.v1.disable_eager_execution() # TF2; above holds\n
\n

Misleading in TF2; see here.

\n
\n

ADDITIONAL INFO:

\n\n
\n

REQUESTS TO TENSORFLOW DEVS:

\n
    \n
  1. Please fix train_on_batch(), and the performance aspect of calling fit() iteratively; custom train loops are important to many, especially to me.
  2. \n
  3. Add documentation / docstring mention of these performance differences for users' knowledge.
  4. \n
  5. Improve general execution speed to keep peeps from hopping to Pytorch.
  6. \n
\n
\n

ACKNOWLEDGEMENTS: Thanks to

\n\n
\n

UPDATES:

\n\n", + "system": "" + }, + { + "instruction": "What is the role of "Flatten" in Keras?", + "input": "", + "output": "

If you read the Keras documentation entry for Dense, you will see that this call:

\n
Dense(16, input_shape=(5,3))\n
\n

would result in a Dense network with 3 inputs and 16 outputs which would be applied independently for each of 5 steps. So, if D(x) transforms 3 dimensional vector to 16-d vector, what you'll get as output from your layer would be a sequence of vectors: [D(x[0,:]), D(x[1,:]),..., D(x[4,:])] with shape (5, 16). In order to have the behavior you specify you may first Flatten your input to a 15-d vector and then apply Dense:

\n
model = Sequential()\nmodel.add(Flatten(input_shape=(3, 2)))\nmodel.add(Dense(16))\nmodel.add(Activation('relu'))\nmodel.add(Dense(4))\nmodel.compile(loss='mean_squared_error', optimizer='SGD')\n
\n
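The shape bookkeeping can be checked without running Keras at all; a plain-Python sketch of what Flatten does to a single (3, 2) sample:

```python
def flatten(sample):
    # Collapse one sample's nested dimensions into a flat feature vector,
    # mimicking what Keras' Flatten layer does per sample.
    return [value for row in sample for value in row]

sample = [[1, 2], [3, 4], [5, 6]]   # one input of shape (3, 2)
flat = flatten(sample)
print(flat)        # [1, 2, 3, 4, 5, 6] -- shape (6,)
print(len(flat))   # 6, so the following Dense(16) sees a 6-d vector
```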

EDIT:\nAs some people struggled to understand - here you have an explaining image:

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "TensorFlow, why was python the chosen language?", + "input": "", + "output": "

The most important thing to realize about TensorFlow is that, for the most part, the core is not written in Python: It's written in a combination of highly-optimized C++ and CUDA (Nvidia's language for programming GPUs). Much of that happens, in turn, by using Eigen (a high-performance C++ and CUDA numerical library) and NVidia's cuDNN (a very optimized DNN library for NVidia GPUs, for functions such as convolutions).

\n\n

The model for TensorFlow is that the programmer uses \"some language\" (most likely Python!) to express the model. This model, written in the TensorFlow constructs such as:

\n\n
h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)\nh2 = ...\n
\n\n

is not actually executed when the Python is run. Instead, what's actually created is a dataflow graph that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code.

\n\n
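The "build now, execute later" model can be illustrated with a toy sketch in plain Python (an analogy only, not TensorFlow's implementation):

```python
# Each "op" is a node holding a function and its input nodes; building the
# graph does no arithmetic -- computation happens only when run() is called.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

def run(node):
    # Recursively evaluate input nodes, then apply this node's function.
    return node.fn(*(run(i) for i in node.inputs))

# Building this graph executes nothing yet:
h = add(mul(constant(3), constant(4)), constant(5))
print(run(h))  # 17 -- computed only now, when the node is "pulled on"
```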

Then the programmer \"drives\" the execution of this model by pulling on nodes -- for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++:

\n\n
sess.run(eval_results)\n
\n\n

This one Python (or C++) function call uses either an in-process call to C++ or an RPC for the distributed version to call into the C++ TensorFlow server to tell it to execute, and then copies back the results.

\n\n

So, with that said, let's re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models?

\n\n

The answer to that is simple: Python is probably the most comfortable language for a large range of data scientists and machine learning experts, and one that is easy to integrate with and use to control a C++ backend, while also being general, widely-used both inside and outside of Google, and open source. Given that with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that NumPy makes it easy to do pre-processing in Python -- also with high performance -- before feeding it in to TensorFlow for the truly CPU-heavy things.

\n\n

There's also a bunch of complexity in expressing the model that isn't used when executing it -- shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic gradient computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they'll probably move to the C++ backend to make adding other languages easier.

\n\n

(The hope, of course, is to support other languages in the future for creating and expressing models. It's already quite straightforward to run inference using several other languages -- C++ works now, someone from Facebook contributed Go bindings that we're reviewing now, etc.)

\n", + "system": "" + }, + { + "instruction": "What does tf.nn.embedding_lookup function do?", + "input": "", + "output": "

Yes, this function is hard to understand, until you get the point.

\n\n

In its simplest form, it is similar to tf.gather. It returns the elements of params according to the indexes specified by ids.

\n\n

For example (assuming you are inside tf.InteractiveSession())

\n\n
params = tf.constant([10,20,30,40])\nids = tf.constant([0,1,2,3])\nprint(tf.nn.embedding_lookup(params,ids).eval())\n
\n\n

would return [10 20 30 40], because the first element (index 0) of params is 10, the second element of params (index 1) is 20, etc.

\n\n

Similarly,

\n\n
params = tf.constant([10,20,30,40])\nids = tf.constant([1,1,3])\nprint(tf.nn.embedding_lookup(params,ids).eval())\n
\n\n

would return [20 20 40].

\n\n

But embedding_lookup is more than that. The params argument can be a list of tensors, rather than a single tensor.

\n\n
params1 = tf.constant([1,2])\nparams2 = tf.constant([10,20])\nids = tf.constant([2,0,2,1,2,3])\nresult = tf.nn.embedding_lookup([params1, params2], ids)\n
\n\n

In such a case, the indexes, specified in ids, correspond to elements of tensors according to a partition strategy, where the default partition strategy is 'mod'.

\n\n

In the 'mod' strategy, index 0 corresponds to the first element of the first tensor in the list. Index 1 corresponds to the first element of the second tensor. Index 2 corresponds to the first element of the third tensor, and so on. Simply put, index i corresponds to the first element of the (i+1)th tensor, for all indexes 0..(n-1), assuming params is a list of n tensors.

\n\n

Now, index n cannot correspond to tensor n+1, because the list params contains only n tensors. So index n corresponds to the second element of the first tensor. Similarly, index n+1 corresponds to the second element of the second tensor, etc.

\n\n
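The 'mod' mapping just described can be written out in plain Python (an illustration of the indexing rule only, not TF's implementation):

```python
def mod_lookup(params, ids):
    # params: a list of n 1-D "tensors" (plain lists here).
    # Index i lives in tensor (i mod n), at position (i div n).
    n = len(params)
    return [params[i % n][i // n] for i in ids]

params1 = [1, 2]
params2 = [10, 20]
print(mod_lookup([params1, params2], [2, 0, 2, 1, 2, 3]))  # [2, 1, 2, 10, 2, 20]
```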

So, in the code

\n\n
params1 = tf.constant([1,2])\nparams2 = tf.constant([10,20])\nids = tf.constant([2,0,2,1,2,3])\nresult = tf.nn.embedding_lookup([params1, params2], ids)\n
\n\n

index 0 corresponds to the first element of the first tensor: 1

\n\n

index 1 corresponds to the first element of the second tensor: 10

\n\n

index 2 corresponds to the second element of the first tensor: 2

\n\n

index 3 corresponds to the second element of the second tensor: 20

\n\n

Thus, the result would be:

\n\n
[ 2  1  2 10  2 20]\n
\n", + "system": "" + }, + { + "instruction": "Understanding TensorBoard (weight) histograms", + "input": "", + "output": "

It appears that the network hasn't learned anything in layers one to three. The last layer does change, which means that either something may be wrong with the gradients (if you're tampering with them manually), you're constraining learning to the last layer by optimizing only its weights, or the last layer really 'eats up' all the error. It could also be that only biases are learned. The network appears to learn something, but it might not be using its full potential. More context would be needed here, but playing around with the learning rate (e.g. using a smaller one) might be worth a shot.

\n\n

In general, histograms display the number of occurrences of a value relative to the other values. Simply speaking, if the possible values are in a range of 0..9 and you see a spike of amount 10 on the value 0, this means that 10 inputs assume the value 0; in contrast, if the histogram shows a plateau of 1 for all values of 0..9, it means that for 10 inputs, each possible value 0..9 occurs exactly once.\nYou can also use histograms to visualize probability distributions when you normalize all histogram values by their total sum; if you do that, you'll intuitively obtain the likelihood with which a certain value (on the x axis) will appear (compared to other inputs).

\n\n

Now for layer1/weights, the plateau means that:

\n\n\n\n

Said differently, almost the same number of weights have the values -0.15, 0.0, 0.15 and everything in between. There are some weights having slightly smaller or higher values.\nSo in short, this simply looks like the weights have been initialized using a uniform distribution with zero mean and value range -0.15..0.15 ... give or take. If you do indeed use uniform initialization, then this is typical when the network has not been trained yet.

\n\n

In comparison, layer1/activations forms a bell curve (gaussian)-like shape: The values are centered around a specific value, in this case 0, but they may also be greater or smaller than that (equally likely so, since it's symmetric). Most values appear close around the mean of 0, but values do range from -0.8 to 0.8.\nI assume that the layer1/activations is taken as the distribution over all layer outputs in a batch. You can see that the values do change over time.

\n\n

The layer 4 histogram doesn't tell me anything specific. From the shape, it's just showing that some weight values around -0.1, 0.05 and 0.25 tend to occur with a higher probability; a reason could be that different parts of each neuron there actually pick up the same information and are basically redundant. This can mean that you could actually use a smaller network or that your network has the potential to learn more distinguishing features in order to prevent overfitting. These are just assumptions though.

\n\n

Also, as already stated in the comments below, do add bias units. By leaving them out, you are forcefully constraining your network to a possibly invalid solution.

\n", + "system": "" + }, + { + "instruction": "Can I run Keras model on gpu?", + "input": "", + "output": "

Yes, you can run Keras models on GPU. A few things you will have to check first.

\n
    \n
  1. your system has a GPU (Nvidia; AMD doesn't work yet)
  2. \n
  3. You have installed the GPU version of tensorflow
  4. \n
  5. You have installed CUDA installation instructions
  6. \n
  7. Verify that tensorflow is running with GPU check if GPU is working
  8. \n
\n

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

\n

for TF > v2.0

\n

sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))

\n

(Thanks @nbro and @Ferro for pointing this out in the comments)

\n

OR

\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n

output will be something like this:

\n
[\n  name: "/cpu:0"\n  device_type: "CPU",\n  name: "/gpu:0"\n  device_type: "GPU"\n]\n
\n

Once all this is done your model will run on GPU:

\n

To check if Keras (>= 2.1.1) is using the GPU:

\n
from keras import backend as K\nK.tensorflow_backend._get_available_gpus()\n
\n

All the best.

\n", + "system": "" + }, + { + "instruction": "How does tf.app.run() work?", + "input": "", + "output": "
if __name__ == \"__main__\":\n
\n\n

means the current file is being executed in a shell instead of being imported as a module.

\n\n
tf.app.run()\n
\n\n

As you can see through the file app.py

\n\n
def run(main=None, argv=None):\n  \"\"\"Runs the program with an optional 'main' function and 'argv' list.\"\"\"\n  f = flags.FLAGS\n\n  # Extract the args from the optional `argv` list.\n  args = argv[1:] if argv else None\n\n  # Parse the known flags from that list, or from the command\n  # line otherwise.\n  # pylint: disable=protected-access\n  flags_passthrough = f._parse_flags(args=args)\n  # pylint: enable=protected-access\n\n  main = main or sys.modules['__main__'].main\n\n  # Call the main function, passing through any arguments\n  # to the final program.\n  sys.exit(main(sys.argv[:1] + flags_passthrough))\n
\n\n

Let's break line by line:

\n\n
flags_passthrough = f._parse_flags(args=args)\n
\n\n

This ensures that the argument you pass through the command line is valid, e.g. \npython my_model.py --data_dir='...' --max_iteration=10000. Actually, this feature is implemented based on the python standard argparse module.

\n\n
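That parsing step behaves much like the standard argparse module just mentioned; a rough sketch of the equivalent (the flag names are illustrative, taken from the example command line above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", type=str, default="")
parser.add_argument("--max_iteration", type=int, default=0)

# parse_known_args, like TF's flag parsing, tolerates arguments it doesn't know.
flags, passthrough = parser.parse_known_args(
    ["--data_dir=/tmp/data", "--max_iteration=10000", "extra_arg"])
print(flags.data_dir, flags.max_iteration, passthrough)
# /tmp/data 10000 ['extra_arg']
```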
main = main or sys.modules['__main__'].main\n
\n\n

The first main on the right side of = is the first argument of the current function run(main=None, argv=None)\n. While sys.modules['__main__'] means the currently running file (e.g. my_model.py).

\n\n

So there are two cases:

\n\n
    \n
  1. You don't have a main function in my_model.py Then you have to\ncall tf.app.run(my_main_running_function)

  2. \n
  3. you have a main function in my_model.py. (This is mostly the case.)

  4. \n
\n\n

Last line:

\n\n
sys.exit(main(sys.argv[:1] + flags_passthrough))\n
\n\n

ensures your main(argv) or my_main_running_function(argv) function is called with parsed arguments properly.

\n", + "system": "" + }, + { + "instruction": "Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)", + "input": "", + "output": "

TL;DR Several possible errors, most fixed with x = np.asarray(x).astype('float32').

\n

Others may be faulty data preprocessing; ensure everything is properly formatted (categoricals, nans, strings, etc). Below shows what the model expects:

\n
[print(i.shape, i.dtype) for i in model.inputs]\n[print(o.shape, o.dtype) for o in model.outputs]\n[print(l.name, l.input_shape, l.dtype) for l in model.layers]\n
\n
\n

The problem's rooted in using lists as inputs, as opposed to Numpy arrays; Keras/TF doesn't support the former. A simple conversion is: x_array = np.asarray(x_list).

\n
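A minimal demonstration of that conversion (assuming NumPy is available):

```python
import numpy as np

x_list = [[1, 2.5], [3, 4.0]]            # nested Python list, as in the error case
x_array = np.asarray(x_list).astype('float32')

print(x_array.shape, x_array.dtype)       # (2, 2) float32
```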

The next step's to ensure data is fed in expected format; for LSTM, that'd be a 3D tensor with dimensions (batch_size, timesteps, features) - or equivalently, (num_samples, timesteps, channels). Lastly, as a debug pro-tip, print ALL the shapes for your data. Code accomplishing all of the above, below:

\n
Sequences = np.asarray(Sequences)\nTargets   = np.asarray(Targets)\nshow_shapes()\n\nSequences = np.expand_dims(Sequences, -1)\nTargets   = np.expand_dims(Targets, -1)\nshow_shapes()\n
\n
# OUTPUTS\nExpected: (num_samples, timesteps, channels)\nSequences: (200, 1000)\nTargets:   (200,)\n\nExpected: (num_samples, timesteps, channels)\nSequences: (200, 1000, 1)\nTargets:   (200, 1)\n
\n
\n

As a bonus tip, I notice you're running via main(), so your IDE probably lacks a Jupyter-like cell-based execution; I strongly recommend the Spyder IDE. It's as simple as adding # In[], and pressing Ctrl + Enter below:

\n\n
\n

Function used:

\n
def show_shapes(): # can make yours to take inputs; this'll use local variable values\n    print("Expected: (num_samples, timesteps, channels)")\n    print("Sequences: {}".format(Sequences.shape))\n    print("Targets:   {}".format(Targets.shape))   \n
\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation", + "input": "", + "output": "

Tensorflow 2.1+

\n

What's going on?

\n

With the new Tensorflow 2.1 release, the default tensorflow pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (the W at the beginning stands for warning, whereas errors have an E, or F for fatal errors) and falls back to CPU-only mode. In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level than the default, you might not see info messages). The full log is (emphasis mine):

\n
\n

2020-01-20 12:27:44.554767: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found

\n

2020-01-20 12:27:44.554964: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

\n
\n

Should I worry? How do I fix it?

\n

If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA 10.1, not 10.2 or 10.0).

\n

If you just want to get rid of the warning, you can adapt TF's logging level to suppress warnings, but that might be overkill, as it will silence all warnings.

\n
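If you do decide to silence it, the usual knob is TF's log-level environment variable, set before the import (a sketch; as noted, this suppresses all warnings, not just this one):

```python
import os

# 0 = all logs, 1 = filter INFO, 2 = filter INFO+WARNING, 3 = errors only.
# Must be set before `import tensorflow`.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

print(os.environ["TF_CPP_MIN_LOG_LEVEL"])  # 2
```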
\n

Tensorflow 1.X or 2.0:

\n

Your CUDA setup is broken, ensure you have the correct version installed.

\n", + "system": "" + }, + { + "instruction": "Deep-Learning Nan loss reasons", + "input": "", + "output": "

There are lots of things I have seen make a model diverge.

\n
    \n
  1. Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity.

    \n
  2. \n
  3. I am not too familiar with the DNNClassifier but I am guessing it uses the categorical cross entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the tensorflow op for it. Probably not the issue.

    \n
  4. \n
  5. Other numerical stability issues can exist, such as division by zero, where adding the epsilon can help. Another less obvious one is the square root, whose derivative can diverge if not properly simplified when dealing with finite precision numbers. Yet again I doubt this is the issue in the case of the DNNClassifier.

    \n
  6. \n
  7. You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing the nan. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255].

    \n
  8. \n
  9. The labels must be in the domain of the loss function, so if using a logarithmic-based loss function all labels must be non-negative (as noted by evan pu and the comments below).

    \n
  10. \n
\n", + "system": "" + }, + { + "instruction": "What does this tensorflow message mean? Any side effect? Was the installation successful?", + "input": "", + "output": "

An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out.

\n

Note: AVX stands for Advanced Vector Extensions.

\n", + "system": "" + }, + { + "instruction": "What does tf.nn.conv2d do in tensorflow?", + "input": "", + "output": "

Ok I think this is about the simplest way to explain it all.

\n\n
\n\n

Your example is 1 image, size 2x2, with 1 channel. You have 1 filter, with size 1x1, and 1 channel (size is height x width x channels x number of filters).

\n\n

For this simple case the resulting 2x2, 1 channel image (size 1x2x2x1, number of images x height x width x channels) is the result of multiplying the filter value by each pixel of the image.

\n\n
\n\n

Now let's try more channels:

\n\n
input = tf.Variable(tf.random_normal([1,3,3,5]))\nfilter = tf.Variable(tf.random_normal([1,1,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

Here the 3x3 image and the 1x1 filter each have 5 channels. The resulting image will be 3x3 with 1 channel (size 1x3x3x1), where the value of each pixel is the dot product across channels of the filter with the corresponding pixel in the input image.

\n\n
\n\n

Now with a 3x3 filter

\n\n
input = tf.Variable(tf.random_normal([1,3,3,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

Here we get a 1x1 image, with 1 channel (size 1x1x1x1). The value is the sum of the 9, 5-element dot products. But you could just call this a 45-element dot product.

\n\n
\n\n

Now with a bigger image

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')\n
\n\n

The output is a 3x3 1-channel image (size 1x3x3x1). \nEach of these values is a sum of 9, 5-element dot products.

\n\n

Each output is made by centering the filter on one of the 9 center pixels of the input image, so that none of the filter sticks out. The xs below represent the filter centers for each output pixel.

\n\n
.....\n.xxx.\n.xxx.\n.xxx.\n.....\n
\n\n
\n\n

Now with \"SAME\" padding:

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,1]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n
\n\n

This gives a 5x5 output image (size 1x5x5x1). This is done by centering the filter at each position on the image.

\n\n

Any of the 5-element dot products where the filter sticks out past the edge of the image get a value of zero.

\n\n

So the corners are only sums of 4, 5-element dot products.

\n\n
\n\n

Now with multiple filters.

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')\n
\n\n

This still gives a 5x5 output image, but with 7 channels (size 1x5x5x7). Where each channel is produced by one of the filters in the set.

\n\n
\n\n

Now with strides 2,2:

\n\n
input = tf.Variable(tf.random_normal([1,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')\n
\n\n

Now the result still has 7 channels, but is only 3x3 (size 1x3x3x7).

\n\n

This is because instead of centering the filters at every point on the image, the filters are centered at every other point on the image, taking steps (strides) of width 2. The x's below represent the filter center for each output pixel, on the input image.

\n\n
x.x.x\n.....\nx.x.x\n.....\nx.x.x\n
\n\n
\n\n
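The output spatial sizes in all of the examples above follow the standard VALID/SAME formulas, which can be sketched per dimension as:

```python
import math

def conv_out_size(in_size, filter_size, stride, padding):
    # Spatial output size of a convolution along one dimension.
    if padding == "VALID":   # the filter must fit entirely inside the input
        return math.ceil((in_size - filter_size + 1) / stride)
    elif padding == "SAME":  # zero-pad so the filter can center on every stride-th pixel
        return math.ceil(in_size / stride)
    raise ValueError(padding)

print(conv_out_size(5, 3, 1, "VALID"))  # 3, as in the 5x5 VALID example
print(conv_out_size(5, 3, 1, "SAME"))   # 5, as in the 5x5 SAME example
print(conv_out_size(5, 3, 2, "SAME"))   # 3, as in the strides [1, 2, 2, 1] example
```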

And of course the first dimension of the input is the number of images so you can apply it over a batch of 10 images, for example:

\n\n
input = tf.Variable(tf.random_normal([10,5,5,5]))\nfilter = tf.Variable(tf.random_normal([3,3,5,7]))\n\nop = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')\n
\n\n

This performs the same operation, for each image independently, giving a stack of 10 images as the result (size 10x3x3x7)

\n", + "system": "" + }, + { + "instruction": "Loading a trained Keras model and continue training", + "input": "", + "output": "

Actually - model.save saves all the information needed for restarting training in your case. The only thing which could be spoiled by reloading the model is your optimizer state. To check that - try to save and reload the model and train it on training data.

\n", + "system": "" + }, + { + "instruction": "How are the new tf.contrib.summary summaries in TensorFlow evaluated?", + "input": "", + "output": "

answer moved from edit to self-answer as requested

\n
\n

I just played around with this a little bit, and it seems that if one combines tf.control_dependencies with tf.record_summaries_every_n_global_steps it behaves as expected and the summary only gets recorded every nth step. But if they are run together within a session, such as session.run([train, summs]), the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2 and with the second approach the summary was often written at odd steps, while with the control dependency approach it was always on an even step.

\n", + "system": "" + }, + { + "instruction": "In Tensorflow, get the names of all the Tensors in a graph", + "input": "", + "output": "

You can do

\n\n
[n.name for n in tf.get_default_graph().as_graph_def().node]\n
\n\n

Also, if you are prototyping in an IPython notebook, you can show the graph directly in notebook, see show_graph function in Alexander's Deep Dream notebook

\n", + "system": "" + }, + { + "instruction": "Should we do learning rate decay for adam optimizer", + "input": "", + "output": "

It depends. ADAM updates any parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated.

\n

But the single learning rate for each parameter is computed using lambda (the initial learning rate) as an upper limit. This means that every single learning rate can vary from 0 (no update) to lambda (maximum update).

\n
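If you want to shrink that upper limit over time, one common schedule is exponential decay (a sketch of the usual formula; the parameter names are illustrative):

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    # Smoothly shrinks the upper-limit learning rate as training progresses.
    return initial_lr * decay_rate ** (step / decay_steps)

print(exponential_decay(0.001, 0.5, 1000, 0))     # 0.001 at the start
print(exponential_decay(0.001, 0.5, 1000, 1000))  # halved after decay_steps steps
```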

It's true that the learning rates adapt themselves during training steps, but if you want to be sure that every update step doesn't exceed lambda, you can then lower lambda using exponential decay or whatever.\nIt can help to reduce loss during the last steps of training, when the computed loss with the previously associated lambda parameter has stopped decreasing.

\n", + "system": "" + }, + { + "instruction": "TensorFlow, why there are 3 files after saving the model?", + "input": "", + "output": "

Try this:

\n\n
with tf.Session() as sess:\n    saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')\n    saver.restore(sess, \"/tmp/model.ckpt\")\n
\n\n

The TensorFlow save method saves three kinds of files because it stores the graph structure separately from the variable values. The .meta file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise it doesn't know what variables the saved checkpoint values correspond to).

\n\n

Alternatively, you could do this:

\n\n
# Recreate the EXACT SAME variables\nv1 = tf.Variable(..., name=\"v1\")\nv2 = tf.Variable(..., name=\"v2\")\n\n...\n\n# Now load the checkpoint variable values\nwith tf.Session() as sess:\n    saver = tf.train.Saver()\n    saver.restore(sess, \"/tmp/model.ckpt\")\n
\n\n

Even though there is no file named model.ckpt, you still refer to the saved checkpoint by that name when restoring it. From the saver.py source code:

\n\n
\n

Users only need to interact with the user-specified prefix... instead\n of any physical pathname.

\n
\n", + "system": "" + }, + { + "instruction": "Difference between Variable and get_variable in TensorFlow", + "input": "", + "output": "

I'd recommend to always use tf.get_variable(...) -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it.

\n\n

Pure tf.Variable is lower-level; at some point tf.get_variable() did not exist so some code still uses the low-level way.

\n", + "system": "" + }, + { + "instruction": "TensorFlow, "'module' object has no attribute 'placeholder'"", + "input": "", + "output": "

If you have this error after an upgrade to TensorFlow 2.0, you can still use 1.X API by replacing:

\n\n
import tensorflow as tf\n
\n\n

by

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n", + "system": "" + }, + { + "instruction": "Meaning of buffer_size in Dataset.map , Dataset.prefetch and Dataset.shuffle", + "input": "", + "output": "

TL;DR Despite their similar names, these arguments have quite different meanings. The buffer_size in Dataset.shuffle() can affect the randomness of your dataset, and hence the order in which elements are produced. The buffer_size in Dataset.prefetch() only affects the time it takes to produce the next element.

\n\n
\n\n

The buffer_size argument in tf.data.Dataset.prefetch() and the output_buffer_size argument in tf.contrib.data.Dataset.map() provide a way to tune the performance of your input pipeline: both arguments tell TensorFlow to create a buffer of at most buffer_size elements, and a background thread to fill that buffer in the background. \n(Note that we removed the output_buffer_size argument from Dataset.map() when it moved from tf.contrib.data to tf.data. New code should use Dataset.prefetch() after map() to get the same behavior.)

\n\n

Adding a prefetch buffer can improve performance by overlapping the preprocessing of data with downstream computation. Typically it is most useful to add a small prefetch buffer (with perhaps just a single element) at the very end of the pipeline, but more complex pipelines can benefit from additional prefetching, especially when the time to produce a single element can vary.

\n\n

By contrast, the buffer_size argument to tf.data.Dataset.shuffle() affects the randomness of the transformation. We designed the Dataset.shuffle() transformation (like the tf.train.shuffle_batch() function that it replaces) to handle datasets that are too large to fit in memory. Instead of shuffling the entire dataset, it maintains a buffer of buffer_size elements, and randomly selects the next element from that buffer (replacing it with the next input element, if one is available). Changing the value of buffer_size affects how uniform the shuffling is: if buffer_size is greater than the number of elements in the dataset, you get a uniform shuffle; if it is 1 then you get no shuffling at all. For very large datasets, a typical \"good enough\" approach is to randomly shard the data into multiple files once before training, then shuffle the filenames uniformly, and then use a smaller shuffle buffer. However, the appropriate choice will depend on the exact nature of your training job.
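To see why buffer_size = 1 gives no shuffling, here is a small pure-Python simulation of the buffer mechanism described above (an illustrative sketch, not tf.data's actual implementation):

```python
import random

# Sketch of the shuffle buffer: with buffer_size=1 the "random"
# pick is always the only buffered element, so the order is unchanged.
def buffered_shuffle(source, buffer_size, rng):
    it, out = iter(source), []
    buf = [x for _, x in zip(range(buffer_size), it)]  # fill the buffer
    while buf:
        out.append(buf.pop(rng.randrange(len(buf))))   # random pick
        nxt = next(it, None)
        if nxt is not None:
            buf.append(nxt)                            # refill from source
    return out

data = list(range(10))
# buffer_size=1: identity order, i.e. no shuffling at all.
assert buffered_shuffle(data, 1, random.Random(0)) == data
```

With buffer_size equal to the dataset size, every element competes in every pick and the shuffle becomes uniform.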

\n\n
\n", + "system": "" + }, + { + "instruction": "Keras split train test set when using ImageDataGenerator", + "input": "", + "output": "

Keras has now added Train / validation split from a single directory using ImageDataGenerator:

\n\n
train_datagen = ImageDataGenerator(rescale=1./255,\n    shear_range=0.2,\n    zoom_range=0.2,\n    horizontal_flip=True,\n    validation_split=0.2) # set validation split\n\ntrain_generator = train_datagen.flow_from_directory(\n    train_data_dir,\n    target_size=(img_height, img_width),\n    batch_size=batch_size,\n    class_mode='binary',\n    subset='training') # set as training data\n\nvalidation_generator = train_datagen.flow_from_directory(\n    train_data_dir, # same directory as training data\n    target_size=(img_height, img_width),\n    batch_size=batch_size,\n    class_mode='binary',\n    subset='validation') # set as validation data\n\nmodel.fit_generator(\n    train_generator,\n    steps_per_epoch = train_generator.samples // batch_size,\n    validation_data = validation_generator, \n    validation_steps = validation_generator.samples // batch_size,\n    epochs = nb_epochs)\n
\n\n

https://keras.io/preprocessing/image/
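For intuition, here is a rough pure-Python sketch of how a validation_split fraction partitions a file list; the helper name and the exact split point (tail of the list) are assumptions for illustration, not Keras's precise per-class ordering:

```python
# Hypothetical sketch: a validation_split fraction reserves the
# last portion of the sample list for validation.
def split_files(filenames, validation_split=0.2):
    split_at = int(len(filenames) * (1 - validation_split))
    return filenames[:split_at], filenames[split_at:]

files = ['img%02d.jpg' % i for i in range(10)]
train, val = split_files(files)
print(len(train), len(val))  # 8 2
```

The two flow_from_directory calls with subset='training' and subset='validation' then draw from the corresponding partition of the same directory.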

\n", + "system": "" + }, + { + "instruction": "What's the purpose of tf.app.flags in TensorFlow?", + "input": "", + "output": "

The tf.app.flags module is presently a thin wrapper around python-gflags, so the documentation for that project is the best resource for how to use it.

\n\n

Note that this module is currently packaged as a convenience for writing demo apps, and is not technically part of the public API, so it may change in future.

\n\n

We recommend that you implement your own flag parsing using argparse or whatever library you prefer.

\n\n

EDIT: The tf.app.flags module is not in fact implemented using python-gflags, but it uses a similar API; it is implemented using argparse, which implements a subset of the functionality in python-gflags.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Strides Argument", + "input": "", + "output": "

The pooling and convolutional ops slide a \"window\" across the input tensor. Using tf.nn.conv2d as an example: If the input tensor has 4 dimensions: [batch, height, width, channels], then the convolution operates on a 2D window on the height, width dimensions.

\n\n

strides determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1.

\n\n

Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this:

\n\n
00 01 02 03 04 ...\n10 11 12 13 14 ...\n20 21 22 23 24 ...\n30 31 32 33 34 ...\n...\n
\n\n

Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8.

\n\n

The input to the convolution has shape=[1, 32, 32, 1].

\n\n

If you specify strides=[1,1,1,1] with padding=SAME, then the output of the filter will be [1, 32, 32, 8].

\n\n

The filter will first create an output for:

\n\n
F(00 01\n  10 11)\n
\n\n

And then for:

\n\n
F(01 02\n  11 12)\n
\n\n

and so on. Then it will move to the second row, calculating:

\n\n
F(10, 11\n  20, 21)\n
\n\n

then

\n\n
F(11, 12\n  21, 22)\n
\n\n

If you specify a stride of [1, 2, 2, 1] it won't do overlapping windows. It will compute:

\n\n
F(00, 01\n  10, 11)\n
\n\n

and then

\n\n
F(02, 03\n  12, 13)\n
\n\n

The stride operates similarly for the pooling operators.
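As a quick sanity check on the window arithmetic above, here is a pure-Python sketch computing the spatial output size; the formulas are the commonly stated TensorFlow-style conventions for SAME and VALID padding, used here as assumptions for illustration:

```python
import math

# Sketch: spatial output size of a conv/pool window
# (assumed TensorFlow-style formulas: SAME pads, VALID does not).
def out_size(in_size, window, stride, padding):
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    return math.ceil((in_size - window + 1) / stride)  # VALID

# 32x32 input, 2x2 window:
print(out_size(32, 2, 1, 'SAME'))  # 32 (overlapping windows)
print(out_size(32, 2, 2, 'SAME'))  # 16 (non-overlapping windows)
```

This matches the example: strides=[1,1,1,1] keeps the 32x32 extent, while [1,2,2,1] halves it.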

\n\n

Question 2: Why strides [1, x, y, 1] for convnets

\n\n

The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :)

\n\n

The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason.

\n\n

The conv2d operator is more general, so you could create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially.

\n\n

Why reshape to -1?  -1 is a placeholder that says \"adjust as necessary to match the size needed for the full tensor.\" It's a way of making the code be independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.
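Concretely, the -1 dimension is just the total element count divided by the product of the known dimensions; here's a small pure-Python sketch of that inference (hypothetical helper, not TensorFlow's code):

```python
# Sketch of how -1 is resolved in a reshape: the unknown dimension
# equals the total element count divided by the product of the
# known dimensions.
def infer_shape(total_elements, shape):
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    return [total_elements // known if d == -1 else d for d in shape]

# A batch of 5 flattened 32x32 greyscale images reshaped to [-1, 32, 32, 1]:
print(infer_shape(5 * 32 * 32, [-1, 32, 32, 1]))  # [5, 32, 32, 1]
```

Changing the batch size to 7 would make the same reshape infer [7, 32, 32, 1] with no code changes.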

\n", + "system": "" + }, + { + "instruction": "What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits?", + "input": "", + "output": "

Having two different functions is a convenience, as they produce the same result.

\n\n

The difference is simple:

\n\n\n\n

Labels used in softmax_cross_entropy_with_logits are the one hot version of labels used in sparse_softmax_cross_entropy_with_logits.

\n\n

Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.
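To verify that the two encodings yield the same loss value, here is a pure-Python sketch (hypothetical helper names, not the TensorFlow implementations):

```python
import math

# Sketch: cross-entropy from one-hot labels vs. an integer class
# index gives the same number; only the label encoding differs.
def softmax(logits):
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cce(one_hot, logits):       # softmax_cross_entropy-style labels
    probs = softmax(logits)
    return -sum(y * math.log(p) for y, p in zip(one_hot, probs))

def scce(class_index, logits):  # sparse variant: integer label
    return -math.log(softmax(logits)[class_index])

logits = [2.0, 1.0, 0.1]
assert abs(cce([0, 1, 0], logits) - scce(1, logits)) < 1e-9
```

The one-hot vector [0, 1, 0] and the index 1 describe the same label, so both losses agree.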

\n", + "system": "" + }, + { + "instruction": "Will scikit-learn utilize GPU?", + "input": "", + "output": "

Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image with built-in support.

\n

Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support.

\n
\n

Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn?

\n

Deep learning and reinforcement learning both require a rich\nvocabulary to define an architecture, with deep learning additionally\nrequiring GPUs for efficient computing. However, neither of these fit\nwithin the design constraints of scikit-learn; as a result, deep\nlearning and reinforcement learning are currently out of scope for\nwhat scikit-learn seeks to achieve.

\n
\n

Extracted from http://scikit-learn.org/stable/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn

\n
\n

Will you add GPU support in scikit-learn?

\n

No, or at least not in the near future. The main reason is that GPU\nsupport will introduce many software dependencies and introduce\nplatform specific issues. scikit-learn is designed to be easy to\ninstall on a wide variety of platforms. Outside of neural networks,\nGPUs don\u2019t play a large role in machine learning today, and much\nlarger gains in speed can often be achieved by a careful choice of\nalgorithms.

\n
\n

Extracted from http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support

\n", + "system": "" + }, + { + "instruction": "Can Keras with Tensorflow backend be forced to use CPU or GPU at will?", + "input": "", + "output": "

If you want to force Keras to use CPU

\n\n

Way 1

\n\n
import os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"   # see issue #152\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\n
\n\n

before Keras / Tensorflow is imported.

\n\n

Way 2

\n\n

Run your script as

\n\n
$ CUDA_VISIBLE_DEVICES=\"\" ./your_keras_code.py\n
\n\n

See also

\n\n
    \n
  1. https://github.com/keras-team/keras/issues/152
  2. \n
  3. https://github.com/fchollet/keras/issues/4613
  4. \n
\n", + "system": "" + }, + { + "instruction": "How to apply gradient clipping in TensorFlow?", + "input": "", + "output": "

Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the AdamOptimizer.minimize() method.

\n

In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in this section in TensorFlow's API documentation. Specifically you'll need to substitute the call to the minimize() method with something like the following:

\n
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ngvs = optimizer.compute_gradients(cost)\ncapped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs]\ntrain_op = optimizer.apply_gradients(capped_gvs)\n
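For intuition, here is a minimal pure-Python sketch (not TensorFlow code) of what the element-wise clipping step does to a gradient:

```python
# Sketch of clip_by_value semantics: each gradient element is
# clamped into [-1, 1]; the shape is preserved.
def clip_by_value(grads, lo=-1.0, hi=1.0):
    return [max(lo, min(hi, g)) for g in grads]

print(clip_by_value([0.5, -3.2, 1.7, -0.1]))  # [0.5, -1.0, 1.0, -0.1]
```

Only the out-of-range elements change; in-range gradients pass through untouched, which is why clipping must happen between compute_gradients and apply_gradients rather than after the update.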
\n", + "system": "" + }, + { + "instruction": "ImportError: No module named tensorflow", + "input": "", + "output": "

Try installing tensorflow again with the whatever version you want and with option --ignore-installed like:

\n\n
pip install tensorflow==1.2.0 --ignore-installed\n
\n\n

I solved same issue using this command.

\n", + "system": "" + }, + { + "instruction": "Does model.compile() initialize all the weights and biases in Keras (tensorflow backend)?", + "input": "", + "output": "

When to use?

\n\n

If you're using compile, surely it must be after load_model(). After all, you need a model to compile. (PS: load_model automatically compiles the model with the optimizer that was saved along with the model)

\n\n

What does compile do?

\n\n

Compile defines the loss function, the optimizer and the metrics. That's all.

\n\n

It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights.

\n\n

You need a compiled model to train (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting.

\n\n

Do you need to use compile more than once?

\n\n

Only if:

\n\n\n\n

Consequences of compiling again:

\n\n

If you compile a model again, you will lose the optimizer states.

\n\n

This means that your training will suffer a little at the beginning until it adjusts the learning rate, the momentums, etc. But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine tuned weights).

\n", + "system": "" + }, + { + "instruction": "What is the difference between sparse_categorical_crossentropy and categorical_crossentropy?", + "input": "", + "output": "

Simply:

\n\n

Consider a classification problem with 5 categories (or classes).

\n\n

Consider now a classification problem with 3 classes.

\n\n

Many categorical models produce scce output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer cce output for model reliability.

\n

There are a number of situations to use scce, including:

\n\n

220405: response to "one-hot encoding" comments:

\n

one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: training weight is a product of category, which is 0 for all categories except for the given one.
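A one-hot encoder is tiny; here's a pure-Python sketch for reference (illustrative helper, not a library function):

```python
# Sketch: converting an integer class index (scce-style label)
# to a one-hot vector (cce-style label).
def to_one_hot(index, num_classes):
    return [1.0 if i == index else 0.0 for i in range(num_classes)]

print(to_one_hot(2, 5))  # [0.0, 0.0, 1.0, 0.0, 0.0]
```

Every position is 0 except the given category, which is exactly why the training weight product vanishes for all other categories.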

\n

cce and scce are a model OUTPUT. cce is a probability array of each category, totally 1.0. scce shows the MOST LIKELY category, totally 1.0.

\n

scce is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. cce is NOT one-hot.

\n", + "system": "" + }, + { + "instruction": "Can I use TensorBoard with Google Colab?", + "input": "", + "output": "

EDIT: You probably want to give the official %tensorboard magic a go, available from TensorFlow 1.13 onward.

\n
\n\n

Prior to the existence of the %tensorboard magic, the standard way to\nachieve this was to proxy network traffic to the Colab VM using\nngrok. A Colab example can be found here.

\n

These are the steps (the code snippets represent cells of type "code" in colab):

\n
    \n
  1. Get TensorBoard running in the background.
    \nInspired by this answer.

    \n
    LOG_DIR = '/tmp/log'\nget_ipython().system_raw(\n    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'\n    .format(LOG_DIR)\n)\n
    \n
  2. \n
  3. Download and unzip ngrok.
    \nReplace the link passed to wget with the correct download link for your OS.

    \n
    ! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n! unzip ngrok-stable-linux-amd64.zip\n
    \n
  4. \n
  5. Launch ngrok background process...

    \n
    get_ipython().system_raw('./ngrok http 6006 &')\n
    \n
  6. \n
\n

...and retrieve public url.\nSource

\n
    ! curl -s http://localhost:4040/api/tunnels | python3 -c \\\n        "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"\n
\n", + "system": "" + }, + { + "instruction": "How to set adaptive learning rate for GradientDescentOptimizer?", + "input": "", + "output": "

First of all, tf.train.GradientDescentOptimizer is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the tf.train.AdagradOptimizer and the tf.train.AdamOptimizer, and these can be used as drop-in replacements.

\n\n

However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the learning_rate argument to the tf.train.GradientDescentOptimizer constructor can be a Tensor object. This allows you to compute a different value for the learning rate in each step, for example:

\n\n
learning_rate = tf.placeholder(tf.float32, shape=[])\n# ...\ntrain_step = tf.train.GradientDescentOptimizer(\n    learning_rate=learning_rate).minimize(mse)\n\nsess = tf.Session()\n\n# Feed different values for learning rate to each training step.\nsess.run(train_step, feed_dict={learning_rate: 0.1})\nsess.run(train_step, feed_dict={learning_rate: 0.1})\nsess.run(train_step, feed_dict={learning_rate: 0.01})\nsess.run(train_step, feed_dict={learning_rate: 0.01})\n
\n\n

Alternatively, you could create a scalar tf.Variable that holds the learning rate, and assign it each time you want to change the learning rate.

\n", + "system": "" + }, + { + "instruction": "TensorFlow saving into/loading a graph from a file", + "input": "", + "output": "

There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. Taking each of your sub-questions in turn:

\n\n
    \n
  1. The checkpoint files (produced e.g. by calling saver.save() on a tf.train.Saver object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling tf.import_graph_def()), which tells TensorFlow what to do with those weights. Note that calling saver.save() also produces a file containing a MetaGraphDef, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See the tutorial for more details.

  2. \n
  3. tf.train.write_graph() only writes the graph structure; not the weights.

  4. \n
  5. Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.)

  6. \n
  7. A frozen graph can be loaded using tf.import_graph_def(). In this case, the weights are (typically) embedded in the graph, so you don't need to load a separate checkpoint.

  8. \n
  9. The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the inputName and outputName strings that are passed to TensorFlowClassifier.initializeTensorFlow().

  10. \n
  11. The GraphDef is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as the MetaGraphDef libraries, Keras, and skflow build on these mechanisms to provide more convenient ways to save and restore an entire model.

  12. \n
\n", + "system": "" + }, + { + "instruction": "What is the difference between np.mean and tf.reduce_mean?", + "input": "", + "output": "

numpy.mean and tensorflow.reduce_mean have the same functionality. They do the same thing, as you can see from the documentation for numpy and tensorflow. Let's look at an example:

\n
c = np.array([[3.,4], [5.,6], [6.,7]])\nprint(np.mean(c,1))\n\nMean = tf.reduce_mean(c,1)\nwith tf.Session() as sess:\n    result = sess.run(Mean)\n    print(result)\n
\n

Output

\n
[ 3.5  5.5  6.5]\n[ 3.5  5.5  6.5]\n
\n

Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes mean across (3,4) and (5,6) and (6,7), so 1 defines across which axis the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea.
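The same axis arithmetic can be spelled out in plain Python for the example matrix, which may make the axis choice clearer:

```python
# Row-wise mean (axis/reduction_indices = 1) and column-wise mean
# (axis = 0) for the example matrix, in plain Python.
c = [[3.0, 4.0], [5.0, 6.0], [6.0, 7.0]]

row_means = [sum(row) / len(row) for row in c]        # axis = 1
col_means = [sum(col) / len(col) for col in zip(*c)]  # axis = 0

print(row_means)  # [3.5, 5.5, 6.5]
```

row_means matches the output of both np.mean(c, 1) and tf.reduce_mean(c, 1) above.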

\n

Now what are the differences between them?

\n

You can compute the numpy operation anywhere on python. But in order to do a tensorflow operation, it must be done inside a tensorflow Session. You can read more about it here. So when you need to perform any computation for your tensorflow graph(or structure if you will), it must be done inside a tensorflow Session.

\n

Let's look at another example.

\n
npMean = np.mean(c)\nprint(npMean+1)\n\ntfMean = tf.reduce_mean(c)\nAdd = tfMean + 1\nwith tf.Session() as sess:\n    result = sess.run(Add)\n    print(result)\n
\n

We can increase the mean by 1 in numpy immediately, but in order to do it in tensorflow, you need to perform it in a Session; without using a Session you can't do that. In other words, when you write tfMean = tf.reduce_mean(c), tensorflow doesn't compute it then. It only computes it in a Session. But numpy computes it instantly, when you write np.mean().

\n

I hope it makes sense.

\n", + "system": "" + }, + { + "instruction": "What does global_step mean in Tensorflow?", + "input": "", + "output": "

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one. Have a look at optimizer.minimize().

\n\n

You can get the global_step value using tf.train.global_step().\nAlso handy are the utility methods tf.train.get_global_step or tf.train.get_or_create_global_step.

\n\n

0 is the initial value of the global step in this context.

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Tensor' object has no attribute 'numpy'", + "input": "", + "output": "

Since the accepted answer did not solve the problem for me, I thought it might be helpful for people who face this problem and already have tensorflow version >= 2.2.0 with eager execution enabled.

\n

The issue seems to be that during fitting with model.fit(),\nthe @tf.function decorator prohibits the execution of functions like tensor.numpy() for performance reasons.

\n

The solution for me was to pass the flag run_eagerly=True to the model.compile() like this:

\n
model.compile(..., run_eagerly=True)\n
\n", + "system": "" + }, + { + "instruction": "How to get stable results with TensorFlow, setting random seed", + "input": "", + "output": "

Setting the current TensorFlow random seed affects the current default graph only. Since you are creating a new graph for your training and setting it as default (with g.as_default():), you must set the random seed within the scope of that with block.

\n\n

For example, your loop should look like the following:

\n\n
for i in range(3):\n  g = tf.Graph()\n  with g.as_default():\n    tf.set_random_seed(1)\n    accuracy_result, average_error = network.train_network(\n        parameters, inputHeight, inputWidth, inputChannels, outputClasses)\n
\n\n

Note that this will use the same random seed for each iteration of the outer for loop. If you want to use a different—but still deterministic—seed in each iteration, you can use tf.set_random_seed(i + 1).

\n", + "system": "" + }, + { + "instruction": "Tensorflow set CUDA_VISIBLE_DEVICES within jupyter", + "input": "", + "output": "

You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit TensorFlow to first GPU.

\n\n
import os\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"   # see issue #152\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n
\n\n

You can double check that you have the correct devices visible to TF

\n\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n\n

I tend to use it from utility module like notebook_util

\n\n
import notebook_util\nnotebook_util.pick_gpu_lowest_memory()\nimport tensorflow as tf\n
\n", + "system": "" + }, + { + "instruction": "How to get Tensorflow tensor dimensions (shape) as int values?", + "input": "", + "output": "

To get the shape as a list of ints, do tensor.get_shape().as_list().

\n\n

To complete your tf.shape() call, try tensor2 = tf.reshape(tensor, tf.TensorShape([num_rows*num_cols, 1])). Or you can directly do tensor2 = tf.reshape(tensor, tf.TensorShape([-1, 1])) where its first dimension can be inferred.

\n", + "system": "" + }, + { + "instruction": "How to solve "AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key"?", + "input": "", + "output": "

The protobuf versions reported by pip show protobuf and protoc --version were different. The version in pip was a bit outdated.

\n

After I upgraded the pip version with

\n
pip install --upgrade protobuf\n
\n

the problem was solved.

\n", + "system": "" + }, + { + "instruction": "What does batch, repeat, and shuffle do with TensorFlow Dataset?", + "input": "", + "output": "

Update: Here is a small collaboration notebook for demonstration of this answer.

\n
\n

Imagine, you have a dataset: [1, 2, 3, 4, 5, 6], then:

\n

How ds.shuffle() works

\n

dataset.shuffle(buffer_size=3) will allocate a buffer of size 3 for picking random entries. This buffer will be connected to the source dataset.\nWe can imagine it like this:

\n
Random buffer\n   |\n   |   Source dataset where all other elements live\n   |         |\n   \u2193         \u2193\n[1,2,3] <= [4,5,6]\n
\n

Let's assume that entry 2 was taken from the random buffer. The free space is filled by the next element from the source dataset, that is, 4:

\n
2 <= [1,3,4] <= [5,6]\n
\n

We continue reading till nothing is left:

\n
1 <= [3,4,5] <= [6]\n5 <= [3,4,6] <= []\n3 <= [4,6]   <= []\n6 <= [4]     <= []\n4 <= []      <= []\n
\n

How ds.repeat() works

\n

As soon as all the entries are read from the dataset and you try to read the next element, the dataset will throw an error.\nThat's where ds.repeat() comes into play. It will re-initialize the dataset, making it again like this:

\n
[1,2,3] <= [4,5,6]\n
\n

What will ds.batch() produce

\n

The ds.batch() will take the first batch_size entries and make a batch out of them. So, a batch size of 3 for our example dataset will produce two batch records:

\n
[2,1,5]\n[3,6,4]\n
\n

As we have a ds.repeat() before the batch, the generation of the data will continue. But the order of the elements will be different, due to ds.shuffle(). What should be taken into account is that 6 will never be present in the first batch, due to the size of the random buffer.
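The buffer mechanism above can be simulated in a few lines of pure Python (an illustrative sketch, not tf.data's actual implementation), which also demonstrates why 6 can never land in the first batch:

```python
import random

# Pure-Python simulation of the shuffle buffer described above.
def buffered_shuffle(source, buffer_size, rng):
    it = iter(source)
    buf = [x for _, x in zip(range(buffer_size), it)]  # fill buffer
    out = []
    while buf:
        out.append(buf.pop(rng.randrange(len(buf))))   # random pick
        nxt = next(it, None)
        if nxt is not None:
            buf.append(nxt)                            # refill from source
    return out

rng = random.Random(0)
shuffled = buffered_shuffle([1, 2, 3, 4, 5, 6], buffer_size=3, rng=rng)
# 6 only enters the buffer after three picks have been made,
# so it can never appear among the first three output elements.
assert 6 not in shuffled[:3]
```

Whatever the seed, the first three picks come from {1..5}, matching the observation about the first batch.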

\n", + "system": "" + }, + { + "instruction": "When importing tensorflow, I get the following error: No module named 'numpy.core._multiarray_umath'", + "input": "", + "output": "

I also had the same issue.\nIt got resolved once I upgraded numpy from 1.15.4 to 1.16.1.

\n

If you're using pip:\npip install numpy --upgrade

\n

Numpy that came with Anaconda3 is of version 1.15.4. so i upgraded and it worked.

\n
\n

Side note: if you're also using scikit-image in your script, be aware that numpy 1.16.3 has a conflict with old versions of scikit-image (e.g. you may get ImportError: cannot import name '_validate_lengths'). In that case, pip install --upgrade scikit-image from terminal solved the issue for me.

\n", + "system": "" + }, + { + "instruction": "Using a pre-trained word embedding (word2vec or Glove) in TensorFlow", + "input": "", + "output": "

There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup().

\n\n
    \n
  1. Simply create W as a tf.constant() that takes embedding as its value:

    \n\n
    W = tf.constant(embedding, name=\"W\")\n
    \n\n

    This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples.

  2. \n
  3. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder():

    \n\n
    W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),\n                trainable=False, name=\"W\")\n\nembedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])\nembedding_init = W.assign(embedding_placeholder)\n\n# ...\nsess = tf.Session()\n\nsess.run(embedding_init, feed_dict={embedding_placeholder: embedding})\n
    \n\n

    This avoid storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False.

  4. \n
  5. If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following:

    \n\n
    W = tf.Variable(...)\n\nembedding_saver = tf.train.Saver({\"name_of_variable_in_other_model\": W})\n\n# ...\nsess = tf.Session()\nembedding_saver.restore(sess, \"checkpoint_filename.ckpt\")\n
  6. \n
\n", + "system": "" + }, + { + "instruction": "How to stack multiple lstm in keras?", + "input": "", + "output": "

You need to add return_sequences=True to the first layer so that its output tensor has ndim=3 (i.e. batch size, timesteps, hidden state).

\n\n

Please see the following example:

\n\n
# expected input data shape: (batch_size, timesteps, data_dim)\nmodel = Sequential()\nmodel.add(LSTM(32, return_sequences=True,\n               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32\nmodel.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32\nmodel.add(LSTM(32))  # return a single vector of dimension 32\nmodel.add(Dense(10, activation='softmax'))\n
\n\n

From: https://keras.io/getting-started/sequential-model-guide/ (search for \"stacked lstm\")

\n", + "system": "" + }, + { + "instruction": "How to choose cross-entropy loss in TensorFlow?", + "input": "", + "output": "

Preliminary facts

\n\n\n\n

Sigmoid functions family

\n\n\n\n

As stated earlier, the sigmoid loss function is for binary classification.\nBut the tensorflow functions are more general and allow you to do\nmulti-label classification, when the classes are independent.\nIn other words, tf.nn.sigmoid_cross_entropy_with_logits solves N\nbinary classifications at once.
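To illustrate the "N independent binary classifications" view, here is a pure-Python sketch of sigmoid cross-entropy (hypothetical helper names, not the TensorFlow implementations):

```python
import math

# Sketch: sigmoid cross-entropy treats each of N classes as an
# independent binary problem; labels need not sum to 1.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_ce(labels, logits):
    # One independent binary cross-entropy term per class.
    return [-(y * math.log(sigmoid(x)) + (1 - y) * math.log(1 - sigmoid(x)))
            for y, x in zip(labels, logits)]

# Multi-label example: classes 0 and 2 are both "on" at once.
losses = sigmoid_ce([1.0, 0.0, 1.0], [2.0, -1.0, 0.5])
assert len(losses) == 3  # one loss value per class
```

Unlike the softmax family, turning one class "on" does not force the others "off", which is exactly what makes multi-label classification possible.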

\n\n

The labels must be one-hot encoded or can contain soft class probabilities.

\n\n

tf.losses.sigmoid_cross_entropy in addition allows you to set in-batch weights,\ni.e. make some examples more important than others.\ntf.nn.weighted_cross_entropy_with_logits allows you to set class weights\n(remember, the classification is binary), i.e. make positive errors larger than\nnegative errors. This is useful when the training data is unbalanced.

\n\n
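A plain-Python sketch of the weighted variant's documented formula, where a pos_weight q scales the positive term (the helper name is mine; this is only an illustration of the math, not TF code):

```python
import math

def weighted_sigmoid_ce(logit, label, pos_weight):
    # Documented formula: (1 - z)*x + (1 + (q - 1)*z) * log(1 + exp(-x)),
    # written in a numerically stable way.
    l = 1 + (pos_weight - 1) * label
    return ((1 - label) * logit
            + l * (math.log1p(math.exp(-abs(logit))) + max(-logit, 0)))
```

With pos_weight > 1, a missed positive costs more than a false positive, which is the knob you want for unbalanced data.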

Softmax functions family

\n\n\n\n

These loss functions should be used for multinomial mutually exclusive classification,\ni.e. pick one out of N classes. Also applicable when N = 2.

\n\n

The labels must be one-hot encoded or can contain soft class probabilities:\na particular example can belong to class A with 50% probability and class B\nwith 50% probability. Note that strictly speaking it doesn't mean that\nit belongs to both classes, but one can interpret the probabilities this way.

\n\n
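As an illustration, a plain-Python sketch of softmax cross-entropy that accepts such soft labels (the helper name is mine):

```python
import math

def softmax_ce(logits, labels):
    # cross-entropy between soft labels and softmax(logits)
    m = max(logits)                      # shift for numerical stability
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return -sum(z * (x - log_sum) for z, x in zip(labels, logits))

# a soft label: 50% class A, 50% class B, 0% class C
loss = softmax_ce([2.0, 1.0, 0.1], [0.5, 0.5, 0.0])
```

With a one-hot label this reduces to the usual `-log(softmax(logits)[true_class])`.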

Just like in the sigmoid family, tf.losses.softmax_cross_entropy allows you\nto set the in-batch weights, i.e. make some examples more important than others.\nAs far as I know, as of tensorflow 1.3, there's no built-in way to set class weights.

\n\n

[UPD] In tensorflow 1.5, v2 version was introduced and the original softmax_cross_entropy_with_logits loss got deprecated. The only difference between them is that in a newer version, backpropagation happens into both logits and labels (here's a discussion why this may be useful).

\n\n

Sparse functions family

\n\n\n\n

Like ordinary softmax above, these loss functions should be used for\nmultinomial mutually exclusive classification, i.e. pick one out of N classes.\nThe difference is in labels encoding: the classes are specified as integers (class index),\nnot one-hot vectors. Obviously, this doesn't allow soft classes, but it\ncan save some memory when there are thousands or millions of classes.\nHowever, note that logits argument must still contain logits per each class,\nthus it consumes at least [batch_size, classes] memory.

\n\n
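A small plain-Python sketch of the only difference — the label is an integer index rather than a one-hot vector (the helper names are mine):

```python
import math

def softmax_ce(logits, labels):
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return -sum(z * (x - log_sum) for z, x in zip(labels, logits))

def sparse_softmax_ce(logits, class_index):
    # same loss, but the label is a single integer instead of a one-hot vector
    onehot = [1.0 if i == class_index else 0.0 for i in range(len(logits))]
    return softmax_ce(logits, onehot)
```

Note how the logits list still has one entry per class, even though the label is just one integer — which is exactly the `[batch_size, classes]` memory point above.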

Like above, the tf.losses version has a weights argument which allows you\nto set the in-batch weights.

\n\n

Sampled softmax functions family

\n\n\n\n

These functions provide another alternative for dealing with a huge number of classes.\nInstead of computing and comparing an exact probability distribution, they compute\na loss estimate from a random sample.

\n\n
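A very rough plain-Python sketch of the idea — score the true class plus a random handful of classes instead of all of them (this deliberately omits the sampling-distribution correction the real ops apply, so treat it only as an intuition aid; names are mine):

```python
import math
import random

def full_softmax_ce(logits, true_idx):
    # exact loss: log(sum_j exp(x_j)) - x_true
    m = max(logits)
    return m + math.log(sum(math.exp(x - m) for x in logits)) - logits[true_idx]

def sampled_softmax_ce(logits, true_idx, num_sampled, rng):
    # crude estimate: only score the true class plus a random sample of
    # classes; the real ops also correct for the sampling probabilities
    picked = sorted({true_idx} | set(rng.sample(range(len(logits)), num_sampled)))
    sub = [logits[i] for i in picked]
    return full_softmax_ce(sub, picked.index(true_idx))
```

With thousands of classes, the estimate touches only `num_sampled + 1` logits per example instead of all of them.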

The arguments weights and biases specify a separate fully-connected layer that\nis used to compute the logits for a chosen sample.

\n\n

Like above, labels are not one-hot encoded, but have the shape [batch_size, num_true].

\n\n

Sampled functions are only suitable for training. At test time, it's recommended to\nuse a standard softmax loss (either sparse or one-hot) to get an actual distribution.

\n\n

Another alternative loss is tf.nn.nce_loss, which performs noise-contrastive estimation (if you're interested, see this very detailed discussion). I've included this function in the softmax family, because NCE guarantees approximation to softmax in the limit.

\n", + "system": "" + }, + { + "instruction": "How to add regularizations in TensorFlow?", + "input": "", + "output": "

As you say in the second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized.

\n

The losses are collected in the graph, and you need to manually add them to your cost function like this.

\n
  reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\n  reg_constant = 0.01  # Choose an appropriate one.\n  loss = my_normal_loss + reg_constant * sum(reg_losses)\n
\n", + "system": "" + }, + { + "instruction": "Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads", + "input": "", + "output": "

The inter_op_parallelism_threads and intra_op_parallelism_threads options are documented in the source of the tf.ConfigProto protocol buffer. These options configure two thread pools used by TensorFlow to parallelize execution, as the comments describe:

\n
// The execution of an individual op (for some op types) can be\n// parallelized on a pool of intra_op_parallelism_threads.\n// 0 means the system picks an appropriate number.\nint32 intra_op_parallelism_threads = 2;\n\n// Nodes that perform blocking operations are enqueued on a pool of\n// inter_op_parallelism_threads available in each process.\n//\n// 0 means the system picks an appropriate number.\n//\n// Note that the first Session created in the process sets the\n// number of threads for all future sessions unless use_per_session_threads is\n// true or session_inter_op_thread_pool is configured.\nint32 inter_op_parallelism_threads = 5;\n
\n

There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:

\n\n
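As a rough stdlib analogy (the pool sizes are made up), one pool dispatches independent "ops" while each op fans its own work out over a second pool — the first playing the role of inter_op_parallelism_threads, the second of intra_op_parallelism_threads:

```python
from concurrent.futures import ThreadPoolExecutor

INTER_OP, INTRA_OP = 2, 4  # hypothetical pool sizes

intra_pool = ThreadPoolExecutor(max_workers=INTRA_OP)

def run_op(chunks):
    # one "op" whose internal work is split across the intra-op pool
    return sum(intra_pool.map(sum, chunks))

with ThreadPoolExecutor(max_workers=INTER_OP) as inter_pool:
    # two independent "nodes" run concurrently on the inter-op pool
    futures = [inter_pool.submit(run_op, [range(10), range(10, 20)]),
               inter_pool.submit(run_op, [range(20), range(20, 40)])]
    results = [f.result() for f in futures]

intra_pool.shutdown()
```

This is only an analogy for how the two settings compose; TensorFlow's actual scheduling is internal to the runtime.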

Finally, both configuration options take a default value of 0, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.

\n", + "system": "" + }, + { + "instruction": "How do display different runs in TensorBoard?", + "input": "", + "output": "

In addition to TensorBoard scanning subdirectories (so you can pass a directory containing the directories with your runs), you can also pass multiple directories to TensorBoard explicitly and give custom names (example taken from the --help output):

\n
tensorboard --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2\n
\n

More information can be found at the TensorBoard documentation.

\n

In recent versions of TensorBoard, aliasing this way requires a different argument; however, its use is discouraged (quote from the current documentation on github - linked above):

\n
\n

Logdir & Logdir_spec (Legacy Mode)

\n

You may also pass a comma separated list of log directories, and\nTensorBoard will watch each directory. You can also assign names to\nindividual log directories by putting a colon between the name and the\npath, as in

\n

tensorboard --logdir_spec name1:/path/to/logs/1,name2:/path/to/logs/2

\n

This flag (--logdir_spec) is discouraged and can usually be avoided.\nTensorBoard walks log directories recursively; for finer-grained\ncontrol, prefer using a symlink tree. Some features may not work when\nusing --logdir_spec instead of --logdir.

\n
\n", + "system": "" + }, + { + "instruction": "How to remove cuda completely from ubuntu?", + "input": "", + "output": "

From cuda 11.4 onwards, an uninstaller script has been provided. Use it for the uninstallation:

\n
# To uninstall cuda\nsudo /usr/local/cuda-11.4/bin/cuda-uninstaller \n# To uninstall nvidia\nsudo /usr/bin/nvidia-uninstall\n
\n

If you are using cuda 11.3 or earlier refer to the section below for how to remove it.

\n

If you run into the problem of broken packages, it is because you added the Nvidia repo to apt's sources.list. Run the following to edit it:

\n
sudo vim /etc/apt/sources.list\n
\n

Go to the line containing the reference to the Nvidia repo and comment it out by prepending # to the line, e.g.:

\n
#deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /\n
\n

Then run

\n
sudo apt-get update \n
\n

This will fix the problem.

\n

Removing Cuda 11.3 and earlier

\n

There are two things - nvidia drivers and cuda toolkit - which you may want to remove.\nIf you have installed using apt-get, use the following to remove the packages completely from the system:

\n

To remove cuda toolkit:

\n
sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*" \n
\n

To remove Nvidia drivers:

\n
sudo apt-get --purge remove "*nvidia*"\n
\n

If you have installed via source files (assuming the default location to be /usr/local) then remove it using:

\n
sudo rm -rf /usr/local/cuda*\n
\n

References: Nvidia uninstallation

\n", + "system": "" + }, + { + "instruction": "What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices?", + "input": "", + "output": "

from_tensors combines the input and returns a dataset with a single element:

\n
>>> t = tf.constant([[1, 2], [3, 4]])\n>>> ds = tf.data.Dataset.from_tensors(t)\n>>> [x for x in ds]\n[<tf.Tensor: shape=(2, 2), dtype=int32, numpy=\n array([[1, 2],\n        [3, 4]], dtype=int32)>]\n
\n

from_tensor_slices creates a dataset with a separate element for each row of the input tensor:

\n
>>> t = tf.constant([[1, 2], [3, 4]])\n>>> ds = tf.data.Dataset.from_tensor_slices(t)\n>>> [x for x in ds]\n[<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>,\n <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>]\n
\n", + "system": "" + }, + { + "instruction": "Higher validation accuracy, than training accurracy using Tensorflow and Keras", + "input": "", + "output": "

This happens when you use Dropout, since its behaviour differs between training and testing.

\n\n
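A toy plain-Python sketch of the two behaviours (note: modern implementations, Keras included, typically use "inverted" dropout, scaling the kept features up at training time rather than scaling down at test time; the helper names are mine):

```python
import random

def dropout_train(features, rate, rng):
    # "inverted" dropout: drop a fraction `rate`, scale survivors by 1/(1-rate)
    return [0.0 if rng.random() < rate else f / (1 - rate) for f in features]

def dropout_test(features, rate):
    # at test time every feature passes through unchanged
    return list(features)
```

The scaling keeps the expected activation the same in both modes, which is why you can compare the two accuracies at all.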

When training, a percentage of the features are set to zero (50% in your case since you are using Dropout(0.5)). When testing, all features are used (and are scaled appropriately). So the model at test time is more robust - and can lead to higher testing accuracies.

\n", + "system": "" + }, + { + "instruction": "Using Keras & Tensorflow with AMD GPU", + "input": "", + "output": "

I'm writing an OpenCL 1.2 backend for Tensorflow at https://github.com/hughperkins/tensorflow-cl

\n\n

This fork of tensorflow for OpenCL has the following characteristics:

\n\n\n\n

This is not the only OpenCL fork of Tensorflow available. There is also a fork being developed by Codeplay https://www.codeplay.com , using Computecpp, https://www.codeplay.com/products/computesuite/computecpp Their fork has stronger requirements than my own, as far as I know, in terms of which specific GPU devices it works on. You would need to check the Platform Support Notes (at the bottom of the computecpp page), to determine whether your device is supported. The codeplay fork is actually an official Google fork, which is here: https://github.com/benoitsteiner/tensorflow-opencl

\n", + "system": "" + }, + { + "instruction": "How can I use a pre-trained neural network with grayscale images?", + "input": "", + "output": "

The model's architecture cannot be changed because the weights have been trained for a specific input configuration. Replacing the first layer with your own would pretty much render the rest of the weights useless.

\n\n

-- Edit: elaboration suggested by Prune--
\nCNNs are built so that as they go deeper, they can extract high-level features derived from the lower-level features that the previous layers extracted. By removing the initial layers of a CNN, you are destroying that hierarchy of features because the subsequent layers won't receive the features that they are supposed to as their input. In your case the second layer has been trained to expect the features of the first layer. By replacing your first layer with random weights, you are essentially throwing away any training that has been done on the subsequent layers, as they would need to be retrained. I doubt that they could retain any of the knowledge learned during the initial training.
\n--- end edit ---

\n\n

There is an easy way, though, to make your model work with grayscale images: you just need to make the image appear to be RGB. The easiest way to do so is to repeat the image array 3 times on a new dimension. Because you will have the same image over all 3 channels, the performance of the model should be the same as it was on RGB images.

\n\n

In numpy this can be easily done like this:

\n\n
print(grayscale_batch.shape)  # (64, 224, 224)\nrgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1)\nprint(rgb_batch.shape)  # (64, 224, 224, 3)\n
\n\n

The way this works is that it first creates a new dimension (to place the channels) and then it repeats the existing array 3 times on this new dimension.

\n\n

I'm also pretty sure that keras' ImageDataGenerator can load grayscale images as RGB.

\n", + "system": "" + }, + { + "instruction": "What is the purpose of the Tensorflow Gradient Tape?", + "input": "", + "output": "

With eager execution enabled, Tensorflow will calculate the values of tensors as they occur in your code. This means that it won't precompute a static graph for which inputs are fed in through placeholders. It also means that, to backpropagate errors, you have to keep track of the gradients of your computation and then apply these gradients to an optimiser.

\n\n

This is very different from running without eager execution, where you would build a graph and then simply use sess.run to evaluate your loss and then pass this into an optimiser directly.

\n\n

Fundamentally, because tensors are evaluated immediately, you don't have a graph from which to calculate gradients, and so you need a gradient tape. It is not so much that it is just used for visualisation, but more that you cannot implement gradient descent in eager mode without it.

\n\n
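If it helps, here is a toy, pure-Python "tape" (nothing like TF's real API — names and structure are mine) showing why recording the forward pass is enough to recover gradients afterwards:

```python
class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._backward = lambda: None  # filled in when an op produces this Var

class Tape:
    """Records each op during the forward pass, replays them in reverse to
    accumulate gradients (a minimal reverse-mode autodiff sketch)."""
    def __init__(self):
        self.recorded = []

    def add(self, a, b):
        out = Var(a.value + b.value)
        def backward():            # d(a+b)/da = d(a+b)/db = 1
            a.grad += out.grad
            b.grad += out.grad
        out._backward = backward
        self.recorded.append(out)
        return out

    def mul(self, a, b):
        out = Var(a.value * b.value)
        def backward():            # d(ab)/da = b, d(ab)/db = a
            a.grad += b.value * out.grad
            b.grad += a.value * out.grad
        out._backward = backward
        self.recorded.append(out)
        return out

    def gradient(self, output):
        output.grad = 1.0
        for node in reversed(self.recorded):
            node._backward()

tape = Tape()
x = Var(2.0)
three = Var(3.0)
y = tape.add(tape.mul(x, x), tape.mul(three, x))  # y = x*x + 3*x
tape.gradient(y)                                  # dy/dx = 2*x + 3 = 7 at x=2
```

Without the recording step there is nothing to walk backwards over — which is exactly the situation eager execution is in without a tape.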

Obviously, Tensorflow could just keep track of every gradient for every computation on every tf.Variable. However, that could be a huge performance bottleneck. They expose a gradient tape so that you can control what areas of your code need the gradient information. Note that in non-eager mode, this will be statically determined based on the computational branches that are descendants of your loss but in eager mode there is no static graph and so no way of knowing.

\n", + "system": "" + }, + { + "instruction": "On Windows, running "import tensorflow" generates No module named "_pywrap_tensorflow" error", + "input": "", + "output": "

The problem was the cuDNN Library for me - for whatever reason cudnn-8.0-windows10-x64-v6.0 was NOT working - I used cudnn-8.0-windows10-x64-v5.1 - ALL GOOD!

\n\n

My setup working with Win10 64 and the Nvidia GTX780M:

\n\n\n\n

If you run Windows 32 be sure to get the 32 bit versions of the files mentioned above.

\n", + "system": "" + }, + { + "instruction": "How to do Xavier initialization on TensorFlow", + "input": "", + "output": "

Since version 0.8 there is a Xavier initializer, see here for the docs.

\n\n

You can use something like this:

\n\n
W = tf.get_variable(\"W\", shape=[784, 256],\n           initializer=tf.contrib.layers.xavier_initializer())\n
\n", + "system": "" + }, + { + "instruction": "How to assign a value to a TensorFlow variable?", + "input": "", + "output": "

In TF1, the statement x.assign(1) does not actually assign the value 1 to x, but rather creates a tf.Operation that you have to explicitly run to update the variable.* A call to Operation.run() or Session.run() can be used to run the operation:

\n\n
assign_op = x.assign(1)\nsess.run(assign_op)  # or `assign_op.op.run()`\nprint(x.eval())\n# ==> 1\n
\n\n

(* In fact, it returns a tf.Tensor, corresponding to the updated value of the variable, to make it easier to chain assignments.)

\n\n

However, in TF2 x.assign(1) will now assign the value eagerly:

\n\n
x.assign(1)\nprint(x.numpy())\n# ==> 1\n
\n", + "system": "" + }, + { + "instruction": "How to *actually* read CSV data in TensorFlow?", + "input": "", + "output": "

I think you are mixing up imperative and graph-construction parts here. The operation tf.train.shuffle_batch creates a new queue node, and a single node can be used to process the entire dataset. So I think you are hanging because you created a bunch of shuffle_batch queues in your for loop and didn't start queue runners for them.

\n\n

Normal input pipeline usage looks like this:

\n\n
    \n
  1. Add nodes like shuffle_batch to input pipeline
  2. \n
  3. (optional, to prevent unintentional graph modification) finalize graph
  4. \n
\n\n

--- end of graph construction, beginning of imperative programming --

\n\n
    \n
  1. tf.train.start_queue_runners
  2. \n
  3. while(True): session.run()
  4. \n
\n\n

To be more scalable (to avoid the Python GIL), you could generate all of your data using the TensorFlow pipeline. However, if performance is not critical, you can hook up a numpy array to an input pipeline by using slice_input_producer. Here's an example with some Print nodes to see what's going on (messages in Print go to stdout when the node is run):

\n\n
tf.reset_default_graph()\n\nnum_examples = 5\nnum_features = 2\ndata = np.reshape(np.arange(num_examples*num_features), (num_examples, num_features))\nprint(data)\n\n(data_node,) = tf.slice_input_producer([tf.constant(data)], num_epochs=1, shuffle=False)\ndata_node_debug = tf.Print(data_node, [data_node], \"Dequeueing from data_node \")\ndata_batch = tf.train.batch([data_node_debug], batch_size=2)\ndata_batch_debug = tf.Print(data_batch, [data_batch], \"Dequeueing from data_batch \")\n\nsess = tf.InteractiveSession()\nsess.run(tf.initialize_all_variables())\nsess.run(tf.initialize_local_variables())  # needed for the num_epochs counter\ntf.get_default_graph().finalize()\ntf.train.start_queue_runners()\n\ntry:\n  while True:\n    print(sess.run(data_batch_debug))\nexcept tf.errors.OutOfRangeError as e:\n  print(\"No more inputs.\")\n
\n\n

You should see something like this

\n\n
[[0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]]\n[[0 1]\n [2 3]]\n[[4 5]\n [6 7]]\nNo more inputs.\n
\n\n

The \"8, 9\" numbers didn't fill up the full batch, so they didn't get produced. Also, tf.Print messages are printed to sys.stdout, so they show up separately in the terminal for me.

\n\n

PS: a minimal example of connecting batch to a manually initialized queue is in github issue 2193

\n\n

Also, for debugging purposes you might want to set timeout on your session so that your IPython notebook doesn't hang on empty queue dequeues. I use this helper function for my sessions

\n\n
def create_session():\n  config = tf.ConfigProto(log_device_placement=True)\n  config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM\n  config.operation_timeout_in_ms=60000   # terminate on long hangs\n  # create interactive session to register a default session\n  sess = tf.InteractiveSession(\"\", config=config)\n  return sess\n
\n\n

Scalability Notes:

\n\n
    \n
  1. tf.constant inlines copy of your data into the Graph. There's a fundamental limit of 2GB on size of Graph definition so that's an upper limit on size of data
  2. \n
  3. You could get around that limit by using v=tf.Variable and saving the data into there by running v.assign_op with a tf.placeholder on right-hand side and feeding numpy array to the placeholder (feed_dict)
  4. \n
  5. That still creates two copies of data, so to save memory you could make your own version of slice_input_producer which operates on numpy arrays, and uploads rows one at a time using feed_dict
  6. \n
\n", + "system": "" + }, + { + "instruction": "In Keras, what exactly am I configuring when I create a stateful `LSTM` layer with N `units`?", + "input": "", + "output": "

You can check this question for further information, although it is based on Keras-1.x API.

\n\n

Basically, the unit means the dimension of the inner cells in LSTM. Because in LSTM, the dimension of the inner cell (C_t and C_{t-1} in the graph), the output mask (o_t in the graph) and the hidden/output state (h_t in the graph) should have the SAME dimension, your output's dimension should be unit-length as well.

\n\n

And LSTM in Keras defines exactly one LSTM block, whose cells are of unit length. If you set return_sequences=True, it will return something with shape: (batch_size, timespan, unit). If false, then it just returns the last output in shape (batch_size, unit).

\n\n
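One way to sanity-check these dimensions is the layer's parameter count. Assuming the standard Keras LSTM layout — 4 gates, each with an input kernel, a recurrent kernel and a bias — the count follows directly from units and input_dim (the helper name is mine):

```python
def lstm_param_count(units, input_dim):
    # 4 gates, each with: input kernel (input_dim x units),
    # recurrent kernel (units x units), and bias (units)
    return 4 * (input_dim * units + units * units + units)

# e.g. LSTM(32) on inputs with 16 features
count = lstm_param_count(32, 16)
```

Comparing this against model.summary() is a quick way to confirm what units actually controls.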

As for the input, you should provide input for every timestamp. Basically, the shape is like (batch_size, timespan, input_dim), where input_dim can be different from the unit. If you just want to provide input at the first step, you can simply pad your data with zeros at other time steps.

\n", + "system": "" + }, + { + "instruction": "RuntimeError: tf.placeholder() is not compatible with eager execution", + "input": "", + "output": "

I found an easy solution here: disable Tensorflow eager execution

\n\n

Basically it is:

\n\n

tf.compat.v1.disable_eager_execution()

\n\n

With this, you disable the eager execution that is active by default, and you don't need to touch the rest of the code much.

\n", + "system": "" + }, + { + "instruction": "Making predictions with a TensorFlow model", + "input": "", + "output": "

In the \"Deep MNIST for Experts\" example, see this line:

\n\n
\n

We can now implement our regression model. It only takes one line! We\n multiply the vectorized input images x by the weight matrix W, add the\n bias b, and compute the softmax probabilities that are assigned to\n each class.

\n\n
y = tf.nn.softmax(tf.matmul(x,W) + b)\n
\n
\n\n

Just pull on node y and you'll have what you want.

\n\n
feed_dict = {x: [your_image]}\nclassification = sess.run(y, feed_dict)  # sess is your tf.Session\nprint(classification)\n
\n\n

This applies to just about any model you create - you'll have computed the prediction probabilities as one of the last steps before computing the loss.

\n", + "system": "" + }, + { + "instruction": "How to inspect a Tensorflow .tfrecord file?", + "input": "", + "output": "

Found it!

\n\n
import tensorflow as tf\n\nfor example in tf.python_io.tf_record_iterator(\"data/foobar.tfrecord\"):\n    print(tf.train.Example.FromString(example))\n
\n\n

You can also add:

\n\n
from google.protobuf.json_format import MessageToJson\n...\njsonMessage = MessageToJson(tf.train.Example.FromString(example))\n
\n", + "system": "" + }, + { + "instruction": "Clearing Tensorflow GPU memory after model execution", + "input": "", + "output": "

You can use the numba library to release all the GPU memory

\n
pip install numba \n
\n
from numba import cuda \ndevice = cuda.get_current_device()\ndevice.reset()\n
\n

This will release all the memory

\n", + "system": "" + }, + { + "instruction": "How do I disable TensorFlow's eager execution?", + "input": "", + "output": "

Assuming you are using the Tensorflow 2.0 preview release, which has eager execution enabled by default: there is a disable_eager_execution() in the v1 API, which you can put at the front of your code like:

\n
import tensorflow as tf\n\ntf.compat.v1.disable_eager_execution()\n
\n

On the other hand, if you are not using 2.0 preview, please check if you accidentally enabled eager execution somewhere.

\n", + "system": "" + }, + { + "instruction": "How to import keras from tf.keras in Tensorflow?", + "input": "", + "output": "

Use the keras module from tensorflow like this:

\n\n

import tensorflow as tf

\n\n

Import classes

\n\n

from tensorflow.python.keras.layers import Input, Dense

\n\n

or use directly

\n\n

dense = tf.keras.layers.Dense(...)

\n\n

EDIT Tensorflow 2

\n\n

from tensorflow.keras.layers import Input, Dense

\n\n

and the rest stays the same.

\n", + "system": "" + }, + { + "instruction": "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize,", + "input": "", + "output": "

I've seen this error message for three different reasons, with different solutions:

\n

1. You have cache issues

\n

I regularly work around this error by shutting down my python process, removing the ~/.nv directory (on linux, rm -rf ~/.nv), and restarting the Python process. I don't exactly know why this works. It's probably at least partly related to the second option:

\n

2. You're out of memory

\n

The error can also show up if you run out of graphics card RAM. With an nvidia GPU you can check graphics card memory usage with nvidia-smi. This will give you a readout of how much GPU RAM you have in use (something like 6025MiB / 6086MiB if you're almost at the limit) as well as a list of what processes are using GPU RAM.

\n

If you've run out of RAM, you'll need to restart the process (which should free up the RAM) and then take a less memory-intensive approach. A few options are:

\n\n
import keras\nimport tensorflow as tf\n\nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.9  # 0.6 sometimes works better for folks\nkeras.backend.tensorflow_backend.set_session(tf.Session(config=config))\n
\n

This can slow down your model evaluation if not used together with the items above, presumably since the large data set will have to be swapped in and out to fit into the small amount of memory you've allocated.

\n

A second option is to have TensorFlow start out using only a minimum amount of memory and then allocate more as needed (documented here):

\n
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'\n
\n

3. You have incompatible versions of CUDA, TensorFlow, NVIDIA drivers, etc.

\n

If you've never had similar models working, you're not running out of VRAM and your cache is clean, I'd go back and set up CUDA + TensorFlow using the best available installation guide - I have had the most success with following the instructions at https://www.tensorflow.org/install/gpu rather than those on the NVIDIA / CUDA site. Lambda Stack is also a good way to go.

\n", + "system": "" + }, + { + "instruction": "NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array", + "input": "", + "output": "

For me, the issue occurred when upgrading from numpy 1.19 to 1.20 and using ray's RLlib, which uses tensorflow 2.2 internally.\nSimply downgrading with

\n
pip install numpy==1.19.5\n
\n

solved the problem; the error did not occur anymore.

\n

Update (comment by @codeananda): You can also update to a newer TensorFlow (2.6+) version now that resolves the problem (pip install -U tensorflow).

\n", + "system": "" + }, + { + "instruction": "Get the value of some weights in a model trained by TensorFlow", + "input": "", + "output": "

In TensorFlow, trained weights are represented by tf.Variable objects. If you created a tf.Variable—e.g. called v—yourself, you can get its value as a NumPy array by calling sess.run(v) (where sess is a tf.Session).

\n\n

If you do not currently have a pointer to the tf.Variable, you can get a list of the trainable variables in the current graph by calling tf.trainable_variables(). This function returns a list of all trainable tf.Variable objects in the current graph, and you can select the one that you want by matching the v.name property. For example:

\n\n
# Desired variable is called \"tower_2/filter:0\".\nvar = [v for v in tf.trainable_variables() if v.name == \"tower_2/filter:0\"][0]\n
\n", + "system": "" + }, + { + "instruction": "How could I use batch normalization in TensorFlow?", + "input": "", + "output": "

Update July 2016 The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either contrib/layers, tflearn, or slim.

\n\n

Previous answer if you want to DIY:\nThe documentation string for this has improved since the release - see the docs comment in the master branch instead of the one you found. It clarifies, in particular, that it's the output from tf.nn.moments.

\n\n

You can see a very simple example of its use in the batch_norm test code. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!):

\n\n
\"\"\"A helper class for managing batch normalization state.                   \n\nThis class is designed to simplify adding batch normalization               \n(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by                    \nmanaging the state variables associated with it.                            \n\nImportant use note:  The function get_assigner() returns                    \nan op that must be executed to save the updated state.                      \nA suggested way to do this is to make execution of the                      \nmodel optimizer force it, e.g., by:                                         \n\n  update_assignments = tf.group(bn1.get_assigner(),                         \n                                bn2.get_assigner())                         \n  with tf.control_dependencies([optimizer]):                                \n    optimizer = tf.group(update_assignments)                                \n\n\"\"\"\n\nimport tensorflow as tf\n\n\nclass ConvolutionalBatchNormalizer(object):\n  \"\"\"Helper class that groups the normalization logic and variables.        \n\n  Use:                                                                      \n      ewma = tf.train.ExponentialMovingAverage(decay=0.99)                  \n      bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)           \n      update_assignments = bn.get_assigner()                                \n      x = bn.normalize(y, train=training?)                                  \n      (the output x will be batch-normalized).                              
\n  \"\"\"\n\n  def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):\n    self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),\n                            trainable=False)\n    self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),\n                                trainable=False)\n    self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))\n    self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))\n    self.ewma_trainer = ewma_trainer\n    self.epsilon = epsilon\n    self.scale_after_norm = scale_after_norm\n\n  def get_assigner(self):\n    \"\"\"Returns an EWMA apply op that must be invoked after optimization.\"\"\"\n    return self.ewma_trainer.apply([self.mean, self.variance])\n\n  def normalize(self, x, train=True):\n    \"\"\"Returns a batch-normalized version of x.\"\"\"\n    if train:\n      mean, variance = tf.nn.moments(x, [0, 1, 2])\n      assign_mean = self.mean.assign(mean)\n      assign_variance = self.variance.assign(variance)\n      with tf.control_dependencies([assign_mean, assign_variance]):\n        return tf.nn.batch_norm_with_global_normalization(\n            x, mean, variance, self.beta, self.gamma,\n            self.epsilon, self.scale_after_norm)\n    else:\n      mean = self.ewma_trainer.average(self.mean)\n      variance = self.ewma_trainer.average(self.variance)\n      local_beta = tf.identity(self.beta)\n      local_gamma = tf.identity(self.gamma)\n      return tf.nn.batch_norm_with_global_normalization(\n          x, mean, variance, local_beta, local_gamma,\n          self.epsilon, self.scale_after_norm)\n
\n\n

Note that I called it a ConvolutionalBatchNormalizer because it pins the use of tf.nn.moments to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.

\n\n

Feedback appreciated if you use it.

\n", + "system": "" + }, + { + "instruction": "tensorflow:AttributeError: 'module' object has no attribute 'mul'", + "input": "", + "output": "

According to the tensorflow 1.0.0 release notes,

\n\n
\n

tf.mul, tf.sub and tf.neg are deprecated in favor of tf.multiply, tf.subtract and tf.negative.

\n
\n\n

You'll need to replace tf.mul with tf.multiply.

\n", + "system": "" + }, + { + "instruction": "What is the default kernel initializer in tf.layers.conv2d and tf.layers.dense?", + "input": "", + "output": "\n\n

Great question! It is quite a trick to find out!

\n\n\n\n

In code:

\n\n
self.kernel = vs.get_variable('kernel',\n                                  shape=kernel_shape,\n                                  initializer=self.kernel_initializer,\n                                  regularizer=self.kernel_regularizer,\n                                  trainable=True,\n                                  dtype=self.dtype)\n
\n\n

Next step: what does the variable scope do when the initializer is None?

\n\n

Here it says:

\n\n
\n

If initializer is None (the default), the default initializer passed in\n the constructor is used. If that one is None too, we use a new\n glorot_uniform_initializer.

\n
\n\n

So the answer is: it uses the glorot_uniform_initializer

\n\n

For completeness the definition of this initializer:

\n\n
\n

The Glorot uniform initializer, also called Xavier uniform initializer.\n It draws samples from a uniform distribution within [-limit, limit]\n where limit is sqrt(6 / (fan_in + fan_out))\n where fan_in is the number of input units in the weight tensor\n and fan_out is the number of output units in the weight tensor.\n Reference: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

\n
\n\n
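A plain-Python sketch of that definition (the function name is mine):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, rng=random):
    # limit = sqrt(6 / (fan_in + fan_out)); sample U(-limit, limit)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

# e.g. a dense layer with 784 inputs and 256 outputs
w = glorot_uniform(784, 256)
```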

Edit: this is what I found in the code and documentation. Perhaps you could verify that the initialization looks like this by running eval on the weights!

\n", + "system": "" + }, + { + "instruction": "What is the proper way to install TensorFlow on Apple M1 in 2022", + "input": "", + "output": "

Conda Environment YAMLs

\n

TensorFlow 2.13+

\n

Distilling the official directions from Apple (as of 24 November 2024), one would create an environment using the following YAML:

\n

tf-metal-arm64.yaml

\n
name: tf-metal\nchannels:\n  - conda-forge\n  - nodefaults\ndependencies:\n  - python=3.11  ## specify desired version\n  - pip\n\n  ## uncomment for use with Jupyter\n  ## - ipykernel\n\n  ## PyPI packages\n  - pip:\n    - tensorflow\n    - tensorflow-metal\n
\n

TensorFlow <= 2.12 (original directions)

\n

Distilling the official directions from Apple (as of 13 July 2022), one would create an environment using the following YAML:

\n

tf-metal-arm64.yaml

\n
name: tf-metal\nchannels:\n  - apple\n  - conda-forge\ndependencies:\n  - python=3.9  ## specify desired version\n  - pip\n  - tensorflow-deps\n\n  ## uncomment for use with Jupyter\n  ## - ipykernel\n\n  ## PyPI packages\n  - pip:\n    - tensorflow-macos\n    - tensorflow-metal  ## optional, but recommended\n
\n

Edit to include additional packages.

\n

Creating environment

\n

Before creating the environment we need to know what the base architecture is. Check this with conda config --show subdir.

\n

Native (osx-arm64) base

\n

If you have installed a native osx-arm64 Miniforge variant, then you can create with:

\n
conda env create -n my_tf_env -f tf-metal-arm64.yaml\n
\n

Emulated (osx-64) base

\n

If you do not have a native base, then you will need to override the subdir setting:

\n
## create env\nCONDA_SUBDIR=osx-arm64 conda env create -n my_tf_env -f tf-metal-arm64.yaml\n\n## activate\nconda activate my_tf_env\n\n## permanently set the subdir\nconda config --env --set subdir osx-arm64\n
\n

Be sure to always activate the environment before installing or updating packages.

\n", + "system": "" + }, + { + "instruction": "Can I measure the execution time of individual operations with TensorFlow?", + "input": "", + "output": "

I have used the Timeline object to get the time of execution for each node in the graph:

\n\n\n\n
\n\n

Here is an example program that measures the performance of a matrix multiplication:

\n\n
import tensorflow as tf\nfrom tensorflow.python.client import timeline\n\nx = tf.random_normal([1000, 1000])\ny = tf.random_normal([1000, 1000])\nres = tf.matmul(x, y)\n\n# Run the graph with full trace option\nwith tf.Session() as sess:\n    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n    run_metadata = tf.RunMetadata()\n    sess.run(res, options=run_options, run_metadata=run_metadata)\n\n    # Create the Timeline object, and write it to a json\n    tl = timeline.Timeline(run_metadata.step_stats)\n    ctf = tl.generate_chrome_trace_format()\n    with open('timeline.json', 'w') as f:\n        f.write(ctf)\n
\n\n
\n\n

You can then open Google Chrome, go to the page chrome://tracing and load the timeline.json file.\nYou should see something like:

\n\n

\"timeline\"

\n", + "system": "" + }, + { + "instruction": "What's the difference between tf.Session() and tf.InteractiveSession()?", + "input": "", + "output": "

Mainly taken from official documentation:

\n\n
\n

The only difference with a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops.

\n
\n\n

This allows you to use an interactive context, like a shell, as it avoids having to pass an explicit Session object to run ops:

\n\n
sess = tf.InteractiveSession()\na = tf.constant(5.0)\nb = tf.constant(6.0)\nc = a * b\n# We can just use 'c.eval()' without passing 'sess'\nprint(c.eval())\nsess.close()\n
\n\n

In other words, InteractiveSession requires less typing, as it allows you to run ops without needing to constantly refer to the session object.

\n", + "system": "" + }, + { + "instruction": "Gradient Descent vs Adagrad vs Momentum in TensorFlow", + "input": "", + "output": "

Here is a brief explanation based on my understanding:

\n\n\n\n

I would say that SGD, Momentum and Nesterov are inferior to the last 3.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: None of the MLIR optimization passes are enabled (registered 1)", + "input": "", + "output": "

MLIR is being used as another solution to implementing and optimizing Tensorflow logic. This informative message is benign and is saying MLIR was not being used. This is expected as in TF 2.3, the MLIR based implementation is still being developed and proven, so end users are generally not expected to use the MLIR implementation and are instead expected to use the non-MLIR feature complete implementation.

\n

Update: still experimental on version 2.9.1. On the docs it is written:

\n
\n

DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.

\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'tensorflow' has no attribute 'ConfigProto'", + "input": "", + "output": "

ConfigProto disappeared in tf 2.0, so an elegant solution is:

\n\n
import tensorflow as tf\n
\n\n

and then replace:

\n\n

tf.ConfigProto by tf.compat.v1.ConfigProto

\n\n

In fact, the compatibility module built into 2.0 to access the tf 1.x API, tf.compat.v1, is really helpful.

\n\n

Useful link: \nMigrate your TensorFlow 1.x code to TensorFlow 2.x:\nhttps://www.tensorflow.org/guide/migrate

\n", + "system": "" + }, + { + "instruction": "Installing Python3.6 alongside Python3.7 on Mac", + "input": "", + "output": "

Try using brew, for example, if you are already using Python 3:

\n\n
$ brew unlink python\n
\n\n

Then install python 3.6.5:

\n\n
$ brew install --ignore-dependencies https://raw.githubusercontent.com/Homebrew/homebrew-core/f2a764ef944b1080be64bd88dca9a1d80130c558/Formula/python.rb\n
\n\n

To get back to python 3.7.4_1 use:

\n\n
$ brew switch python 3.7.4_1\n
\n\n

And if need 3.6 again switch with:

\n\n
$ brew switch python 3.6.5_1\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: How do I convert a EagerTensor into a numpy array?", + "input": "", + "output": "

There is a .numpy() function which you can use; alternatively, you could also do numpy.array(y). For example:

\n
import tensorflow as tf\nimport numpy as np\n\ntf.enable_eager_execution()\n\nx = tf.constant([1., 2.])\nprint(type(x))            # <type 'EagerTensor'>\nprint(type(x.numpy()))    # <type 'numpy.ndarray'>\nprint(type(np.array(x)))  # <type 'numpy.ndarray'>\n
\n

See the section in the eager execution guide.

\n", + "system": "" + }, + { + "instruction": "How to count total number of trainable parameters in a tensorflow model?", + "input": "", + "output": "

Loop over the shape of every variable in tf.trainable_variables().

\n\n
total_parameters = 0\nfor variable in tf.trainable_variables():\n    # shape is an array of tf.Dimension\n    shape = variable.get_shape()\n    print(shape)\n    print(len(shape))\n    variable_parameters = 1\n    for dim in shape:\n        print(dim)\n        variable_parameters *= dim.value\n    print(variable_parameters)\n    total_parameters += variable_parameters\nprint(total_parameters)\n
\n\n

Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/
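The same arithmetic can be checked without TensorFlow. A minimal sketch (the layer shapes here are hypothetical, chosen to mimic a 784 → 128 → 10 dense network) that mirrors the loop above:

```python
def count_parameters(shapes):
    """Sum of products of the dimensions of each variable shape,
    mirroring the loop over tf.trainable_variables() above."""
    total = 0
    for shape in shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total

# Two weight matrices and two bias vectors of a 784 -> 128 -> 10 network.
shapes = [(784, 128), (128,), (128, 10), (10,)]
print(count_parameters(shapes))  # 101770
```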

\n", + "system": "" + }, + { + "instruction": "TensorFlow: InternalError: Blas SGEMM launch failed", + "input": "", + "output": "

Old question, but may help others.
\nTry to close interactive sessions active in other processes (if IPython Notebook - just restart kernels). This helped me!

\nAdditionally, I use this code to close local sessions in this kernel during experiments:

\n\n
if 'session' in locals() and session is not None:\n    print('Close interactive session')\n    session.close()\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Tensorboard default port", + "input": "", + "output": "

In fact there is an option to change the default port ...

\n\n
tensorboard --logdir=/tmp  --port=8008\n
\n", + "system": "" + }, + { + "instruction": "FailedPreconditionError: Attempting to use uninitialized in Tensorflow", + "input": "", + "output": "

The FailedPreconditionError arises because the program is attempting to read a variable (named \"Variable_1\") before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their \"initializer\" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:

\n\n
tf.initialize_all_variables().run()\n
\n\n

Note that this answer assumes that, as in the question, you are using tf.InteractiveSession, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use tf.Session, and initialize as follows:

\n\n
init_op = tf.initialize_all_variables()\n\nsess = tf.Session()\nsess.run(init_op)\n
\n", + "system": "" + }, + { + "instruction": "How to export Keras .h5 to tensorflow .pb?", + "input": "", + "output": "

Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. Here is a blog post explaining how to do it using the utility script freeze_graph.py included in TensorFlow, which is the \"typical\" way it is done.

\n\n

However, I personally find it a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this:

\n\n
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):\n    \"\"\"\n    Freezes the state of a session into a pruned computation graph.\n\n    Creates a new computation graph where variable nodes are replaced by\n    constants taking their current value in the session. The new graph will be\n    pruned so subgraphs that are not necessary to compute the requested\n    outputs are removed.\n    @param session The TensorFlow session to be frozen.\n    @param keep_var_names A list of variable names that should not be frozen,\n                          or None to freeze all the variables in the graph.\n    @param output_names Names of the relevant graph outputs.\n    @param clear_devices Remove the device directives from the graph for better portability.\n    @return The frozen graph definition.\n    \"\"\"\n    graph = session.graph\n    with graph.as_default():\n        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))\n        output_names = output_names or []\n        output_names += [v.op.name for v in tf.global_variables()]\n        input_graph_def = graph.as_graph_def()\n        if clear_devices:\n            for node in input_graph_def.node:\n                node.device = \"\"\n        frozen_graph = tf.graph_util.convert_variables_to_constants(\n            session, input_graph_def, output_names, freeze_var_names)\n        return frozen_graph\n
\n\n

It is inspired by the implementation of freeze_graph.py. The parameters are similar to the script too. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variable not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs that you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like:

\n\n
from keras import backend as K\n\n# Create, compile and train model...\n\nfrozen_graph = freeze_session(K.get_session(),\n                              output_names=[out.op.name for out in model.outputs])\n
\n\n

Then you can write the graph to a file as usual with tf.train.write_graph:

\n\n
tf.train.write_graph(frozen_graph, \"some_directory\", \"my_model.pb\", as_text=False)\n
\n", + "system": "" + }, + { + "instruction": ""synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'." problem in TensorFlow", + "input": "", + "output": "

If you're using TF 2.0 a quick solution would be to downgrade your numpy to 1.16.4. (I used 1.17 and received the same warning messages).

\n\n
1. pip uninstall numpy \n2. pip install numpy==1.16.4\n
\n\n

See here (thanks to ymodak)

\n", + "system": "" + }, + { + "instruction": "pip3: command not found", + "input": "", + "output": "

You would need to install pip3.

\n

On Linux, run first sudo apt update. Then the command would be: sudo apt install python3-pip

\nOn Mac, using brew, first brew install python3
\nThen brew postinstall python3

\n

Try calling pip3 -V to see if it worked.

\n", + "system": "" + }, + { + "instruction": "Keras model.summary() object to string", + "input": "", + "output": "

With my version of Keras (2.0.6) and Python (3.5.0), this works for me:

\n\n
# Create an empty model\nfrom keras.models import Sequential\nmodel = Sequential()\n\n# Open the file\nwith open(filename + 'report.txt','w') as fh:\n    # Pass the file handle in as a lambda function to make it callable\n    model.summary(print_fn=lambda x: fh.write(x + '\\n'))\n
\n\n

This outputs the following lines to the file:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nTotal params: 0\nTrainable params: 0\nNon-trainable params: 0\n_________________________________________________________________\n
\n", + "system": "" + }, + { + "instruction": "How to "reset" tensorboard data after killing tensorflow instance", + "input": "", + "output": "

Note: The solution you've posted (erase TensorBoard's log files and kill the process) will work, but it isn't preferred, because it destroys historical information about your training.

\n\n

Instead, you can have each new training job write to a new subdirectory (of your top-level log directory). Then, TensorBoard will consider each job a new \"run\" and will create a nice comparison view so you can see how the training differed between iterations of your model.

\n\n

The following is an example from https://www.tensorflow.org/tensorboard/get_started:

\n\n
model = create_model()\n...\nmodel.compile(...)\n\nlog_dir = \"logs/fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)\n\nmodel.fit(..., callbacks=[tensorboard_callback])\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between variable_scope and name_scope?", + "input": "", + "output": "

I had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:

\n\n
import tensorflow as tf\ndef scoping(fn, scope1, scope2, vals):\n    with fn(scope1):\n        a = tf.Variable(vals[0], name='a')\n        b = tf.get_variable('b', initializer=vals[1])\n        c = tf.constant(vals[2], name='c')\n        with fn(scope2):\n            d = tf.add(a * b, c, name='res')\n\n        print '\\n  '.join([scope1, a.name, b.name, c.name, d.name]), '\\n'\n    return d\n\nd1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])\nd2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])\n\nwith tf.Session() as sess:\n    writer = tf.summary.FileWriter('logs', sess.graph)\n    sess.run(tf.global_variables_initializer())\n    print sess.run([d1, d2])\n    writer.close()\n
\n\n

Here I create a function that creates some variables and constants and groups them in scopes (depending on the type I provided). In this function I also print the names of all the variables. After that I execute the graph to get the resulting values and save event files to investigate them in TensorBoard. If you run this, you will get the following:

\n\n
scope_vars\n  scope_vars/a:0\n  scope_vars/b:0\n  scope_vars/c:0\n  scope_vars/res/res:0 \n\nscope_name\n  scope_name/a:0\n  b:0\n  scope_name/c:0\n  scope_name/res/res:0 \n
\n\n

You see a similar pattern if you open TB (as you see, b is outside of the scope_name rectangle):\n\"enter

\n\n
\n\n

This gives you the answer:

\n\n

Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops, constants. On the other hand tf.name_scope() ignores variables created with tf.get_variable() because it assumes that you know which variable and in which scope you wanted to use.

\n\n

A good documentation on Sharing variables tells you that

\n\n
\n

tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().

\n
\n\n

The same documentation provides more details on how variable scope works and when it is useful.

\n", + "system": "" + }, + { + "instruction": "Tensorflow doesn't seem to see my gpu", + "input": "", + "output": "

I came across this same issue in jupyter notebooks. This could be an easy fix.

\n
$ pip uninstall tensorflow\n$ pip install tensorflow-gpu\n
\n

You can check if it worked with:

\n
tf.test.gpu_device_name()\n
\n

Update 2020

\n

It seems like tensorflow 2.0+ comes with GPU capabilities, therefore\npip install tensorflow should be enough.

\n", + "system": "" + }, + { + "instruction": "Tensorflow One Hot Encoder?", + "input": "", + "output": "

As of TensorFlow 0.8, there is now a native one-hot op, tf.one_hot that can convert a set of sparse labels to a dense one-hot representation. This is in addition to tf.nn.sparse_softmax_cross_entropy_with_logits, which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot.

\n\n

Previous answer, in case you want to do it the old way:\n@Salvador's answer is correct - there (used to be) no native op to do it. Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators:

\n\n
num_labels = 10\n\n# label_batch is a tensor of numeric labels to process\n# 0 <= label < num_labels\n\nsparse_labels = tf.reshape(label_batch, [-1, 1])\nderived_size = tf.shape(label_batch)[0]\nindices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1])\nconcated = tf.concat(1, [indices, sparse_labels])\noutshape = tf.pack([derived_size, num_labels])\nlabels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0)\n
\n\n

The output, labels, is a one-hot matrix of batch_size x num_labels.

\n\n

Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the tf.nn.sparse_softmax_cross_entropy_with_logits op, which in some cases can let you do training without needing to convert to a one-hot encoding.

\n\n

Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num_labels component. If you don't need a dynamic batch size with derived_size, this can be simplified.

\n\n

Edited 2016-02-12 to change the assignment of outshape per comment below.
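For reference, here is a minimal plain-Python sketch (no TensorFlow needed) of what one row of the resulting batch_size x num_labels one-hot matrix looks like:

```python
def one_hot(label, num_labels):
    """Return a dense one-hot list of length num_labels with a 1.0
    at position `label` and 0.0 elsewhere."""
    return [1.0 if i == label else 0.0 for i in range(num_labels)]

# A batch of three sparse labels, densified to 4 classes.
labels = [0, 2, 1]
dense = [one_hot(l, 4) for l in labels]
print(dense)
# [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
```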

\n", + "system": "" + }, + { + "instruction": "Split a dataset created by Tensorflow dataset API in to Train and Test?", + "input": "", + "output": "

Assuming you have all_dataset variable of tf.data.Dataset type:

\n\n
test_dataset = all_dataset.take(1000) \ntrain_dataset = all_dataset.skip(1000)\n
\n\n

Test dataset now has first 1000 elements and the rest goes for training.
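The take/skip semantics mirror plain list slicing, which may help as a mental model (a sketch without TensorFlow, using a toy 10-element dataset):

```python
all_items = list(range(10))
split = 3

test_items = all_items[:split]   # like all_dataset.take(split)
train_items = all_items[split:]  # like all_dataset.skip(split)

print(test_items)   # [0, 1, 2]
print(train_items)  # [3, 4, 5, 6, 7, 8, 9]
```

Note that with a real tf.data.Dataset you would usually shuffle before splitting, so the first 1000 elements are not biased by the original ordering.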

\n", + "system": "" + }, + { + "instruction": "Keras - Difference between categorical_accuracy and sparse_categorical_accuracy", + "input": "", + "output": "

So in categorical_accuracy you need to specify your target (y) as a one-hot encoded vector (e.g. in case of 3 classes, when the true class is the second class, y should be (0, 1, 0)). In sparse_categorical_accuracy you should only provide an integer for the true class (in the case from the previous example it would be 1, as class indexing is 0-based).
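A small plain-Python sketch of the two label formats and how they relate — the argmax of the one-hot vector recovers the sparse integer label:

```python
# One-hot target for the second class out of 3 (categorical_accuracy format)
y_onehot = [0, 1, 0]

# Integer target (sparse_categorical_accuracy format)
y_sparse = 1

# The two encodings describe the same class:
recovered = max(range(len(y_onehot)), key=lambda i: y_onehot[i])
print(recovered == y_sparse)  # True
```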

\n", + "system": "" + }, + { + "instruction": "In TensorFlow, what is tf.identity used for?", + "input": "", + "output": "

After some stumbling I think I've noticed a single use case that fits all the examples I've seen. If there are other use cases, please elaborate with an example.

\n\n

Use case:

\n\n

Suppose you'd like to run an operator every time a particular Variable is evaluated. For example, say you'd like to add one to x every time the variable y is evaluated. It might seem like this will work:

\n\n
x = tf.Variable(0.0)\nx_plus_1 = tf.assign_add(x, 1)\n\nwith tf.control_dependencies([x_plus_1]):\n    y = x\ninit = tf.initialize_all_variables()\n\nwith tf.Session() as session:\n    init.run()\n    for i in xrange(5):\n        print(y.eval())\n
\n\n

It doesn't: it'll print 0, 0, 0, 0, 0. Instead, it seems that we need to add a new node to the graph within the control_dependencies block. So we use this trick:

\n\n
x = tf.Variable(0.0)\nx_plus_1 = tf.assign_add(x, 1)\n\nwith tf.control_dependencies([x_plus_1]):\n    y = tf.identity(x)\ninit = tf.initialize_all_variables()\n\nwith tf.Session() as session:\n    init.run()\n    for i in xrange(5):\n        print(y.eval())\n
\n\n

This works: it prints 1, 2, 3, 4, 5.

\n\n

If in the CIFAR-10 tutorial we dropped tf.identity, then loss_averages_op would never run.

\n", + "system": "" + }, + { + "instruction": "Module 'tensorflow' has no attribute 'contrib'", + "input": "", + "output": "

tf.contrib has moved out of TF starting TF 2.0 alpha.
\nTake a look at these tf 2.0 release notes https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0
\nYou can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script\nhttps://www.tensorflow.org/alpha/guide/upgrade

\n", + "system": "" + }, + { + "instruction": "tf.data.Dataset: how to get the dataset size (number of elements in an epoch)?", + "input": "", + "output": "

len(list(dataset)) works in eager mode, although that's obviously not a good general solution.

\n", + "system": "" + }, + { + "instruction": "Why the 6 in relu6?", + "input": "", + "output": "

From this reddit thread:

\n\n
\n

This is useful in making the networks ready for fixed-point inference.\n If you unbound the upper limit, you lose too many bits to the Q part\n of a Q.f number. Keeping the ReLUs bounded by 6 will let them take a\n max of 3 bits (upto 8) leaving 4/5 bits for .f

\n
\n\n

It seems, then, that 6 is just an arbitrary value chosen according to the number of bits you want to be able to compress your network's trained parameters into.\nAs for why only the version with value 6 is implemented, I assume it's because that's the value that fits best in 8 bits, which is probably the most common use case.
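Mathematically, relu6 is just the ReLU clipped at 6; a minimal plain-Python sketch:

```python
def relu6(x):
    """min(max(x, 0), 6) -- ReLU with the output bounded above at 6."""
    return min(max(x, 0.0), 6.0)

# Negative inputs clamp to 0, large inputs clamp to 6.
print([relu6(v) for v in (-2.0, 3.5, 10.0)])  # [0.0, 3.5, 6.0]
```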

\n", + "system": "" + }, + { + "instruction": "Unbalanced data and weighted cross entropy", + "input": "", + "output": "

Note that weighted_cross_entropy_with_logits is the weighted variant of sigmoid_cross_entropy_with_logits. Sigmoid cross entropy is typically used for binary classification. Yes, it can handle multiple labels, but sigmoid cross entropy basically makes a (binary) decision on each of them -- for example, for a face recognition net, those (not mutually exclusive) labels could be \"Does the subject wear glasses?\", \"Is the subject female?\", etc.

\n\n

In binary classification(s), each output channel corresponds to a binary (soft) decision. Therefore, the weighting needs to happen within the computation of the loss. This is what weighted_cross_entropy_with_logits does, by weighting one term of the cross-entropy over the other.

\n\n

In mutually exclusive multiclass classification, we use softmax_cross_entropy_with_logits, which behaves differently: each output channel corresponds to the score of a class candidate. The decision comes after, by comparing the respective outputs of each channel.

\n\n

Weighting in before the final decision is therefore a simple matter of modifying the scores before comparing them, typically by multiplication with weights. For example, for a ternary classification task,

\n\n
# your class weights\nclass_weights = tf.constant([[1.0, 2.0, 3.0]])\n# deduce weights for batch samples based on their true label\nweights = tf.reduce_sum(class_weights * onehot_labels, axis=1)\n# compute your (unweighted) softmax cross entropy loss\nunweighted_losses = tf.nn.softmax_cross_entropy_with_logits(onehot_labels, logits)\n# apply the weights, relying on broadcasting of the multiplication\nweighted_losses = unweighted_losses * weights\n# reduce the result to get your final loss\nloss = tf.reduce_mean(weighted_losses)\n
\n\n

You could also rely on tf.losses.softmax_cross_entropy to handle the last three steps.

\n\n

In your case, where you need to tackle data imbalance, the class weights could indeed be inversely proportional to their frequency in your train data. Normalizing them so that they sum up to one or to the number of classes also makes sense.

\n\n

Note that in the above, we penalized the loss based on the true label of the samples. We could also have penalized the loss based on the estimated labels by simply defining

\n\n
weights = class_weights\n
\n\n

and the rest of the code need not change thanks to broadcasting magic.

\n\n

In the general case, you would want weights that depend on the kind of error you make. In other words, for each pair of labels X and Y, you could choose how to penalize choosing label X when the true label is Y. You end up with a whole prior weight matrix, which results in weights above being a full (num_samples, num_classes) tensor. This goes a bit beyond what you want, but it might be useful to know nonetheless that only your definition of the weight tensor need to change in the code above.
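As a sketch of the "inversely proportional to frequency" idea mentioned above (plain Python; the per-class sample counts are made up for illustration), normalizing so that the weights sum to the number of classes:

```python
def inverse_frequency_weights(class_counts):
    """Weights proportional to 1/frequency, normalized so that
    they sum to the number of classes."""
    inv = [1.0 / c for c in class_counts]
    scale = len(class_counts) / sum(inv)
    return [w * scale for w in inv]

# e.g. a ternary task with 100, 50 and 10 training samples per class:
# the rare class gets the largest weight.
weights = inverse_frequency_weights([100, 50, 10])
print([round(w, 3) for w in weights])  # [0.231, 0.462, 2.308]
```

The resulting list could be dropped directly into the class_weights constant from the snippet above.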

\n", + "system": "" + }, + { + "instruction": "TensorFlow - Importing data from a TensorBoard TFEvent file?", + "input": "", + "output": "

As Fabrizio says, TensorBoard is a great tool for visualizing the contents of your summary logs. However, if you want to perform a custom analysis, you can use tf.train.summary_iterator() function to loop over all of the tf.Event and tf.Summary protocol buffers in the log:

\n\n
for summary in tf.train.summary_iterator(\"/path/to/log/file\"):\n    # Perform custom processing in here.\n
\n\n

UPDATE for tf2:

\n\n
from tensorflow.python.summary.summary_iterator import summary_iterator\n
\n\n

You need to import it, that module level is not currently imported by default. On 2.0.0-rc2

\n", + "system": "" + }, + { + "instruction": "How to add if condition in a TensorFlow graph?", + "input": "", + "output": "

You're correct that the if statement doesn't work here, because the condition is evaluated at graph construction time, whereas presumably you want the condition to depend on the value fed to the placeholder at runtime. (In fact, it will always take the first branch, because condition > 0 evaluates to a Tensor, which is \"truthy\" in Python.)

\n\n

To support conditional control flow, TensorFlow provides the tf.cond() operator, which evaluates one of two branches, depending on a boolean condition. To show you how to use it, I'll rewrite your program so that condition is a scalar tf.int32 value for simplicity:

\n\n
x = tf.placeholder(tf.float32, shape=[None, ins_size**2*3], name=\"x_input\")\ncondition = tf.placeholder(tf.int32, shape=[], name=\"condition\")\nW = tf.Variable(tf.zeros([ins_size**2 * 3, label_option]), name=\"weights\")\nb = tf.Variable(tf.zeros([label_option]), name=\"bias\")\n\ny = tf.cond(condition > 0, lambda: tf.matmul(x, W) + b, lambda: tf.matmul(x, W) - b)\n
\n", + "system": "" + }, + { + "instruction": "How do I use TensorFlow GPU?", + "input": "", + "output": "

Follow this Tensorflow GPU tutorial. I did it and it works perfectly.

\n

Attention! Install version 9.0! Newer versions are not supported by Tensorflow-gpu.

\n

Steps:

\n
    \n
  1. Uninstall your old tensorflow
  2. \n
  3. Install tensorflow-gpu pip install tensorflow-gpu
  4. \n
  5. Install Nvidia Graphics Card & Drivers (you probably already have)
  6. \n
  7. Download & Install CUDA
  8. \n
  9. Download & Install cuDNN
  10. \n
  11. Verify by simple program
  12. \n
\n
from tensorflow.python.client import device_lib \nprint(device_lib.list_local_devices())\n
\n", + "system": "" + }, + { + "instruction": "Nvidia Cudatoolkit vs Conda Cudatoolkit", + "input": "", + "output": "

If using anaconda to install tensorflow-gpu, yes it will install cuda and cudnn for you in same conda environment as tensorflow-gpu. All you need to install yourself is the latest nvidia-driver (so that it works with the latest CUDA level and all older CUDA levels you use.)

\n\n

This has many advantages over the pip install tensorflow-gpu method:

\n\n
    \n
  1. Anaconda will always install the CUDA and CuDNN version that the TensorFlow code was compiled to use.
  2. \n
  3. You can have multiple conda environments with different levels of TensorFlow, CUDA, and CuDNN and just use conda activate to switch between them.
  4. \n
  5. You don't have to deal with installing CUDA and cuDNN manually at the system-wide level.
  6. \n
\n\n

The disadvantage when compared to pip install tensorflow-gpu, is the latest version of tensorflow is added to pypi weeks before Anaconda is able to update the conda recipe and publish their builds of the latest TensorFlow version.

\n", + "system": "" + }, + { + "instruction": ""Could not interpret optimizer identifier" error in Keras", + "input": "", + "output": "

The reason is that you are using the tensorflow.python.keras API for the model and layers, and keras.optimizers for SGD. They are two different Keras versions: TensorFlow's and pure Keras. They cannot work together. You have to change everything to one version. Then it should work.

\n", + "system": "" + }, + { + "instruction": "How do I check if keras is using gpu version of tensorflow?", + "input": "", + "output": "

You are using the GPU version. You can list the available tensorflow devices with (also check this question):

\n\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices()) # list of DeviceAttributes\n
\n\n

EDIT:

\n\n

With tensorflow >= 1.4 you can run the following function:

\n\n
import tensorflow as tf\ntf.test.is_gpu_available() # True/False\n\n# Or only check for gpu's with cuda support\ntf.test.is_gpu_available(cuda_only=True) \n
\n\n

EDIT 2:

\n\n

The above function is deprecated in tensorflow > 2.1. Instead you should use the following function:

\n\n
import tensorflow as tf\ntf.config.list_physical_devices('GPU')\n
\n\n
\n\n

NOTE:

\n\n

In your case both the cpu and gpu are available, if you use the cpu version of tensorflow the gpu will not be listed. In your case, without setting your tensorflow device (with tf.device(\"..\")), tensorflow will automatically pick your gpu!

\n\n

In addition, your sudo pip3 list clearly shows you are using tensorflow-gpu. If you had the tensorflow cpu version, the name would be something like tensorflow(1.1.0).

\n\n

Check this issue for information about the warnings.

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'", + "input": "", + "output": "

This function is deprecated. \nUse tf.compat.v1.reset_default_graph() instead.

\n\n

Update\nThis is not the only function to be out of date. Check out this answer for release notes and a conversion script.

\n", + "system": "" + }, + { + "instruction": "Convert Keras model to C++", + "input": "", + "output": "

To answer my own question and have a solution - I wrote a plain c++ solution called keras2cpp (its code available on github).

\n\n

In this solution you store the network architecture (in json) and weights (in hdf5). Then you can dump the network to a plain text file with the provided script. You can use the obtained text file with the network in pure c++ code. There are no dependencies on python libraries or hdf5. It should work for the theano and tensorflow backends.

\n", + "system": "" + }, + { + "instruction": "Remove nodes from graph or reset entire default graph", + "input": "", + "output": "

Update 11/2/2016

\n\n

tf.reset_default_graph()

\n\n

Old stuff

\n\n

There's reset_default_graph, but it's not part of the public API (I think it should be; does someone want to file an issue on GitHub?)

\n\n

My work-around to reset things is this:

\n\n
from tensorflow.python.framework import ops\nops.reset_default_graph()\nsess = tf.InteractiveSession()\n
\n", + "system": "" + }, + { + "instruction": "What's the difference between scikit-learn and tensorflow? Is it possible to use them together?", + "input": "", + "output": "

Tensorflow is a library for constructing neural networks. scikit-learn contains ready-to-use algorithms. TF can work with a variety of data types: tabular, text, images, audio. scikit-learn is intended to work with tabular data.

\n

Yes, you can use both packages. But if you need only a classic multi-layer perceptron implementation, then the MLPClassifier and MLPRegressor available in scikit-learn are a very good choice. I have run a comparison of the MLP implemented in TF vs scikit-learn: there weren't significant differences in accuracy, and the scikit-learn MLP works about 2 times faster than TF on CPU. You can read the details of the comparison in my blog post.

\n

Below the scatter plots of performance comparison:

\n

\"Tensorflow

\n

\"Tensorflow

\n", + "system": "" + }, + { + "instruction": "How to define max_queue_size, workers and use_multiprocessing in keras fit_generator()?", + "input": "", + "output": "

Q_0:

\n\n
\n

Question: Does this refer to how many batches are prepared on CPU? How is it related to workers? How to define it optimally?

\n
\n\n

From the link you posted, you can learn that your CPU keeps creating batches until the queue reaches the maximum queue size or training stops. You want to have batches ready for your GPU to \"take\" so that the GPU doesn't have to wait for the CPU. \nAn ideal queue size is one large enough that your GPU always runs near its maximum and never has to wait for the CPU to prepare new batches.

\n\n
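The interplay between max_queue_size and workers can be sketched with Python's own queue module. This is a toy stand-in for what Keras does internally, not the Keras API itself; all names below are illustrative:

```python
import queue
import threading

MAX_QUEUE_SIZE = 10      # plays the role of max_queue_size
NUM_WORKERS = 2          # plays the role of workers
BATCHES_PER_WORKER = 5

batch_queue = queue.Queue(maxsize=MAX_QUEUE_SIZE)

def producer(worker_id, n_batches):
    # A CPU-side worker keeps preparing batches; put() blocks once the
    # queue already holds MAX_QUEUE_SIZE batches.
    for i in range(n_batches):
        batch_queue.put((worker_id, i))

threads = [threading.Thread(target=producer, args=(w, BATCHES_PER_WORKER))
           for w in range(NUM_WORKERS)]
for t in threads:
    t.start()

# The "GPU" side consumes batches; it only waits when the queue runs empty.
consumed = [batch_queue.get() for _ in range(NUM_WORKERS * BATCHES_PER_WORKER)]
for t in threads:
    t.join()
print(len(consumed))  # 10
```

With a larger queue and more producers, the consumer spends less time blocked in get(), which is exactly the effect you want for the GPU.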

Q_1:

\n\n
\n

Question: How do I find out how many batches my CPU can/should generate in parallel?

\n
\n\n

If you see that your GPU is idling and waiting for batches, try to increase the number of workers and perhaps also the queue size.

\n\n

Q_2:

\n\n
\n

Do I have to set this parameter to true if I change workers? Does it relate to CPU usage?

\n
\n\n

Here is a practical analysis of what happens when you set it to True or False. Here is a recommendation to set it to False to prevent freezing (in my setup True works fine without freezing). Perhaps someone else can increase our understanding of the topic.

\n\n

In summary:

\n\n

Try not to have a sequential setup; enable the CPU to provide enough data for the GPU.\n\"\"

\n\n

Also: You could (should?) create several questions the next time, so that it is easier to answer them.

\n", + "system": "" + }, + { + "instruction": "Dimension of shape in conv1D", + "input": "", + "output": "

tl;dr: you need to reshape your data to have a spatial dimension for Conv1D to make sense:

\n
X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1)\n# now input can be set as\nmodel.add(Conv1D(2, 2, activation='relu', input_shape=(30, 1)))\n
\n

Essentially reshaping a dataset that looks like this:

\n
features    \n.8, .1, .3  \n.2, .4, .6  \n.7, .2, .1  \n
\n

To:

\n
[[.8\n.1\n.3],\n\n[.2,\n .4,\n .6\n ],\n\n[.7,\n .2,\n .1]]\n \n
\n
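The same transformation in plain Python (no numpy needed), mirroring what np.expand_dims(X, axis=2) does: each scalar feature becomes a length-1 vector, turning shape (rows, features) into (rows, features, 1):

```python
# A toy dataset of shape (3, 3): 3 examples, 3 features each
X = [
    [.8, .1, .3],
    [.2, .4, .6],
    [.7, .2, .1],
]

# Equivalent of np.expand_dims(X, axis=2): wrap each scalar feature in a
# length-1 list, giving shape (3, 3, 1)
X_expanded = [[[v] for v in row] for row in X]

print(len(X_expanded), len(X_expanded[0]), len(X_expanded[0][0]))  # 3 3 1
```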

Explanation and examples

\n

Normally convolution works over spatial dimensions. The kernel is "convolved" over the dimension producing a tensor. In the case of Conv1D, the kernel is passed over the 'steps' dimension of every example.

\n

You will see Conv1D used in NLP, where steps is the number of words in the sentence (padded to some fixed maximum length). In the example below, each sentence is padded to 4 words and each word is encoded as a vector of length 3.

\n

Here is an example sentence:

\n
jack   .1   .3   -.52   |\nis     .05  .8,  -.7    |<--- kernel is `convolving` along this dimension.\na      .5   .31  -.2    |\nboy    .5   .8   -.4   \\|/\n
\n

And the way we would set the input to the conv in this case:

\n
maxlen = 4\ninput_dim = 3\nmodel.add(Conv1D(2, 2, activation='relu', input_shape=(maxlen, input_dim)))\n
\n

In your case, you will treat the features as the spatial dimensions with each feature having length 1. (see below)

\n

Here would be an example from your dataset

\n
att1   .04    |\natt2   .05    |  < -- kernel convolving along this dimension\natt3   .1     |       notice the features have length 1. each\natt4   .5    \\|/      example has these 4 features.\n
\n

And we would set the Conv1D example as:

\n
maxlen = num_features = 4 # this would be 30 in your case\ninput_dim = 1 # since this is the length of _each_ feature (as shown above)\n\nmodel.add(Conv1D(2, 2, activation='relu', input_shape=(maxlen, input_dim)))\n
\n

As you can see, your dataset has to be reshaped into (569, 30, 1). To do so, use:

\n
X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1)\n# now input can be set as\nmodel.add(Conv1D(2, 2, activation='relu', input_shape=(30, 1)))\n
\n

Here is a full-fledged example that you can run (I'll use the Functional API)

\n
from keras.models import Model\nfrom keras.layers import Conv1D, Dense, MaxPool1D, Flatten, Input\nimport numpy as np\n\ninp =  Input(shape=(5, 1))\nconv = Conv1D(filters=2, kernel_size=2)(inp)\npool = MaxPool1D(pool_size=2)(conv)\nflat = Flatten()(pool)\ndense = Dense(1)(flat)\nmodel = Model(inp, dense)\nmodel.compile(loss='mse', optimizer='adam')\n\nprint(model.summary())\n\n# get some data\nX = np.expand_dims(np.random.randn(10, 5), axis=2)\ny = np.random.randn(10, 1)\n\n# fit model\nmodel.fit(X, y)\n
\n", + "system": "" + }, + { + "instruction": "tf.nn.conv2d vs tf.layers.conv2d", + "input": "", + "output": "

As GBY mentioned, they use the same implementation.

\n\n

There is a slight difference in the parameters.

\n\n

For tf.nn.conv2d:

\n\n
filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]\n
\n\n

For tf.layers.conv2d:

\n\n
filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).\n
\n\n

I would use tf.nn.conv2d when loading a pretrained model (example code: https://github.com/ry/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.

\n", + "system": "" + }, + { + "instruction": "How to set specific gpu in tensorflow?", + "input": "", + "output": "

There are 3 ways to achieve this:

\n\n
    \n
  1. Using the CUDA_VISIBLE_DEVICES environment variable. Setting CUDA_VISIBLE_DEVICES=\"1\" makes only device 1 visible, and setting CUDA_VISIBLE_DEVICES=\"0,1\" makes devices 0 and 1 visible. You can do this in python by having the line os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0,1\" after importing the os package.

  2. Using with tf.device('/gpu:2') when creating the graph. Then it will use GPU device 2 to run.

  3. Using config = tf.ConfigProto(device_count = {'GPU': 1}) and then sess = tf.Session(config=config). Note that device_count caps the number of GPU devices at 1 (i.e. TensorFlow will use a single GPU, device 0); it does not select device 1.
\n", + "system": "" + }, + { + "instruction": "TensorFlow - regularization with L2 loss, how to apply to all weights, not just last one?", + "input": "", + "output": "

A shorter and scalable way of doing this would be:

\n\n
vars   = tf.trainable_variables() \nlossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars ]) * 0.001\n
\n\n

This basically sums the l2_loss of all your trainable variables. You could also make a dictionary where you specify only the variables you want to add to your cost and use the second line above. Then you can add lossL2 with your softmax cross entropy value in order to calculate your total loss.

\n\n

Edit: As mentioned by Piotr Dabkowski, the code above will also regularise biases. This can be avoided by adding an if statement in the second line:

\n\n
lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars\n                    if 'bias' not in v.name ]) * 0.001\n
\n\n
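As a sanity check, here is the quantity that comprehension computes, in plain Python. The variable names below are made up for illustration; tf.nn.l2_loss(v) is sum(v**2) / 2:

```python
# Hypothetical trainable variables: two weight tensors and a bias vector
variables = {
    'dense/kernel': [0.5, -0.5, 1.0],
    'dense/bias':   [0.1, 0.2],
    'out/kernel':   [2.0],
}

def l2_loss(v):
    # Mirrors tf.nn.l2_loss: sum of squares divided by 2
    return sum(x * x for x in v) / 2

weight_decay = 0.001
lossL2 = sum(l2_loss(v) for name, v in variables.items()
             if 'bias' not in name) * weight_decay

print(lossL2)  # (0.75 + 2.0) * 0.001, i.e. about 0.00275; the bias is skipped
```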

This can be used to exclude other variables.

\n", + "system": "" + }, + { + "instruction": "Simple way to visualize a TensorFlow graph in Jupyter?", + "input": "", + "output": "

Here's a recipe I copied at some point from one of Alex Mordvintsev's deep dream notebooks:

\n\n
from IPython.display import clear_output, Image, display, HTML\nimport numpy as np    \n\ndef strip_consts(graph_def, max_const_size=32):\n    \"\"\"Strip large constant values from graph_def.\"\"\"\n    strip_def = tf.GraphDef()\n    for n0 in graph_def.node:\n        n = strip_def.node.add() \n        n.MergeFrom(n0)\n        if n.op == 'Const':\n            tensor = n.attr['value'].tensor\n            size = len(tensor.tensor_content)\n            if size > max_const_size:\n                tensor.tensor_content = \"<stripped %d bytes>\"%size\n    return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n    \"\"\"Visualize TensorFlow graph.\"\"\"\n    if hasattr(graph_def, 'as_graph_def'):\n        graph_def = graph_def.as_graph_def()\n    strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n    code = \"\"\"\n        <script>\n          function load() {{\n            document.getElementById(\"{id}\").pbtxt = {data};\n          }}\n        </script>\n        <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n        <div style=\"height:600px\">\n          <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n        </div>\n    \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n    iframe = \"\"\"\n        <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n    \"\"\".format(code.replace('\"', '&quot;'))\n    display(HTML(iframe))\n
\n\n

Then to visualize current graph

\n\n
show_graph(tf.get_default_graph().as_graph_def())\n
\n\n

If your graph is saved as pbtxt, you could do

\n\n
gdef = tf.GraphDef()\nfrom google.protobuf import text_format\ntext_format.Merge(open(\"tf_persistent.pbtxt\").read(), gdef)\nshow_graph(gdef)\n
\n\n

You'll see something like this

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Tensorflow NaN bug?", + "input": "", + "output": "

Actually, it turned out to be something stupid. I'm posting this in case anyone else would run into a similar error.

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))\n
\n\n

is actually a horrible way of computing the cross-entropy. In some samples, certain classes could be excluded with certainty after a while, resulting in y_conv=0 for that sample. That's normally not a problem since you're not interested in those, but in the way cross_entropy is written there, it yields 0*log(0) for that particular sample/class. Hence the NaN.

\n\n
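The failure mode is easy to reproduce with plain Python floats, independent of TensorFlow. Clipping the prediction to a small epsilon, as tf.clip_by_value does, keeps the term finite:

```python
import math

def cross_entropy_term(y_true, y_pred, eps=None):
    # One term of -sum(y_ * log(y_conv)); eps emulates tf.clip_by_value
    if eps is not None:
        y_pred = min(max(y_pred, eps), 1.0)
    # math.log(0.0) raises in Python, so mimic IEEE semantics: log(0) -> -inf
    log_p = math.log(y_pred) if y_pred > 0 else float('-inf')
    return -y_true * log_p

print(cross_entropy_term(0.0, 0.0))             # nan: 0 * -inf
print(cross_entropy_term(0.0, 0.0, eps=1e-10))  # 0.0 after clipping
```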

Replacing it with

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0)))\n
\n\n

solved all my problems.

\n", + "system": "" + }, + { + "instruction": "Tensorflow vs OpenCV", + "input": "", + "output": "

The main difference is that TensorFlow is a framework for machine learning, and OpenCV is a library for computer vision. It can be a good start to check the link below to get a grasp for the difference between framework and library: What is the difference between a framework and a library?

\n\n

You can do image recognition with TensorFlow. Though it is suited for more general problems as well, such as: classification, clustering and regression.

\n\n

I guess people downvoted because this question might be more relevant to: https://datascience.stackexchange.com/

\n", + "system": "" + }, + { + "instruction": "No module named 'tqdm'", + "input": "", + "output": "

You need to install the tqdm module; you can do it using python pip:

\n\n
pip install tqdm\n
\n\n

For more info, see tqdm.

\n", + "system": "" + }, + { + "instruction": "How to understand static shape and dynamic shape in TensorFlow?", + "input": "", + "output": "

Sometimes the shape of a tensor depends on a value that is computed at runtime. Let's take the following example, where x is defined as a tf.placeholder() vector with four elements:

\n
x = tf.placeholder(tf.int32, shape=[4])\nprint x.get_shape()\n# ==> '(4,)'\n
\n

The value of x.get_shape() is the static shape of x, and the (4,) means that it is a vector of length 4. Now let's apply the tf.unique() op to x

\n
y, _ = tf.unique(x)\nprint y.get_shape()\n# ==> '(?,)'\n
\n

The (?,) means that y is a vector of unknown length. Why is it unknown? tf.unique(x) returns the unique values from x, and the values of x are unknown because it is a tf.placeholder(), so it doesn't have a value until you feed it. Let's see what happens if you feed two different values:

\n
sess = tf.Session()\nprint sess.run(y, feed_dict={x: [0, 1, 2, 3]}).shape\n# ==> '(4,)'\nprint sess.run(y, feed_dict={x: [0, 0, 0, 0]}).shape\n# ==> '(1,)'\n
\n

Hopefully this makes it clear that a tensor can have a different static and dynamic shape. The dynamic shape is always fully defined\u2014it has no ? dimensions\u2014but the static shape can be less specific. This is what allows TensorFlow to support operations like tf.unique() and tf.dynamic_partition(), which can have variable-sized outputs, and are used in advanced applications.

\n
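A TensorFlow-free analogy of why the output length can't be known statically: the length of a deduplicated list depends on the values, not on the input's declared size:

```python
def unique(values):
    # Order-preserving deduplication, analogous to tf.unique()
    seen = []
    for v in values:
        if v not in seen:
            seen.append(v)
    return seen

# Same "static" input shape (a vector of 4 elements), different dynamic output shapes
print(len(unique([0, 1, 2, 3])))  # 4
print(len(unique([0, 0, 0, 0])))  # 1
```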

Finally, the tf.shape() op can be used to get the dynamic shape of a tensor and use it in a TensorFlow computation:

\n
z = tf.shape(y)\nprint sess.run(z, feed_dict={x: [0, 1, 2, 3]})\n# ==> [4]\nprint sess.run(z, feed_dict={x: [0, 0, 0, 0]})\n# ==> [1]\n
\n

Here's a schematic image showing both:\n\"enter

\n", + "system": "" + }, + { + "instruction": "Error running basic tensorflow example", + "input": "", + "output": "

From the path in your stack trace (/git/tensorflow/tensorflow/\u2026), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) pywrap_tensorflow library, which is installed in a different directory.

\n\n

A common solution is to cd out of the /git/tensorflow directory before starting python or ipython.

\n", + "system": "" + }, + { + "instruction": "ValueError: Shapes (None, 1) and (None, 2) are incompatible", + "input": "", + "output": "

I was facing the same problem.\nMy shapes were:

\n
shape of X (271, 64, 64, 3)\nshape of y (271,)\nshape of trainX (203, 64, 64, 3)\nshape of trainY (203, 1)\nshape of testX (68, 64, 64, 3)\nshape of testY (68, 1)\n
\n

and

\n
loss="categorical_crossentropy"\n
\n

I changed it to

\n
loss="sparse_categorical_crossentropy"\n
\n

and it worked like a charm for me.

\n", + "system": "" + }, + { + "instruction": "tf.data with multiple inputs / outputs in Keras", + "input": "", + "output": "

I'm not using Keras, but I would go with tf.data.Dataset.from_generator(), like:

\n\n
def _input_fn():\n  sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64)\n  sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64)\n  sent1 = np.reshape(sent1, (8, 1, 1))\n  sent2 = np.reshape(sent2, (8, 1, 1))\n\n  labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64)\n  labels = np.reshape(labels, (8, 1))\n\n  def generator():\n    for s1, s2, l in zip(sent1, sent2, labels):\n      yield {\"input_1\": s1, \"input_2\": s2}, l\n\n  dataset = tf.data.Dataset.from_generator(generator, output_types=({\"input_1\": tf.int64, \"input_2\": tf.int64}, tf.int64))\n  dataset = dataset.batch(2)\n  return dataset\n\n...\n\nmodel.fit(_input_fn(), epochs=10, steps_per_epoch=4)\n
\n\n

This generator can iterate over e.g. your text files / numpy arrays and yield an example on every call. In this example, I assume that the words of the sentences have already been converted to indices in the vocabulary.

\n\n

Edit:\nSince OP asked, it should also be possible with Dataset.from_tensor_slices():

\n\n
def _input_fn():\n  sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64)\n  sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64)\n  sent1 = np.reshape(sent1, (8, 1))\n  sent2 = np.reshape(sent2, (8, 1))\n\n  labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64)\n  labels = np.reshape(labels, (8))\n\n  dataset = tf.data.Dataset.from_tensor_slices(({\"input_1\": sent1, \"input_2\": sent2}, labels))\n  dataset = dataset.batch(2, drop_remainder=True)\n  return dataset\n
\n", + "system": "" + }, + { + "instruction": "How to check if keras tensorflow backend is GPU or CPU version?", + "input": "", + "output": "

You can also check using the Keras backend function:

\n\n
from keras import backend as K\nK.tensorflow_backend._get_available_gpus()\n
\n\n

I test this on Keras (2.1.1)

\n", + "system": "" + }, + { + "instruction": "TensorFlow: Blas GEMM launch failed", + "input": "", + "output": "

This worked for me on TensorFlow 2.1.0 (per: https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth)

\n
import tensorflow as tf\nphysical_devices = tf.config.list_physical_devices('GPU') \nfor device in physical_devices:\n    tf.config.experimental.set_memory_growth(device, True)\n
\n", + "system": "" + }, + { + "instruction": "Negative dimension size caused by subtracting 3 from 1 for 'Conv2D'", + "input": "", + "output": "

Your issue comes from the image_ordering_dim in keras.json.

\n\n

From Keras Image Processing doc:

\n\n
\n

dim_ordering: One of {\"th\", \"tf\"}. \"tf\" mode means that the images should have shape (samples, height, width, channels), \"th\" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be \"tf\".

\n
\n\n

Keras maps the convolution operation to the chosen backend (theano or tensorflow). However, both backends have made different choices for the ordering of the dimensions. If your image batch is of N images of HxW size with C channels, theano uses the NCHW ordering while tensorflow uses the NHWC ordering.

\n\n

Keras allows you to choose which ordering you prefer and will do the conversion to the chosen backend behind the scenes. If you choose image_ordering_dim=\"th\" it expects Theano-style ordering (NCHW, the one you have in your code), and if image_ordering_dim=\"tf\" it expects tensorflow-style ordering (NHWC).

\n\n

Since your image_ordering_dim is set to \"tf\", if you reshape your data to the tensorflow style it should work:

\n\n
X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)\nX_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)\n
\n\n

and

\n\n
input_shape=(img_cols, img_rows, 1)\n
\n", + "system": "" + }, + { + "instruction": "Is there a way to suppress the messages TensorFlow prints?", + "input": "", + "output": "

UPDATE (beyond 1.14): see my more thorough answer here (this is a dupe question anyway): https://stackoverflow.com/a/38645250/6557588

\n

In addition to Wintro's answer, you can also disable/suppress TensorFlow logs from the C side (i.e. the uglier ones starting with single characters: I, E, etc.). The open issue regarding logging has been updated to state that you can now control logging via the environmental variable TF_CPP_MIN_LOG_LEVEL. It defaults to 0 (all logs shown), but can be set to 1 to filter out INFO logs, 2 to additionally filter out WARNING logs, and 3 to additionally filter out ERROR logs. It appears to be in master now, and will likely be a part of future versions (i.e. versions after r0.11). See this page for more information. Here is an example of changing the verbosity using Python:

\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}\nimport tensorflow as tf\n
\n

You can set this environmental variable in the environment that you run your script in. For example, with bash this can be in the file ~/.bashrc, /etc/environment, /etc/profile, or in the actual shell as:

\n
TF_CPP_MIN_LOG_LEVEL=2 python my_tf_script.py\n
\n", + "system": "" + }, + { + "instruction": "tf.shape() get wrong shape in tensorflow", + "input": "", + "output": "

tf.shape(input, name=None) returns a 1-D integer tensor representing the shape of input.

\n\n

You're looking for x.get_shape(), which returns the TensorShape of the x variable.

\n\n

Update: I wrote an article to clarify the dynamic/static shapes in Tensorflow because of this answer: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/

\n", + "system": "" + }, + { + "instruction": "How to interpret Poolallocator messages in tensorflow?", + "input": "", + "output": "

TensorFlow has multiple memory allocators, for memory that will be used in different ways. Their behavior has some adaptive aspects.

\n\n

In your particular case, since you're using a GPU, there is a PoolAllocator for CPU memory that is pre-registered with the GPU for fast DMA. A tensor that is expected to be transferred from CPU to GPU, e.g., will be allocated from this pool.

\n\n

The PoolAllocators attempt to amortize the cost of calling a more expensive underlying allocator by keeping around a pool of allocated then freed chunks that are eligible for immediate reuse. Their default behavior is to grow slowly until the eviction rate drops below some constant. (The eviction rate is the proportion of free calls where we return an unused chunk from the pool to the underlying pool in order not to exceed the size limit.) In the log messages above, you see \"Raising pool_size_limit_\" lines that show the pool size growing. Assuming that your program actually has a steady state behavior with a maximum size collection of chunks it needs, the pool will grow to accommodate it, and then grow no more. It behaves this way rather than simply retaining all chunks ever allocated so that sizes needed only rarely, or only during program startup, are less likely to be retained in the pool.

\n\n
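A toy model of this behaviour (not TensorFlow's actual implementation): a pool keeps freed chunks for reuse up to a limit, and raises the limit while evictions are still happening, just like the \"Raising pool_size_limit_\" lines in the log:

```python
class ToyPoolAllocator:
    """Illustrative free-list pool: reuses freed chunks and grows its limit
    when it has to evict (return a chunk to the underlying allocator)."""

    def __init__(self, size_limit=2):
        self.size_limit = size_limit
        self.free_chunks = []
        self.underlying_allocs = 0  # calls to the "expensive" allocator

    def alloc(self):
        if self.free_chunks:
            return self.free_chunks.pop()   # cheap: reuse a pooled chunk
        self.underlying_allocs += 1
        return object()                     # expensive: real allocation

    def free(self, chunk):
        if len(self.free_chunks) >= self.size_limit:
            # Eviction: adaptively raise the limit for next time
            self.size_limit += 1
        else:
            self.free_chunks.append(chunk)

pool = ToyPoolAllocator()
chunks = [pool.alloc() for _ in range(4)]   # 4 expensive allocations
for c in chunks:
    pool.free(c)                            # one eviction; the limit grows
again = [pool.alloc() for _ in range(4)]
print(pool.underlying_allocs)               # fewer than 8: chunks were reused
```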

These messages should only be a cause for concern if you run out of memory. In such a case the log messages may help diagnose the problem. Note also that peak execution speed may only be attained after the memory pools have grown to the proper size.

\n", + "system": "" + }, + { + "instruction": "WARNING:tensorflow:sample_weight modes were coerced from ... to ['...']", + "input": "", + "output": "

This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this:

\n\n
return inputs, targets\n
\n\n

And now I just changed it to the following to make the warning go away:

\n\n
return inputs, targets, [None]\n
\n\n

I don't know if this is relevant, but my model uses 3 inputs, so my inputs variable is actually a list of 3 numpy arrays. targets is just a single numpy array.

\n\n

In any case, it's just a warning. The training works fine either way.

\n\n

Edit for TensorFlow 2.2:

\n\n

This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with AttributeError: 'NoneType' object has no attribute 'shape'. So undo the above fix when upgrading to 2.2.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input", + "input": "", + "output": "

Have you checked whether your training/testing data and training/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists.

\n", + "system": "" + }, + { + "instruction": "This model has not yet been built error on model.summary()", + "input": "", + "output": "

The error says what to do:

\n\n
\n

This model has not yet been built. Build the model first by calling build()

\n
\n\n
model.build(input_shape) # `input_shape` is the shape of the input data\n                         # e.g. input_shape = (None, 32, 32, 3)\nmodel.summary()\n
\n", + "system": "" + }, + { + "instruction": "Why can I not import Tensorflow.contrib I get an error of No module named 'tensorflow.python.saved", + "input": "", + "output": "

For anyone who is trying some old code from github with Tensorflow 1.x while having Tensorflow 2.0.x installed, please note that tf.contrib no longer exists in Tensorflow 2.0.x and its modules were moved.
\nPlease google the name of the module without the tf.contrib part to find its new location, then migrate your code accordingly by correcting the import statement.

\n\n

Hope this helped!

\n", + "system": "" + }, + { + "instruction": "Keras ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5", + "input": "", + "output": "

The problem is input_shape.

\n\n

It should actually contain 3 dimensions only. And internally keras will add the batch dimension making it 4.

\n\n

Since you probably used input_shape with 4 dimensions (batch included), keras is adding the 5th.

\n\n

You should use input_shape=(32,32,1).

\n", + "system": "" + }, + { + "instruction": "Tensorflow dense gradient explanation?", + "input": "", + "output": "

This warning is printed when a sparse tf.IndexedSlices object is implicitly converted to a dense tf.Tensor. This typically happens when one op (usually tf.gather()) backpropagates a sparse gradient, but the op that receives it does not have a specialized gradient function that can handle sparse gradients. As a result, TensorFlow automatically densifies the tf.IndexedSlices, which can have a devastating effect on performance if the tensor is large.

\n\n
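A rough picture of the densification cost, using a toy representation rather than the real tf.IndexedSlices class: a sparse gradient stores only the touched rows, while the dense form materialises every row of the variable:

```python
# Gradient of a tf.gather on a large embedding table: only 2 of
# 1_000_000 rows were touched, each of width 128.
num_rows, width = 1_000_000, 128
sparse_grad = {
    "indices": [3, 42],
    "values": [[0.1] * width, [0.2] * width],  # one gradient row per index
}

sparse_entries = len(sparse_grad["indices"]) * width
dense_entries = num_rows * width  # what implicit densification materialises

print(sparse_entries, dense_entries)  # 256 vs 128000000
```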

To fix this problem, you should try to ensure that the params input to tf.gather() (or the params inputs to tf.nn.embedding_lookup()) is a tf.Variable. Variables can receive the sparse updates directly, so no conversion is needed. Although tf.gather() (and tf.nn.embedding_lookup()) accept arbitrary tensors as inputs, this may lead to a more complicated backpropagation graph, resulting in implicit conversion.

\n", + "system": "" + }, + { + "instruction": "List of tensor names in graph in Tensorflow", + "input": "", + "output": "

The paper is not accurately reflecting the model. If you download the source from arxiv it has an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model.

\n\n

To answer your first question, sess.graph.get_operations() gives you a list of operations. For an op, op.name gives you the name and op.values() gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a \":0\" appended to it, so pool_3:0 is the tensor produced by the final pooling op.)

\n", + "system": "" + }, + { + "instruction": "How do I convert a directory of jpeg images to TFRecords file in tensorflow?", + "input": "", + "output": "

I hope this helps:

\n\n
import tensorflow as tf\nimport numpy as np\nfrom PIL import Image\n\nfilename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) # list of files to read\n\nreader = tf.WholeFileReader()\nkey, value = reader.read(filename_queue)\n\nmy_img = tf.image.decode_png(value) # use decode_png or decode_jpeg based on your files.\n\ninit_op = tf.initialize_all_variables()\nwith tf.Session() as sess:\n  sess.run(init_op)\n\n  # Start populating the filename queue.\n  coord = tf.train.Coordinator()\n  threads = tf.train.start_queue_runners(coord=coord)\n\n  for i in range(1): # length of your filename list\n    image = my_img.eval() # here is your image Tensor :)\n\n  print(image.shape)\n  Image.fromarray(np.asarray(image)).show()\n\n  coord.request_stop()\n  coord.join(threads)\n
\n\n

For getting all images as an array of tensors use the following code example.

\n\n

Github repo of ImageFlow

\n\n
\n\n

Update:

\n\n

In the previous answer I only showed how to read an image in TF format, not how to save it in TFRecords. For that you should use:

\n\n
def _int64_feature(value):\n  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))\n\n\ndef _bytes_feature(value):\n  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n# images and labels array as input\ndef convert_to(images, labels, name):\n  num_examples = labels.shape[0]\n  if images.shape[0] != num_examples:\n    raise ValueError(\"Images size %d does not match label size %d.\" %\n                     (images.shape[0], num_examples))\n  rows = images.shape[1]\n  cols = images.shape[2]\n  depth = images.shape[3]\n\n  filename = os.path.join(FLAGS.directory, name + '.tfrecords')\n  print('Writing', filename)\n  writer = tf.python_io.TFRecordWriter(filename)\n  for index in range(num_examples):\n    image_raw = images[index].tostring()\n    example = tf.train.Example(features=tf.train.Features(feature={\n        'height': _int64_feature(rows),\n        'width': _int64_feature(cols),\n        'depth': _int64_feature(depth),\n        'label': _int64_feature(int(labels[index])),\n        'image_raw': _bytes_feature(image_raw)}))\n    writer.write(example.SerializeToString())\n
\n\n

More info here

\n\n

And you read the data like this:

\n\n
# Remember to generate a file name queue of you 'train.TFRecord' file path\ndef read_and_decode(filename_queue):\n  reader = tf.TFRecordReader()\n  _, serialized_example = reader.read(filename_queue)\n  features = tf.parse_single_example(\n    serialized_example,\n    dense_keys=['image_raw', 'label'],\n    # Defaults are not specified since both keys are required.\n    dense_types=[tf.string, tf.int64])\n\n  # Convert from a scalar string tensor (whose single string has\n  image = tf.decode_raw(features['image_raw'], tf.uint8)\n\n  image = tf.reshape(image, [my_cifar.n_input])\n  image.set_shape([my_cifar.n_input])\n\n  # OPTIONAL: Could reshape into a 28x28 image and apply distortions\n  # here.  Since we are not applying any distortions in this\n  # example, and the next step expects the image to be flattened\n  # into a vector, we don't bother.\n\n  # Convert from [0, 255] -> [-0.5, 0.5] floats.\n  image = tf.cast(image, tf.float32)\n  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5\n\n  # Convert label from a scalar uint8 tensor to an int32 scalar.\n  label = tf.cast(features['label'], tf.int32)\n\n  return image, label\n
\n", + "system": "" + }, + { + "instruction": "How do I install TensorFlow's tensorboard?", + "input": "", + "output": "

The steps to install Tensorflow are here: https://www.tensorflow.org/install/

\n\n

For example, on Linux for CPU-only (no GPU), you would type this command:

\n\n
pip install -U pip\npip install tensorflow\n
\n\n

Since TensorFlow depends on TensorBoard, running the following command should not be necessary:

\n\n
pip install tensorboard\n
\n", + "system": "" + }, + { + "instruction": "What is the use of a *.pb file in TensorFlow and how does it work?", + "input": "", + "output": "

pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.

\n

Given a pb file, you can load it as follows:

\n
def load_pb(path_to_pb):\n    with tf.gfile.GFile(path_to_pb, "rb") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(graph_def, name='')\n        return graph\n
\n

Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with

\n
input = graph.get_tensor_by_name('input:0')\noutput = graph.get_tensor_by_name('output:0')\n
\n

and use regular TensorFlow routine like:

\n
sess.run(output, feed_dict={input: some_data})\n
\n", + "system": "" + }, + { + "instruction": "Why is the accuracy for my Keras model always 0 when training?", + "input": "", + "output": "

Your model seems to correspond to a regression model for the following reasons:

\n\n\n\n

However, the metric that you use - metrics=['accuracy'] - corresponds to a classification problem. If you want to do regression, remove metrics=['accuracy']. That is, use

\n\n
model.compile(optimizer='adam',loss='mean_squared_error')\n
\n\n

Here is a list of keras metrics for regression and classification (taken from this blog post):

\n\n
\n

Keras Regression Metrics

\n \n

\u2022Mean Squared Error: mean_squared_error, MSE or mse

\n \n

\u2022Mean Absolute Error: mean_absolute_error, MAE, mae

\n \n

\u2022Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE,\n mape

\n \n

\u2022Cosine Proximity: cosine_proximity, cosine

\n \n

Keras Classification Metrics

\n \n

\u2022Binary Accuracy: binary_accuracy, acc

\n \n

\u2022Categorical Accuracy: categorical_accuracy, acc

\n \n

\u2022Sparse Categorical Accuracy: sparse_categorical_accuracy

\n \n

\u2022Top k Categorical Accuracy: top_k_categorical_accuracy (requires you\n specify a k parameter)

\n \n

\u2022Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy\n (requires you specify a k parameter)

\n
\n", + "system": "" + }, + { + "instruction": "How to approach a number guessing game (with a twist) algorithm?", + "input": "", + "output": "

We'll combine graph-theory and probability:

\n\n

On the 1st day, build a set of all feasible solutions. Let's denote the solution set as A1={a1(1), a1(2),...,a1(n)}.

\n\n

On the second day you can again build the solutions set A2.

\n\n

Now, for each element in A2, you'll need to check if it can be reached from each element of A1 (given x% tolerance). If so - connect A2(n) to A1(m). If it can't be reached from any node in A1(m) - you can delete this node.

\n\n

Basically we are building a connected directed acyclic graph.

\n\n

All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).

\n\n

Sure, some nodes appear in more paths than other nodes. The probability for each node can be directly deduced based on the number of paths that contain this node.

\n\n

By assigning each node a weight equal to the number of paths that lead to it, there is no need to keep the entire history, only the previous day.

\n\n
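As a sketch of this bookkeeping (the day sets and the 10% tolerance below are made-up values purely for illustration), you can propagate each node's weight forward one day at a time:

```python
def propagate_weights(day_sets, tol):
    # day_sets: one list of feasible solutions per day
    # tol: relative tolerance (the x% from the problem)
    # weights[-1][v] == number of paths from day 1 that reach value v today
    weights = [{v: 1 for v in day_sets[0]}]
    for prev_set, cur_set in zip(day_sets, day_sets[1:]):
        layer = {}
        for v in cur_set:
            # connect v to every previous-day node it can be reached from
            w = sum(weights[-1][u] for u in prev_set if abs(v - u) <= tol * u)
            if w:  # nodes reachable from nothing are deleted
                layer[v] = w
        weights.append(layer)
    return weights

# Day 1 has one feasible solution; day 2 has three candidates.
layers = propagate_weights([[10], [10, 11, 20]], tol=0.1)
```

Here 10 and 11 are within 10% of the previous day's node, so they survive with weight 1 each, while 20 is unreachable and is dropped.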

Also, have a look at non-negative linear Diophantine equations - a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.

\n", + "system": "" + }, + { + "instruction": ""Could not load dynamic library 'libcudnn.so.8'" when running tensorflow on ubuntu 20.04", + "input": "", + "output": "

So I had the same issue. As the comments say, it's because you need to install CUDNN. For that, there is a guide here.

\n

But as I know already your distro (Ubuntu 20.04) I can give you the command lines already:

\n
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin\nsudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600\nexport last_public_key=3bf863cc # SEE NOTE BELOW\nsudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/${last_public_key}.pub\nsudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"\nsudo apt-get update\nsudo apt-get install libcudnn8\nsudo apt-get install libcudnn8-dev\n
\n

where ${last_public_key} is the last public key (file with .pub extension) published on https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/. (As of March 8th 2023, when this post was edited, it was 3bf863cc.)

\n

And if you want to install a specific version, the last 2 commands would be replaced with

\n
sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}\nsudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}\n
\n

where\n${cudnn_version} is for example 8.2.4.* and ${cuda_version} is for example cuda11.0 (as I see you have 11.0 in the nvidia-smi output; I have not tested it, as mine was 11.4, but I guess it should work OK).

\n", + "system": "" + }, + { + "instruction": "tensorflow on GPU: no known devices, despite cuda's deviceQuery returning a "PASS" result", + "input": "", + "output": "

From the log output, it looks like you are running the CPU version of TensorFlow (PyPI: tensorflow), and not the GPU version (PyPI: tensorflow-gpu). Running the GPU version would either log information about the CUDA libraries, or an error if it failed to load them or open the driver.

\n\n

If you run the following commands, you should be able to use the GPU in subsequent runs:

\n\n
$ pip uninstall tensorflow\n$ pip install tensorflow-gpu\n
\n", + "system": "" + }, + { + "instruction": "ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory", + "input": "", + "output": "

I downloaded cuda 10.0 from the following link\nCUDA 10.0

\n\n

Then I installed it using the following commands:

\n\n
sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb\nsudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub\nsudo apt-get update\nsudo apt-get install cuda-10-0\n
\n\n

I then installed cuDNN v7.5.0 for CUDA 10.0 by going to the link\nCUDNN download (you need to log on using an account).

\n\n

After choosing the correct version, I downloaded it via the link CUDNN power link.\nAfter that I added the include and lib files for cuDNN as follows:

\n\n
sudo cp -P cuda/targets/ppc64le-linux/include/cudnn.h /usr/local/cuda-10.0/include/\nsudo cp -P cuda/targets/ppc64le-linux/lib/libcudnn* /usr/local/cuda-10.0/lib64/\nsudo chmod a+r /usr/local/cuda-10.0/lib64/libcudnn*\n
\n\n

Then I modified .bashrc with the lib and path entries for CUDA 10.0; if you do not have them, you need to add them to .bashrc:

\n\n
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
\n\n

And after all these steps, I managed to import tensorflow in python3 successfully.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory", + "input": "", + "output": "

Try reducing the batch_size attribute to a small number (like 1, 2 or 3).\nExample:

\n\n
train_generator = data_generator.flow_from_directory(\n    'path_to_the_training_set',\n    target_size = (IMG_SIZE,IMG_SIZE),\n    batch_size = 2,\n    class_mode = 'categorical'\n    )\n
\n", + "system": "" + }, + { + "instruction": "What is a batch in TensorFlow?", + "input": "", + "output": "

Let's say you want to do digit recognition (MNIST) and you have defined the architecture of your network (a CNN). Now, you can start feeding the images from the training data one by one to the network, get the prediction (up to this step it's called doing inference), compute the loss, compute the gradient, and then update the parameters of your network (i.e. weights and biases) and then proceed with the next image ... This way of training the model is sometimes called online learning.

\n\n

But, you want the training to be faster, the gradients to be less noisy, and also take advantage of the power of GPUs which are efficient at doing array operations (nD-arrays to be specific). So, what you instead do is feed in say 100 images at a time (the choice of this size is up to you (i.e. it's a hyperparameter) and depends on your problem too). For instance, take a look at the below picture, (Author: Martin Gorner)

\n\n

\"Batch

\n\n

Here, since you're feeding in 100 images (28x28) at a time (instead of 1, as in the online training case), the batch size is 100. Oftentimes this is called the mini-batch size or simply a mini-batch.

\n\n
\n\n

Also the below picture: (Author: Martin Gorner)

\n\n

\"batch

\n\n

Now, the matrix multiplication will all just work out perfectly fine and you will also be taking advantage of the highly optimized array operations and hence achieve faster training time.

\n\n
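To see why one matrix multiply handles the whole batch, here is a NumPy sketch (the 784/10 dimensions assume flattened 28x28 MNIST images and 10 digit classes, as in the pictures above):

```python
import numpy as np

batch = np.random.rand(100, 784)  # a mini-batch of 100 flattened 28x28 images
W = np.random.rand(784, 10)       # weights of a single fully-connected layer
b = np.random.rand(10)            # biases, broadcast across the batch

logits = batch @ W + b            # one matmul yields predictions for all 100 images
print(logits.shape)               # (100, 10): one prediction row per image
```

Feeding 256 or 2048 images instead only changes the first dimension; the single highly-optimized matmul is what makes batching fast.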

If you observe the above picture, it doesn't matter that much whether you give 100 or 256 or 2048 or 10000 (batch size) images as long as it fits in the memory of your (GPU) hardware. You'll simply get that many predictions.

\n\n

But, please keep in mind that this batch size influences the training time, the error that you achieve, the gradient shifts, etc. There is no general rule of thumb as to which batch size works out best. Just try a few sizes and pick the one which works best for you. But try not to use large batch sizes, since they tend to overfit the data. People commonly use mini-batch sizes of 32, 64, 128, 256, 512, 1024 and 2048.

\n\n
\n\n

Bonus: To get a good grasp of how crazy you can go with this batch size, please give this paper a read: weird trick for parallelizing CNNs

\n", + "system": "" + }, + { + "instruction": "Adjust Single Value within Tensor -- TensorFlow", + "input": "", + "output": "

UPDATE: TensorFlow 1.0 includes a tf.scatter_nd() operator, which can be used to create delta below without creating a tf.SparseTensor.

\n\n
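With tf.scatter_nd the delta can be built directly, e.g. delta = tf.scatter_nd([[1, 1]], [1.0], [3, 3]) and then result = c + delta. As a framework-free sketch of the same dense-delta idea (a NumPy stand-in for illustration, not the TensorFlow op itself):

```python
import numpy as np

def scatter_nd(indices, updates, shape):
    # Minimal stand-in for tf.scatter_nd: a dense tensor of zeros with
    # `updates` written at the given coordinates.
    out = np.zeros(shape, dtype=np.float32)
    for idx, val in zip(indices, updates):
        out[tuple(idx)] = val
    return out

c = np.zeros((3, 3), dtype=np.float32)
result = c + scatter_nd([[1, 1]], [1.0], (3, 3))  # add 1.0 at location [1, 1]
```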
\n\n

This is actually surprisingly tricky with the existing ops! Perhaps somebody can suggest a nicer way to wrap up the following, but here's one way to do it.

\n\n

Let's say you have a tf.constant() tensor:

\n\n
c = tf.constant([[0.0, 0.0, 0.0],\n                 [0.0, 0.0, 0.0],\n                 [0.0, 0.0, 0.0]])\n
\n\n

...and you want to add 1.0 at location [1, 1]. One way you could do this is to define a tf.SparseTensor, delta, representing the change:

\n\n
indices = [[1, 1]]  # A list of coordinates to update.\n\nvalues = [1.0]  # A list of values corresponding to the respective\n                # coordinate in indices.\n\nshape = [3, 3]  # The shape of the corresponding dense tensor, same as `c`.\n\ndelta = tf.SparseTensor(indices, values, shape)\n
\n\n

Then you can use the tf.sparse_tensor_to_dense() op to make a dense tensor from delta and add it to c:

\n\n
result = c + tf.sparse_tensor_to_dense(delta)\n\nsess = tf.Session()\nsess.run(result)\n# ==> array([[ 0.,  0.,  0.],\n#            [ 0.,  1.,  0.],\n#            [ 0.,  0.,  0.]], dtype=float32)\n
\n", + "system": "" + }, + { + "instruction": "What do I need K.clear_session() and del model for (Keras with Tensorflow-gpu)?", + "input": "", + "output": "

K.clear_session() is useful when you're creating multiple models in succession, such as during hyperparameter search or cross-validation. Each model you train adds nodes (potentially numbering in the thousands) to the graph. TensorFlow executes the entire graph whenever you (or Keras) call tf.Session.run() or tf.Tensor.eval(), so your models will become slower and slower to train, and you may also run out of memory. Clearing the session removes all the nodes left over from previous models, freeing memory and preventing slowdown.

\n\n
\n\n

Edit 21/06/19:

\n\n

TensorFlow is lazy-evaluated by default. TensorFlow operations aren't evaluated immediately: creating a tensor or doing some operations to it creates nodes in a dataflow graph. The results are calculated by evaluating the relevant parts of the graph in one go when you call tf.Session.run() or tf.Tensor.eval(). This is so TensorFlow can build an execution plan that allocates operations that can be performed in parallel to different devices. It can also fold adjacent nodes together or remove redundant ones (e.g. if you concatenated two tensors and later split them apart again unchanged). For more details, see https://www.tensorflow.org/guide/graphs

\n\n

All of your TensorFlow models are stored in the graph as a series of tensors and tensor operations. The basic operation of machine learning is tensor dot product - the output of a neural network is the dot product of the input matrix and the network weights. If you have a single-layer perceptron and 1,000 training samples, then each epoch creates at least 1,000 tensor operations. If you have 1,000 epochs, then your graph contains at least 1,000,000 nodes at the end, before taking into account preprocessing, postprocessing, and more complex models such as recurrent nets, encoder-decoder, attentional models, etc.

\n\n

The problem is that eventually the graph would be too large to fit into video memory (6 GB in my case), so TF would shuttle parts of the graph from video to main memory and back. Eventually it would even get too large for main memory (12 GB) and start moving between main memory and the hard disk. Needless to say, this made things incredibly, and increasingly, slow as training went on. Before developing this save-model/clear-session/reload-model flow, I calculated that, at the per-epoch rate of slowdown I experienced, my model would have taken longer than the age of the universe to finish training.

\n\n
\n

Disclaimer: I haven't used TensorFlow in almost a year, so this might have changed. I remember there being quite a few GitHub issues around this so hopefully it has since been fixed.

\n
\n", + "system": "" + }, + { + "instruction": "What are possible values for data_augmentation_options in the TensorFlow Object Detection pipeline configuration?", + "input": "", + "output": "

The list of options is provided in preprocessor.proto:

\n\n
NormalizeImage normalize_image = 1;\nRandomHorizontalFlip random_horizontal_flip = 2;\nRandomPixelValueScale random_pixel_value_scale = 3;\nRandomImageScale random_image_scale = 4;\nRandomRGBtoGray random_rgb_to_gray = 5;\nRandomAdjustBrightness random_adjust_brightness = 6;\nRandomAdjustContrast random_adjust_contrast = 7;\nRandomAdjustHue random_adjust_hue = 8;\nRandomAdjustSaturation random_adjust_saturation = 9;\nRandomDistortColor random_distort_color = 10;\nRandomJitterBoxes random_jitter_boxes = 11;\nRandomCropImage random_crop_image = 12;\nRandomPadImage random_pad_image = 13;\nRandomCropPadImage random_crop_pad_image = 14;\nRandomCropToAspectRatio random_crop_to_aspect_ratio = 15;\nRandomBlackPatches random_black_patches = 16;\nRandomResizeMethod random_resize_method = 17;\nScaleBoxesToPixelCoordinates scale_boxes_to_pixel_coordinates = 18;\nResizeImage resize_image = 19;\nSubtractChannelMean subtract_channel_mean = 20;\nSSDRandomCrop ssd_random_crop = 21;\nSSDRandomCropPad ssd_random_crop_pad = 22;\nSSDRandomCropFixedAspectRatio ssd_random_crop_fixed_aspect_ratio = 23;\n
\n\n

You can see the details about each option in preprocessor.py. Arguments can be provided as key-value pairs.

\n\n
  data_augmentation_options {\n    ssd_random_crop {\n    }\n  }\n  data_augmentation_options {\n    random_pixel_value_scale {\n      minval: 0.6\n    }\n  }\n
\n", + "system": "" + }, + { + "instruction": "How to set layer-wise learning rate in Tensorflow?", + "input": "", + "output": "

It can be achieved quite easily with 2 optimizers:

\n\n
var_list1 = [variables from first 5 layers]\nvar_list2 = [the rest of variables]\ntrain_op1 = GradientDescentOptimizer(0.00001).minimize(loss, var_list=var_list1)\ntrain_op2 = GradientDescentOptimizer(0.0001).minimize(loss, var_list=var_list2)\ntrain_op = tf.group(train_op1, train_op2)\n
\n\n

One disadvantage of this implementation is that it computes tf.gradients(.) twice inside the optimizers and thus it might not be optimal in terms of execution speed. This can be mitigated by explicitly calling tf.gradients(.), splitting the list into 2 and passing corresponding gradients to both optimizers.

\n\n

Related question: Holding variables constant during optimizer

\n\n

EDIT: Added more efficient but longer implementation:

\n\n
var_list1 = [variables from first 5 layers]\nvar_list2 = [the rest of variables]\nopt1 = tf.train.GradientDescentOptimizer(0.00001)\nopt2 = tf.train.GradientDescentOptimizer(0.0001)\ngrads = tf.gradients(loss, var_list1 + var_list2)\ngrads1 = grads[:len(var_list1)]\ngrads2 = grads[len(var_list1):]\ntran_op1 = opt1.apply_gradients(zip(grads1, var_list1))\ntrain_op2 = opt2.apply_gradients(zip(grads2, var_list2))\ntrain_op = tf.group(train_op1, train_op2)\n
\n\n

You can use tf.trainable_variables() to get all training variables and decide to select from them.\nThe difference is that in the first implementation tf.gradients(.) is called twice inside the optimizers. This may cause some redundant operations to be executed (e.g. gradients on the first layer can reuse some computations for the gradients of the following layers).

\n", + "system": "" + }, + { + "instruction": "Tensorflow installation error: not a supported wheel on this platform", + "input": "", + "output": "

I had the same problem too.

\n

I downloaded get-pip.py from https://bootstrap.pypa.io/get-pip.py and then ran python2.7 get-pip.py for installing pip2.7.

\n

And then ran the pip install command with python2.7 as follows.

\n

For Ubuntu/Linux:

\n
python2.7 -m pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n
\n

For Mac OS X:

\n
python2.7 -m pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n
\n

This should work just fine as it did for me :)

\n

I followed these instructions from here.

\n", + "system": "" + }, + { + "instruction": "Is Tensorflow compatible with a Windows workflow?", + "input": "", + "output": "

Updated 11/28/2016: Today we released the first release candidate of TensorFlow 0.12, which includes support for Windows. You can install the Python bindings using the following command in a Python shell:

\n\n
C:\\> pip install tensorflow\n
\n\n

...or, if you want GPU support:

\n\n
C:\\> pip install tensorflow-gpu\n
\n\n

You can also build TensorFlow yourself using Microsoft Visual C++ and NVCC (for the CUDA parts). The easiest way to build on Windows is currently to use the CMake build, and we will soon provide support for Bazel on Windows.

\n\n
\n\n

Previous answer: We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms.

\n\n

For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: http://tensorflow.org/get_started/os_setup.md#docker-based_installation

\n\n

It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is on the roadmap for Bazel 0.3. You can see the full Bazel roadmap here.

\n\n

In the meantime, you can follow issue 17 on the TensorFlow GitHub page.

\n", + "system": "" + }, + { + "instruction": "What's the difference between a Tensorflow Keras Model and Estimator?", + "input": "", + "output": "

As @jaromir pointed out - estimators are deprecated and unavailable from Tensorflow 2.16. Use the Keras APIs instead. From the documentation:

\n
\n

Warning: TensorFlow 2.15 included the final release of the tf-estimator package. Estimators will not be available in TensorFlow\n2.16 or after. See the migration guide for more information about how to convert off of Estimators.

\n
\n

Below is the original answer from 2018.

\n
\n

Background

\n

The Estimators API was added to Tensorflow in Release 1.1, and provides a high-level abstraction over lower-level Tensorflow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation of a complete model.

\n

\"\"

\n

Keras is similar to the Estimators API in that it abstracts deep learning model components such as layers, activation functions and optimizers, to make it easier for developers. It is a model-level library, and does not handle low-level operations, which is the job of tensor manipulation libraries, or backends. Keras supports three backends - Tensorflow, Theano and CNTK.

\n

Keras was not part of Tensorflow until Release 1.4.0 (2 Nov 2017). Now, when you use tf.keras (or talk about 'Tensorflow Keras'), you are simply using the Keras interface with the Tensorflow backend to build and train your model.

\n

\"\"

\n

So both the Estimator API and the Keras API provide a high-level API over the low-level core Tensorflow API, and you can use either to train your model. But in most cases, if you are working with Tensorflow, you'd want to use the Estimators API for the reasons listed below.

\n

Distribution

\n

You can conduct distributed training across multiple servers with the Estimators API, but not with Keras API.

\n

From the Tensorflow Keras Guide, it says that:

\n
\n

The Estimators API is used for training models for distributed environments.

\n
\n

And from the Tensorflow Estimators Guide, it says that:

\n
\n

You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.

\n
\n

Pre-made Estimator

\n

Whilst Keras provides abstractions that make building your models easier, you still have to write code to build your model. With Estimators, Tensorflow provides Pre-made Estimators, which are models that you can use straight away, simply by plugging in the hyperparameters.

\n

Pre-made Estimators are similar to how you'd work with scikit-learn. For example, the tf.estimator.LinearRegressor from Tensorflow is similar to the sklearn.linear_model.LinearRegression from scikit-learn.

\n

Integration with Other Tensorflow Tools

\n

Tensorflow provides a visualization tool called TensorBoard that helps you visualize your graph and statistics. By using an Estimator, you can easily save summaries to be visualized with TensorBoard.

\n

Converting Keras Model to Estimator

\n

To migrate a Keras model to an Estimator, use the tf.keras.estimator.model_to_estimator method.

\n", + "system": "" + }, + { + "instruction": "MemoryError in TensorFlow; and "successful NUMA node read from SysFS had negative value (-1)" with xen", + "input": "", + "output": "

Below is the code that prints the message \"successful NUMA node read from SysFS had negative value (-1)\"; it is not a fatal error, just a warning. The real error is the MemoryError in your File \"model_new.py\", line 85, in <module>. We need more sources to check this error. Try to make your model smaller or run it on a server with more RAM.

\n\n
\n\n

About NUMA node warning:

\n\n

https://github.com/tensorflow/tensorflow/blob/e4296aefff97e6edd3d7cee9a09b9dd77da4c034/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc#L855

\n\n
// Attempts to read the NUMA node corresponding to the GPU device's PCI bus out\n// of SysFS. Returns -1 if it cannot...\nstatic int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal) \n{...\n  string filename =\n      port::Printf(\"/sys/bus/pci/devices/%s/numa_node\", pci_bus_id.c_str());\n  FILE *file = fopen(filename.c_str(), \"r\");\n  if (file == nullptr) {\n    LOG(ERROR) << \"could not open file to read NUMA node: \" << filename\n               << \"\\nYour kernel may have been built without NUMA support.\";\n    return kUnknownNumaNode;\n  } ...\n  if (port::safe_strto32(content, &value)) {\n    if (value < 0) {  // See http://b/18228951 for details on this path.\n      LOG(INFO) << \"successful NUMA node read from SysFS had negative value (\"\n                << value << \"), but there must be at least one NUMA node\"\n                            \", so returning NUMA node zero\";\n      fclose(file);\n      return 0;\n    }\n
\n\n

TensorFlow was able to open the /sys/bus/pci/devices/%s/numa_node file, where %s is the id of the GPU PCI card (string pci_bus_id = CUDADriver::GetPCIBusID(device_)). Your PC is not multisocket: there is only a single CPU socket with an 8-core Xeon E5-2670 installed, so this id should be '0' (a single NUMA node is numbered 0 in Linux), but the error message says that there was a -1 value in this file!

\n\n

So, we know that sysfs is mounted into /sys, there is numa_node special file, CONFIG_NUMA is enabled in your Linux Kernel config (zgrep NUMA /boot/config* /proc/config*). Actually it is enabled: CONFIG_NUMA=y - in the deb of your x86_64 4.4.0-78-generic kernel

\n\n

The special file numa_node is documented in https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci (is the ACPI of your PC wrong?)

\n\n
What:       /sys/bus/pci/devices/.../numa_node\nDate:       Oct 2014\nContact:    Prarit Bhargava <prarit@redhat.com>\nDescription:\n        This file contains the NUMA node to which the PCI device is\n        attached, or -1 if the node is unknown.  The initial value\n        comes from an ACPI _PXM method or a similar firmware\n        source.  If that is missing or incorrect, this file can be\n        written to override the node.  In that case, please report\n        a firmware bug to the system vendor.  Writing to this file\n        taints the kernel with TAINT_FIRMWARE_WORKAROUND, which\n        reduces the supportability of your system.\n
\n\n

There is a quick (kludge) workaround for this error: find the numa_node of your GPU and, with the root account, run this command after every boot, where NNNNN is the PCI id of your card (search in the lspci output and in the /sys/bus/pci/devices/ directory):

\n\n
echo 0 | sudo tee -a /sys/bus/pci/devices/NNNNN/numa_node\n
\n\n

Or just echo it into every such file, it should be rather safe:

\n\n
for a in /sys/bus/pci/devices/*; do echo 0 | sudo tee -a $a/numa_node; done\n
\n\n

Also, your lshw output shows that it is not a physical PC, but a Xen virtual guest. There is something wrong between the Xen platform (ACPI) emulation and the Linux PCI bus NUMA-support code.

\n", + "system": "" + }, + { + "instruction": "What is num_units in tensorflow BasicLSTMCell?", + "input": "", + "output": "

From this brilliant article

\n
\n

num_units can be interpreted as the analogy of hidden layer from the feed forward neural network. The number of nodes in hidden layer of a feed forward neural network is equivalent to num_units number of LSTM units in a LSTM cell at every time step of the network.

\n
\n
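One way to make num_units concrete is the parameter count of a standard LSTM cell: each of the 4 gates has a kernel over the concatenated input and hidden state plus a bias. A small sketch (the 128/64 dimensions are made-up example values):

```python
def lstm_param_count(num_units, input_dim):
    # 4 gates (input, forget, cell, output), each with a
    # [input_dim + num_units, num_units] kernel and a num_units bias
    return 4 * (num_units * (input_dim + num_units) + num_units)

# e.g. an LSTM cell with 128 units fed 64-dimensional inputs
n = lstm_param_count(128, 64)
```

So num_units sets both the size of the cell's hidden state and, together with the input dimension, its number of trainable weights.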

See the image there too!

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "How to fix "AttributeError: module 'tensorflow' has no attribute 'get_default_graph'"?", + "input": "", + "output": "

Please try:

\n\n

from tensorflow.keras.models import Sequential

\n\n

instead of

\n\n

from keras.models import Sequential

\n", + "system": "" + }, + { + "instruction": "Tensorflow r1.0 : could not a find a version that satisfies the requirement tensorflow", + "input": "", + "output": "

I had the same problem.

\n\n

The command below solved my problem:

\n\n
pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py3-none-any.whl\n
\n\n

To find the list of all the URLs based on the Python version and CPU-only or GPU support, refer to:\nhttps://www.tensorflow.org/install/pip

\n", + "system": "" + }, + { + "instruction": "How does one debug NaN values in TensorFlow?", + "input": "", + "output": "

There are a couple of reasons WHY you can get a NaN result. Often it is because of too high a learning rate, but plenty of other reasons are possible, for example corrupt data in your input queue or a log(0) calculation.

\n\n
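For instance, the log-of-0 case is easy to reproduce with NumPy (a toy illustration of where the Inf/NaN can come from; a sqrt of a negative value misbehaves the same way):

```python
import numpy as np

# Suppress the runtime warnings so we can inspect the values directly.
with np.errstate(divide='ignore', invalid='ignore'):
    x = np.log(np.array([1.0, 0.0, -1.0]))

# log(1) -> 0.0, log(0) -> -inf, log(-1) -> nan;
# once such a value enters the loss, it propagates through the gradients.
```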

Anyhow, debugging with a print as you describe cannot be done with a simple print (as this would only print the tensor information inside the graph, not any actual values).

\n\n

However, if you use tf.Print as an op in building the graph, then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).

\n\n

However, you are not using the print statement entirely correctly. It is an op, so you need to pass it a tensor and use the resulting tensor later on in the executing graph. Otherwise the op is not going to be executed and no printing occurs. Try this:

\n\n
Z = tf.sqrt(Delta_tilde)\nZ = tf.Print(Z,[Z], message=\"my Z-values:\") # <-------- TF PRINT STATMENT\nZ = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)\nZ = tf.pow(Z, 2.0)\n
\n", + "system": "" + }, + { + "instruction": "TensorBoard - Plot training and validation losses on the same graph?", + "input": "", + "output": "

The workaround I have been using is two SummaryWriters with different log dirs, one for the training set and one for the cross-validation set. And you will see something like this:

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Loss function for class imbalanced binary classifier in Tensor flow", + "input": "", + "output": "

You can add class weights to the loss function by multiplying logits.\nRegular cross-entropy loss is this:

\n\n
loss(x, class) = -log(exp(x[class]) / (\\sum_j exp(x[j])))\n               = -x[class] + log(\\sum_j exp(x[j]))\n
\n\n

in weighted case:

\n\n
loss(x, class) = weights[class] * -x[class] + log(\\sum_j exp(weights[class] * x[j]))\n
\n\n

So by multiplying logits, you are re-scaling predictions of each class by its class weight.

\n\n

For example:

\n\n
ratio = 31.0 / (500.0 + 31.0)\nclass_weight = tf.constant([ratio, 1.0 - ratio])\nlogits = ... # shape [batch_size, 2]\nweighted_logits = tf.mul(logits, class_weight) # shape [batch_size, 2]\nxent = tf.nn.softmax_cross_entropy_with_logits(\n  weighted_logits, labels, name=\"xent_raw\")\n
\n\n

There is a standard losses function now that supports weights per batch:

\n\n
tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)\n
\n\n
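Note that weights here is per example, not per class. A NumPy sketch of that conversion, a simple lookup of each label's class weight (the ratio value reuses the 31/500 example above; the label values are made up):

```python
import numpy as np

ratio = 31.0 / (500.0 + 31.0)
class_weight = np.array([ratio, 1.0 - ratio])  # one weight per class

labels = np.array([0, 1, 1, 0, 1])   # shape [batch_size]
weights = class_weight[labels]       # shape [batch_size]: each example's class weight
```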

Where weights should be transformed from class weights to a weight per example (with shape [batch_size]). See documentation here.

\n", + "system": "" + }, + { + "instruction": "Tensorflow._api.v2.train has no attribute 'AdamOptimizer'", + "input": "", + "output": "
tf.train.AdamOptimizer() => tf.optimizers.Adam()\n
\n\n

From https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers

\n", + "system": "" + }, + { + "instruction": "Tensorflow Compile Runs For A Long Time", + "input": "", + "output": "

Unfortunately, some programs can take a long time to compile. A couple of hours of compilation is not strange for tensorflow on your setup.

\n

There are reports of it taking 50 minutes on a considerably faster machine

\n

A solution to this problem is to use pre-compiled binaries that are available with pip, instructions can be found here: https://www.tensorflow.org/install/pip.html

\n

Basically you can do this:

\n
pip install tensorflow\n
\n

If you require a specific older version, like 1.15, you can do this:

\n
pip install tensorflow==1.15\n
\n

For gpu support you add [and-cuda] to the package name, like this:

\n
pip install tensorflow[and-cuda]\n
\n

And:

\n
pip install tensorflow[and-cuda]==1.15\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between a sigmoid followed by the cross entropy and sigmoid_cross_entropy_with_logits in TensorFlow?", + "input": "", + "output": "

You're confusing the cross-entropy for binary and multi-class problems.

\n\n

Multi-class cross-entropy

\n\n

The formula that you use is correct and it directly corresponds to tf.nn.softmax_cross_entropy_with_logits:

\n\n\n\n
-tf.reduce_sum(p * tf.log(q), axis=1)\n
\n\n

p and q are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:

\n\n
p = tf.placeholder(tf.float32, shape=[None, 2])\nlogit_q = tf.placeholder(tf.float32, shape=[None, 2])\nq = tf.nn.softmax(logit_q)\n\nfeed_dict = {\n  p: [[0, 1],\n      [1, 0],\n      [1, 0]],\n  logit_q: [[0.2, 0.8],\n            [0.7, 0.3],\n            [0.5, 0.5]]\n}\n\nprob1 = -tf.reduce_sum(p * tf.log(q), axis=1)\nprob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)\nprint(prob1.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]\nprint(prob2.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]\n
\n\n

Note that q is computed with tf.nn.softmax, i.e. it outputs a probability distribution. So it's still the multi-class cross-entropy formula, only for N = 2.

\n\n

Binary cross-entropy

\n\n

This time the correct formula is

\n\n
p * -tf.log(q) + (1 - p) * -tf.log(1 - q)\n
\n\n

Though mathematically it's a special case of the multi-class case, the meaning of p and q is different. In the simplest case, each p and q is a number, corresponding to a probability of the class A.

\n\n

Important: Don't get confused by the common p * -tf.log(q) part and the sum. Previously p was a one-hot vector, now it's a number, zero or one. Same for q - it was a probability distribution, now it's a number (a probability).

\n\n

If p is a vector, each individual component is considered an independent binary classification. See this answer that outlines the difference between softmax and sigmoid functions in tensorflow. So the definition p = [0, 0, 0, 1, 0] doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 is on. The definition q = [0.2, 0.2, 0.2, 0.2, 0.2] means that each of 5 features is on with 20% probability.

\n\n

This explains the use of sigmoid function before the cross-entropy: its goal is to squash the logit to [0, 1] interval.

\n\n

The formula above still holds for multiple independent features, and that's exactly what tf.nn.sigmoid_cross_entropy_with_logits computes:

\n\n
p = tf.placeholder(tf.float32, shape=[None, 5])\nlogit_q = tf.placeholder(tf.float32, shape=[None, 5])\nq = tf.nn.sigmoid(logit_q)\n\nfeed_dict = {\n  p: [[0, 0, 0, 1, 0],\n      [1, 0, 0, 0, 0]],\n  logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2],\n            [0.3, 0.3, 0.2, 0.1, 0.1]]\n}\n\nprob1 = -p * tf.log(q)\nprob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q)\nprob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1-p) * -tf.log(1-tf.sigmoid(logit_q))\nprob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)\nprint(prob1.eval(feed_dict))\nprint(prob2.eval(feed_dict))\nprint(prob3.eval(feed_dict))\nprint(prob4.eval(feed_dict))\n
\n\n

You should see that the last three tensors are equal, while the prob1 is only a part of cross-entropy, so it contains correct value only when p is 1:

\n\n
[[ 0.          0.          0.          0.59813893  0.        ]\n [ 0.55435514  0.          0.          0.          0.        ]]\n[[ 0.79813886  0.79813886  0.79813886  0.59813887  0.79813886]\n [ 0.5543552   0.85435522  0.79813886  0.74439669  0.74439669]]\n[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]\n [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]\n[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]\n [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]\n
\n\n

Now it should be clear that taking a sum of -p * tf.log(q) along axis=1 doesn't make sense in this setting, though it'd be a valid formula in the multi-class case.

\n", + "system": "" + }, + { + "instruction": "Reset weights in Keras layer", + "input": "", + "output": "

Save the initial weights right after compiling the model but before training it:

\n\n
model.save_weights('model.h5')\n
\n\n

and then after training, \"reset\" the model by reloading the initial weights:

\n\n
model.load_weights('model.h5')\n
\n\n

This gives you an apples to apples model to compare different data sets and should be quicker than recompiling the entire model.

\n", + "system": "" + }, + { + "instruction": "How to make a custom activation function with only Python in Tensorflow?", + "input": "", + "output": "

Yes, there is!

\n

Credit:\nIt was hard to find the information and get it working, but here is an example based on the principles and code found here and here.

\n

Requirements:\nBefore we start, there are two requirements for this to succeed. First, you need to be able to write your activation as a function on numpy arrays. Second, you have to be able to write the derivative of that function, either as a function in TensorFlow (easier) or, in the worst-case scenario, as a function on numpy arrays.

\n

Writing Activation function:

\n

So let's take for example this function which we would want to use as an activation function:

\n
def spiky(x):\n    r = x % 1\n    if r <= 0.5:\n        return r\n    else:\n        return 0\n
\n

It looks as follows:\n\"Spiky

\n

The first step is making it into a numpy function; this is easy:

\n
import numpy as np\nnp_spiky = np.vectorize(spiky)\n
\n

Now we should write its derivative.

\n

Gradient of Activation:\nIn our case it is easy: it is 1 if x mod 1 <= 0.5 and 0 otherwise. So:

\n
def d_spiky(x):\n    r = x % 1\n    if r <= 0.5:\n        return 1\n    else:\n        return 0\nnp_d_spiky = np.vectorize(d_spiky)\n
\n

Now for the hard part of making a TensorFlow function out of it.

\n

Turning a numpy function into a tensorflow function:\nWe will start by making np_d_spiky into a tensorflow function. There is a function in tensorflow, tf.py_func(func, inp, Tout, stateful=stateful, name=name) [doc], which transforms any numpy function into a tensorflow function, so we can use it:

\n
import tensorflow as tf\nfrom tensorflow.python.framework import ops\n\nnp_d_spiky_32 = lambda x: np_d_spiky(x).astype(np.float32)\n\n\ndef tf_d_spiky(x,name=None):\n    with tf.name_scope(name, "d_spiky", [x]) as name:\n        y = tf.py_func(np_d_spiky_32,\n                        [x],\n                        [tf.float32],\n                        name=name,\n                        stateful=False)\n        return y[0]\n
\n

tf.py_func acts on lists of tensors (and returns a list of tensors), that is why we have [x] (and return y[0]). The stateful option is to tell tensorflow whether the function always gives the same output for the same input (stateful = False), in which case tensorflow can simplify the graph; this is our case and will probably be the case in most situations. One thing to be careful of at this point is that numpy uses float64 by default but tensorflow uses float32, so you need to convert your function to use float32 before you can convert it to a tensorflow function, otherwise tensorflow will complain. This is why we need to make np_d_spiky_32 first.

\n

What about the Gradients? The problem with only doing the above is that even though we now have tf_d_spiky which is the tensorflow version of np_d_spiky, we couldn't use it as an activation function if we wanted to because tensorflow doesn't know how to calculate the gradients of that function.

\n

Hack to get Gradients: As explained in the sources mentioned above, there is a hack to define gradients of a function using tf.RegisterGradient [doc] and tf.Graph.gradient_override_map [doc]. Copying the code from harpone we can modify the tf.py_func function to make it define the gradient at the same time:

\n
def py_func(func, inp, Tout, stateful=True, name=None, grad=None):\n    \n    # Need to generate a unique name to avoid duplicates:\n    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))\n    \n    tf.RegisterGradient(rnd_name)(grad)  # see _MySquareGrad for grad example\n    g = tf.get_default_graph()\n    with g.gradient_override_map({"PyFunc": rnd_name}):\n        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)\n
\n

Now we are almost done; the only thing left is that the grad function we need to pass to the above py_func function needs to take a special form. It needs to take in an operation and the gradients flowing in from after the operation, and propagate the gradients backward through the operation.

\n

Gradient Function: So for our spiky activation function that is how we would do it:

\n
def spikygrad(op, grad):\n    x = op.inputs[0]\n\n    n_gr = tf_d_spiky(x)\n    return grad * n_gr  \n
\n

The activation function has only one input, that is why x = op.inputs[0]. If the operation had many inputs, we would need to return a tuple, one gradient for each input. For example, if the operation was a - b, the gradient with respect to a is +1 and with respect to b is -1, so we would have return +1*grad, -1*grad. Notice that we need to return tensorflow functions of the input; that is why we need tf_d_spiky, np_d_spiky would not have worked because it cannot act on tensorflow tensors. Alternatively, we could have written the derivative using tensorflow functions:

\n
def spikygrad2(op, grad):\n    x = op.inputs[0]\n    r = tf.mod(x,1)\n    n_gr = tf.to_float(tf.less_equal(r, 0.5))\n    return grad * n_gr  \n
\n

Combining it all together: Now that we have all the pieces, we can combine them all together:

\n
np_spiky_32 = lambda x: np_spiky(x).astype(np.float32)\n\ndef tf_spiky(x, name=None):\n    \n    with tf.name_scope(name, "spiky", [x]) as name:\n        y = py_func(np_spiky_32,\n                        [x],\n                        [tf.float32],\n                        name=name,\n                        grad=spikygrad)  # <-- here's the call to the gradient\n        return y[0]\n
\n

And now we are done. And we can test it.

\n

Test:

\n
with tf.Session() as sess:\n\n    x = tf.constant([0.2,0.7,1.2,1.7])\n    y = tf_spiky(x)\n    tf.initialize_all_variables().run()\n    \n    print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval())\n
\n
\n

[ 0.2 0.69999999 1.20000005 1.70000005] [ 0.2 0. 0.20000005 0.] [ 1. 0. 1. 0.]

\n
\n

Success!

\n", + "system": "" + }, + { + "instruction": "Cuda 12 + tf-nightly 2.12: Could not find cuda drivers on your machine, GPU will not be used, while every checking is fine and in torch it works", + "input": "", + "output": "

I think that, as of March 2023, the only tensorflow distribution for cuda 12 is the docker package from NVIDIA.

\n

A tf package for cuda 12 should show the following info

\n
>>> tf.sysconfig.get_build_info() \nOrderedDict([('cpu_compiler', '/usr/bin/x86_64-linux-gnu-gcc-11'), \n('cuda_compute_capabilities', ['compute_86']), \n('cuda_version', '12.0'), ('cudnn_version', '8'), \n('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)])\n
\n

But if we run tf.sysconfig.get_build_info() on any tensorflow package installed via pip, it still reports that cuda_version is 11.x

\n

So your alternatives are:

\n\n", + "system": "" + }, + { + "instruction": "Keras: change learning rate", + "input": "", + "output": "

You can change the learning rate as follows:

\n\n
from keras import backend as K\nK.set_value(model.optimizer.learning_rate, 0.001)\n
\n\n

Included into your complete example it looks as follows:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras import backend as K\nimport keras\nimport numpy as np\n\nmodel = Sequential()\n\nmodel.add(Dense(1, input_shape=(10,)))\n\noptimizer = keras.optimizers.Adam(lr=0.01)\nmodel.compile(loss='mse', optimizer=optimizer)\n\nprint(\"Learning rate before first fit:\", model.optimizer.learning_rate.numpy())\n\nmodel.fit(np.random.randn(50,10), np.random.randn(50), epochs=50, verbose=0)\n\n# Change learning rate to 0.001 and train for 50 more epochs\nK.set_value(model.optimizer.learning_rate, 0.001)\nprint(\"Learning rate before second fit:\", model.optimizer.learning_rate.numpy())\n\nmodel.fit(np.random.randn(50,10), \n          np.random.randn(50), \n          initial_epoch=50, \n          epochs=50,\n          verbose=0)\n
\n\n

I've just tested this with keras 2.3.1. Not sure why the approach didn't seem to work for you.

\n", + "system": "" + }, + { + "instruction": "How to extract data/labels back from TensorFlow dataset", + "input": "", + "output": "

In case your tf.data.Dataset is batched, the following code will retrieve all the y labels:

\n
y = np.concatenate([y for x, y in ds], axis=0)\n
\n

Quick explanation: [y for x, y in ds] is known as a \u201clist comprehension\u201d in python. If the dataset is batched, this expression will loop through each batch and put each batch's y (a 1-D TF tensor) in the list, and return it. Then, np.concatenate will take this list of 1-D tensors (implicitly casting them to numpy) and stack them along axis 0 to produce a single long vector. In summary, it just converts a bunch of small 1-D vectors into one long vector.

\n

Note: if your y is more complex, this answer will need some minor modification.

\n", + "system": "" + }, + { + "instruction": "Tensorflow estimator ValueError: logits and labels must have the same shape ((?, 1) vs (?,))", + "input": "", + "output": "

You should reshape your labels as a 2-D tensor (the first dimension will be the batch dimension and the second the scalar label):

\n\n
# Our vectorized labels\ny_train = np.asarray(train_labels).astype('float32').reshape((-1,1))\ny_test = np.asarray(test_labels).astype('float32').reshape((-1,1))\n
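The effect of the reshape can be seen with made-up labels (hypothetical values, just to show the shape change from (N,) to (N, 1)):

```python
import numpy as np

train_labels = [0, 1, 1, 0, 1]  # hypothetical example labels

raw = np.asarray(train_labels).astype('float32')
print(raw.shape)  # → (5,)

y_train = raw.reshape((-1, 1))
print(y_train.shape)  # → (5, 1)
```

After the reshape, the labels' shape matches the (?, 1) shape of the logits.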
\n", + "system": "" + }, + { + "instruction": "How to Properly Combine TensorFlow's Dataset API and Keras?", + "input": "", + "output": "

Update June 09, 2018

\n\n\n\n\n\n
# Load mnist training data\n(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()\ntraining_set = tfdata_generator(x_train, y_train,is_training=True)\n\nmodel = # your keras model here              \nmodel.fit(\n    training_set.make_one_shot_iterator(),\n    steps_per_epoch=len(x_train) // 128,\n    epochs=5,\n    verbose = 1)\n
\n\n\n\n\n\n
def tfdata_generator(images, labels, is_training, batch_size=128):\n  '''Construct a data generator using `tf.Dataset`. '''\n\n  def map_fn(image, label):\n    '''Preprocess raw data to trainable input. '''\n    x = tf.reshape(tf.cast(image, tf.float32), (28, 28, 1))\n    y = tf.one_hot(tf.cast(label, tf.uint8), _NUM_CLASSES)\n    return x, y\n\n  dataset = tf.data.Dataset.from_tensor_slices((images, labels))\n\n  if is_training:\n    dataset = dataset.shuffle(1000)  # depends on sample size\n  dataset = dataset.map(map_fn)\n  dataset = dataset.batch(batch_size)\n  dataset = dataset.repeat()\n  dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)\n\n  return dataset\n</code></pre>
\n\n

Old Solution:

\n\n

In addition to @Yu-Yang's answer, you can also modify tf.data.Dataset to become a generator for fit_generator as follows:

\n\n
from tensorflow.contrib.learn.python.learn.datasets import mnist\n\ndata   = mnist.load_mnist()\nmodel  = # your Keras model\nmodel.fit_generator(generator = tfdata_generator(data.train.images, data.train.labels),\n                    steps_per_epoch=200,\n                    workers = 0 , # This is important\n                    verbose = 1)\n\n\ndef tfdata_generator(images, labels, batch_size=128, shuffle=True,):\n    def map_func(image, label):\n        '''A transformation function'''\n        x_train = tf.reshape(tf.cast(image, tf.float32), image_shape)\n        y_train = tf.one_hot(tf.cast(label, tf.uint8), num_classes)\n        return [x_train, y_train]\n\n    dataset  = tf.data.Dataset.from_tensor_slices((images, labels))\n    dataset  = dataset.map(map_func)\n    dataset  = dataset.shuffle(buffer_size=1000).batch(batch_size).repeat()  # shuffle requires a buffer_size\n    iterator = dataset.make_one_shot_iterator()\n\n    next_batch = iterator.get_next()\n    while True:\n        yield K.get_session().run(next_batch)\n</code></pre>
\n", + "system": "" + }, + { + "instruction": "Understanding the ResourceExhaustedError: OOM when allocating tensor with shape", + "input": "", + "output": "

Let's divide the issues one by one:

\n\n

Regarding TensorFlow allocating all the memory in advance, you can use the following code snippet to let TensorFlow allocate memory only when it is needed, so that you can see how things are going:

\n\n
gpu_options = tf.GPUOptions(allow_growth=True)\nsession = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))\n
\n\n

This works equally with tf.Session() instead of tf.InteractiveSession() if you prefer.

\n\n

Second, about the sizes:\nAs there is no information about your network size, we cannot estimate what is going wrong. However, you can debug the whole network step by step. For example, create a network with only one layer, get its output, create a session, feed values once and visualize how much memory you consume. Iterate this debugging session until you see the point where you run out of memory.

\n\n

Please be aware that 3840 x 155229 is a really, REALLY big output. It means ~596M values, i.e. ~2.22 GiB for this single layer's output alone (at 4 bytes per float32). If you have any similar size layers, all of them will add up to fill your GPU memory pretty fast.

\n\n

Also, this is only for the forward direction; if you are using this layer for training, the backpropagation and the slots added by the optimizer will multiply this size by 2. So, for training you consume ~5 GB just for the output layer.

\n\n

I suggest you revise your network and try to reduce the batch size / parameter count to fit your model into GPU memory.

\n", + "system": "" + }, + { + "instruction": "What is the purpose of tf.global_variables_initializer?", + "input": "", + "output": "

A more complete description is given here.

\n\n

Only after running tf.global_variables_initializer() in a session will your variables hold the values you told them to hold when you declared them (tf.Variable(tf.zeros(...)), tf.Variable(tf.random_normal(...)), ...).

\n\n

From the TF doc :

\n\n
\n

Calling tf.Variable() adds several ops to the graph:

\n \n \n
\n\n

And also:

\n\n
\n

Variable initializers must be run explicitly before other ops in your\n model can be run. The easiest way to do that is to add an op that runs\n all the variable initializers, and run that op before using the model.

\n
\n", + "system": "" + }, + { + "instruction": ""freeze" some variables/scopes in tensorflow: stop_gradient vs passing variables to minimize", + "input": "", + "output": "

The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to opt.minimize(cost, ...). By default, the optimizer will use all of the variables in tf.trainable_variables(). If you want to filter the variables to a particular scope, you can use the optional scope argument to tf.get_collection() as follows:

\n\n
optimizer = tf.train.AdagradOptimizer(0.01)\n\nfirst_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n                                     \"scope/prefix/for/first/vars\")\nfirst_train_op = optimizer.minimize(cost, var_list=first_train_vars)\n\nsecond_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n                                      \"scope/prefix/for/second/vars\")                     \nsecond_train_op = optimizer.minimize(cost, var_list=second_train_vars)\n</code></pre>
\n", + "system": "" + }, + { + "instruction": "Tensorflow python : Accessing individual elements in a tensor", + "input": "", + "output": "

There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.

\n\n
    \n
  1. Use the indexing operator (based on tf.slice()) to extract a contiguous slice from the tensor.

    \n\n
    input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n\noutput = input[0, :]\nprint sess.run(output)  # ==> [1 2 3]\n
    \n\n

    The indexing operator supports many of the same slice specifications as NumPy does.

  2. \n
  3. Use the tf.gather() op to select a non-contiguous slice from the tensor.

    \n\n
    input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n\noutput = tf.gather(input, 0)\nprint sess.run(output)  # ==> [1 2 3]\n\noutput = tf.gather(input, [0, 2])\nprint sess.run(output)  # ==> [[1 2 3] [7 8 9]]\n
    \n\n

    Note that tf.gather() only allows you to select whole slices in the 0th dimension (whole rows in the example of a matrix), so you may need to tf.reshape() or tf.transpose() your input to obtain the appropriate elements.

  4. \n
\n", + "system": "" + }, + { + "instruction": "Difference between Keras model.save() and model.save_weights()?", + "input": "", + "output": "

save() saves the weights and the model structure to a single HDF5 file. I believe it also includes things like the optimizer state. Then you can use that HDF5 file with load_model() to reconstruct the whole model, including weights.

\n\n

save_weights() only saves the weights to HDF5 and nothing else. You need extra code to reconstruct the model architecture (e.g. from a saved JSON description) before loading the weights.

\n", + "system": "" + }, + { + "instruction": "NotImplementedError: Layers with arguments in `__init__` must override `get_config`", + "input": "", + "output": "

It's not a bug, it's a feature.

\n\n

This error lets you know that TF can't save your model, because it won't be able to load it.
\nSpecifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder.

\n\n

To solve this, just override their get_config method according to the new arguments you've added.

\n\n
\n

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

\n
\n\n
\n\n

For example, if your encoder class looks something like this:

\n\n
class encoder(tf.keras.layers.Layer):\n\n    def __init__(\n        self,\n        vocab_size, num_layers, units, d_model, num_heads, dropout,\n        **kwargs,\n    ):\n        super().__init__(**kwargs)\n        self.vocab_size = vocab_size\n        self.num_layers = num_layers\n        self.units = units\n        self.d_model = d_model\n        self.num_heads = num_heads\n        self.dropout = dropout\n\n    # Other methods etc.\n
\n\n

then you only need to override this method:

\n\n
    def get_config(self):\n\n        config = super().get_config().copy()\n        config.update({\n            'vocab_size': self.vocab_size,\n            'num_layers': self.num_layers,\n            'units': self.units,\n            'd_model': self.d_model,\n            'num_heads': self.num_heads,\n            'dropout': self.dropout,\n        })\n        return config\n
\n\n

When TF sees this (for both classes), you will be able to save the model.

\n\n

Because now when the model is loaded, TF will be able to reinstantiate the same layer from config.

\n\n
\n\n

Layer.from_config's source code may give a better sense of how it works:

\n\n
@classmethod\ndef from_config(cls, config):\n  return cls(**config)\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'tensorflow' has no attribute 'app'", + "input": "", + "output": "

try using import tensorflow.compat.v1 as tf

\n", + "system": "" + }, + { + "instruction": "UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually", + "input": "", + "output": "

Intro

\n\n

I'd like to add to olejorgenb's answer - for a specific scenario, where you don't want to train the model, just use it (e.g. in production).

\n\n

\"Compile\" means \"prepare for training\", which includes mainly setting up the optimizer. It could also have been saved before, and then you can continue the \"same\" training after loading the saved model.

\n\n

The fix

\n\n

But, what about the scenario - I want to just run the model? Well, use the compile=False argument to load_model like that:

\n\n
trained_model = load_model('model.h5', compile=False)\n
\n\n

You won't be able to .fit() this model without using trained_model.compile(...) first, but most importantly - the warning will go away.

\n\n

Misc Notes

\n\n

Btw, in my Keras version, the argument include_optimizer has a default of True. This should also work for training callbacks like Checkpoint. This means, when loading a model saved by Keras, you can usually count on the optimizer being included (except for the situation: see Hull Gasper's answer).

\n\n

But, when you have a model which was not trained by Keras (e.g. when converting a model trained by Darknet), the model is saved un-compiled. This produces the warning, and you can get rid of it in the way described above.

\n", + "system": "" + }, + { + "instruction": "ImportError: Could not import the Python Imaging Library (PIL) required to load image files on tensorflow", + "input": "", + "output": "

pip install pillow

\n\n

Then replace \nfrom IPython.display import display, Image\nwith\nfrom IPython.display import display\nfrom PIL import Image

\n", + "system": "" + }, + { + "instruction": "gensim Doc2Vec vs tensorflow Doc2Vec", + "input": "", + "output": "

Old question, but an answer would be useful for future visitors. So here are some of my thoughts.

\n\n

There are some problems in the tensorflow implementation:

\n\n\n\n

So for the OP's listed questions:

\n\n
    \n
  1. This implementation of doc2vec in tensorflow is working and correct in its own way, but it is different from both the gensim implementation and the paper.
  2. \n
  3. window is the one-sided size, as said above. If the document size is less than the context size, then the smaller one would be used.
  4. \n
  5. There are many reasons why the gensim implementation is faster. First, gensim is heavily optimized; all operations are faster than naive Python operations, especially data I/O. Second, some preprocessing steps such as min_count filtering in gensim reduce the dataset size. Most importantly, gensim uses negative_sampling_loss, which is much faster than sampled_softmax_loss; I guess this is the main reason.
  6. \n
  7. Is it easier to find somethings when there are many of them? Just kidding ;-)
    \nIt's true that there are many solutions in this non-convex optimization problem, so the model would just find a local optimum. Interestingly, in neural network, most local optima are \"good enough\". It has been observed that stochastic gradient descent seems to find better local optima than larger batch gradient descent, although this is still a riddle in current research.
  8. \n
\n", + "system": "" + }, + { + "instruction": "How can I solve 'ran out of gpu memory' in TensorFlow", + "input": "", + "output": "

I was encountering out of memory errors when training a small CNN on a GTX 970. Through somewhat of a fluke, I discovered that telling TensorFlow to allocate memory on the GPU as needed (instead of up front) resolved all my issues. This can be accomplished using the following Python code:

\n\n
    config = tf.ConfigProto()\n    config.gpu_options.allow_growth = True\n    sess = tf.Session(config=config)\n
\n\n

Previously, TensorFlow would pre-allocate ~90% of GPU memory. For some unknown reason, this would later result in out-of-memory errors even though the model could fit entirely in GPU memory. By using the above code, I no longer have OOM errors.

\n\n

Note: If the model is too big to fit in GPU memory, this probably won't help!

\n", + "system": "" + }, + { + "instruction": "Tensorflow read images with labels", + "input": "", + "output": "

Using slice_input_producer provides a solution which is much cleaner. Slice Input Producer allows us to create an Input Queue containing arbitrarily many separable values. The snippet from the question would then look like this:

\n\n
def read_labeled_image_list(image_list_file):\n    \"\"\"Reads a .txt file containing paths and labels\n    Args:\n       image_list_file: a .txt file with one /path/to/image per line\n       label: optionally, if set label will be pasted after each line\n    Returns:\n       List with all filenames in file image_list_file\n    \"\"\"\n    f = open(image_list_file, 'r')\n    filenames = []\n    labels = []\n    for line in f:\n        filename, label = line[:-1].split(' ')\n        filenames.append(filename)\n        labels.append(int(label))\n    return filenames, labels\n\ndef read_images_from_disk(input_queue):\n    \"\"\"Consumes a single filename and label as a ' '-delimited string.\n    Args:\n      filename_and_label_tensor: A scalar string tensor.\n    Returns:\n      Two tensors: the decoded image, and the string label.\n    \"\"\"\n    label = input_queue[1]\n    file_contents = tf.read_file(input_queue[0])\n    example = tf.image.decode_png(file_contents, channels=3)\n    return example, label\n\n# Reads paths of images together with their labels\nimage_list, label_list = read_labeled_image_list(filename)\n\nimages = ops.convert_to_tensor(image_list, dtype=dtypes.string)\nlabels = ops.convert_to_tensor(label_list, dtype=dtypes.int32)\n\n# Makes an input queue\ninput_queue = tf.train.slice_input_producer([images, labels],\n                                            num_epochs=num_epochs,\n                                            shuffle=True)\n\nimage, label = read_images_from_disk(input_queue)\n\n# Optional Preprocessing or Data Augmentation\n# tf.image implements most of the standard image augmentation\nimage = preprocess_image(image)\nlabel = preprocess_label(label)\n\n# Optional Image and Label Batching\nimage_batch, label_batch = tf.train.batch([image, label],\n                                          batch_size=batch_size)\n</code></pre>
\n\n

See also the generic_input_producer from the TensorVision examples for full input-pipeline.

\n", + "system": "" + }, + { + "instruction": "How do I start tensorflow docker jupyter notebook", + "input": "", + "output": "

For a Linux host Robert Graves' answer will work, but for Mac OS X or Windows there is more to be done because docker runs in a virtual machine.

\n\n

So to begin launch the docker shell (or any shell if you are using Linux) and run the following command to launch a new TensorFlow container:

\n\n
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh\n
\n\n

Then for Mac OS X and Windows you need to do the following only once:

\n\n
    \n
  1. Open VirtualBox
  2. \n
  3. Click on the docker vm (mine was automatically named \"default\")
  4. \n
  5. Open the settings by clicking settings
  6. \n
  7. In the network settings open the port forwarding dialog
  8. \n
  9. Click the + symbol to add another port and connect a port from your mac to the VM by filling in the dialog as shown below. In this example I chose port 8810 because I run other notebooks using port 8888.\n\"enter
  10. \n
  11. Then open a browser and connect to http://localhost:8810 (or whichever port you set in the host port section).
  12. \n
  13. Make your fancy pants machine learning app!
  14. \n
\n", + "system": "" + }, + { + "instruction": "ImportError: Failed to import any qt binding, Python - Tensorflow", + "input": "", + "output": "

Make sure you have PyQt5 installed. You may open a Python shell and try:

\n\n
import PyQt5\n
\n\n

if it fails then you can install it via:

\n\n
pip install PyQt5\n
\n\n

If you are on macOS or Linux, be aware that you might need to run

\n\n
pip3 install PyQt5\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between CuDNNLSTM and LSTM in Keras?", + "input": "", + "output": "

Why don't you try it out for yourself and see?\nIn my case, training a model with LSTM took 10 minutes 30 seconds.\nAfter simply switching the call from LSTM() to CuDNNLSTM(), training took less than a minute.

\n\n

I also noticed that switching to CuDNNLSTM() speeds up model.evaluate() and model.predict() substantially as well.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to replace or modify gradient?", + "input": "", + "output": "

For TensorFlow 1.7 and TensorFlow 2.0, see the edit below.

\n\n
\n\n

First define your custom gradient:

\n\n
@tf.RegisterGradient(\"CustomGrad\")\ndef _const_mul_grad(unused_op, grad):\n  return 5.0 * grad\n
\n\n

Since you want nothing to happen in the forward pass, override the gradient of an identity operation with your new gradient:

\n\n
g = tf.get_default_graph()\nwith g.gradient_override_map({\"Identity\": \"CustomGrad\"}):\n  output = tf.identity(input, name=\"Identity\")\n
\n\n

Here is a working example with a layer that clips gradients in the backwards pass and does nothing in the forwards pass, using the same method:

\n\n
import tensorflow as tf\n\n@tf.RegisterGradient(\"CustomClipGrad\")\ndef _clip_grad(unused_op, grad):\n  return tf.clip_by_value(grad, -0.1, 0.1)\n\ninput = tf.Variable([3.0], dtype=tf.float32)\n\ng = tf.get_default_graph()\nwith g.gradient_override_map({\"Identity\": \"CustomClipGrad\"}):\n  output_clip = tf.identity(input, name=\"Identity\")\ngrad_clip = tf.gradients(output_clip, input)\n\n# output without gradient clipping in the backwards pass for comparison:\noutput = tf.identity(input)\ngrad = tf.gradients(output, input)\n\nwith tf.Session() as sess:\n  sess.run(tf.global_variables_initializer())\n  print(\"with clipping:\", sess.run(grad_clip)[0])\n  print(\"without clipping:\", sess.run(grad)[0])\n
\n\n
\n\n

Edit for TensorFlow 1.7 and TensorFlow 2.0

\n\n

Since 1.7 there is a new way to redefine the gradient with shorter syntax, which also works with TensorFlow 2.0. It also allows redefining the gradient of multiple operations at the same time. Here are the examples from above, rewritten for TensorFlow 1.7 and TensorFlow 2.0:

\n\n

Layer that scales gradients in the backward pass:

\n\n
@tf.custom_gradient\ndef scale_grad_layer(x):\n  def grad(dy):\n    return 5.0 * dy\n  return tf.identity(x), grad\n
\n\n

Example with a layer that clips gradients in the backward pass:

\n\n
@tf.custom_gradient\ndef clip_grad_layer(x):\n  def grad(dy):\n    return tf.clip_by_value(dy, -0.1, 0.1)\n  return tf.identity(x), grad\n
\n", + "system": "" + }, + { + "instruction": "Custom loss function in Keras", + "input": "", + "output": "

All you have to do is define a function for that, using keras backend functions for calculations. The function must take the true values and the model predicted values.

\n

Now, since I'm not sure what g, q, x and y are in your function, I'll just create a basic example here without worrying about what it means or whether it's actually a useful function:

\n
import keras.backend as K\n\ndef customLoss(yTrue,yPred):\n    return K.sum(K.log(yTrue) - K.log(yPred))\n    \n
\n

All backend functions can be seen here.

\n

After that, compile your model using that function instead of a regular one:

\n
model.compile(loss=customLoss, optimizer = .....)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow crashes with CUBLAS_STATUS_ALLOC_FAILED", + "input": "", + "output": "

For TensorFlow 2.2, none of the other answers worked when the CUBLAS_STATUS_ALLOC_FAILED problem was encountered. I found a solution at https://www.tensorflow.org/guide/gpu:

\n
import tensorflow as tf\ngpus = tf.config.experimental.list_physical_devices('GPU')\nif gpus:\n    try:\n        # Currently, memory growth needs to be the same across GPUs\n        for gpu in gpus:\n            tf.config.experimental.set_memory_growth(gpu, True)\n        logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")\n    except RuntimeError as e:\n        # Memory growth must be set before GPUs have been initialized\n        print(e)\n
\n

I ran this code before any further calculations were made and found that the same code that previously produced the CUBLAS error now worked in the same session. The sample code above is a specific example that sets memory growth across a number of physical GPUs, but it also solves the allocation problem.

\n", + "system": "" + }, + { + "instruction": "CBOW v.s. skip-gram: why invert context and target words?", + "input": "", + "output": "

Here is my oversimplified and rather naive understanding of the difference:

\n

As we know, CBOW learns to predict the word from the context, i.e. it maximizes the probability of the target word by looking at the context. And this happens to be a problem for rare words. For example, given the context yesterday was a really [...] day, a CBOW model will tell you that most probably the word is beautiful or nice. Words like delightful will get much less attention from the model, because it is designed to predict the most probable word. Such rare words are smoothed over a lot of examples with more frequent words.

\n

On the other hand, the skip-gram model is designed to predict the context. Given the word delightful it must understand it and tell us that there is a huge probability that the context is yesterday was really [...] day, or some other relevant context. With skip-gram the word delightful will not try to compete with the word beautiful but instead, delightful+context pairs will be treated as new observations.

\n

UPDATE

\n

Thanks to @0xF for sharing this article

\n
\n

According to Mikolov

\n

Skip-gram: works well with small amount of the training data, represents well even rare words or phrases.

\n

CBOW: several times faster to train than the skip-gram, slightly better accuracy for the frequent words

\n
\n

One more addition to the subject is found here:

\n
\n

In the "skip-gram" mode alternative to "CBOW", rather than averaging\nthe context words, each is used as a pairwise training example. That\nis, in place of one CBOW example such as [predict 'ate' from\naverage('The', 'cat', 'the', 'mouse')], the network is presented with\nfour skip-gram examples [predict 'ate' from 'The'], [predict 'ate'\nfrom 'cat'], [predict 'ate' from 'the'], [predict 'ate' from 'mouse'].\n(The same random window-reduction occurs, so half the time that would\njust be two examples, of the nearest words.)

\n
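The quoted example can be made concrete with a small sketch that builds the CBOW and skip-gram training examples for one target word (purely illustrative; real word2vec training also applies window-size randomization and negative sampling):

```python
# One target word and its context window, as in the quoted example
target = 'ate'
context = ['The', 'cat', 'the', 'mouse']

# CBOW: a single example, predicting the target from the (averaged) context
cbow_examples = [(context, target)]

# Skip-gram: one pairwise example per context word
skipgram_examples = [(word, target) for word in context]

print(cbow_examples)       # [(['The', 'cat', 'the', 'mouse'], 'ate')]
print(skipgram_examples)   # [('The', 'ate'), ('cat', 'ate'), ('the', 'ate'), ('mouse', 'ate')]
```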
\n", + "system": "" + }, + { + "instruction": "TensorFlow: Dst tensor is not initialized", + "input": "", + "output": "

In short, this error message is generated when there is not enough memory to handle the batch size.

\n\n

Expanding on Steven's link (I cannot post comments yet), here are a few tricks to monitor/control memory usage in Tensorflow:

\n\n\n", + "system": "" + }, + { + "instruction": "Error when checking target: expected dense_3 to have shape (3,) but got array with shape (1,)", + "input": "", + "output": "

The problem is with your label-data shape. In a multiclass problem you are predicting the probability of every possible class, so you must provide label data in (N, m) shape, where N is the number of training examples and m is the number of possible classes (3 in your case).

\n\n

Keras expects y-data in (N, 3) shape, not (N,) as you've probably provided, which is why it raises an error.

\n\n

Use e.g. OneHotEncoder to convert your label data to one-hot encoded form.

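 As a sketch of what that conversion produces (using plain NumPy here instead of scikit-learn's OneHotEncoder; the label values are illustrative): ```python import numpy as np # Integer class labels with shape (N,) -- what Keras complains about y = np.array([0, 2, 1, 2]) # One-hot encoded labels with shape (N, 3) -- what the model expects num_classes = 3 y_onehot = np.eye(num_classes)[y] print(y_onehot.shape) # (4, 3) ```   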
\n", + "system": "" + }, + { + "instruction": "What is the meaning of the "None" in model.summary of KERAS?", + "input": "", + "output": "

None means this dimension is variable.

\n\n

The first dimension in a keras model is always the batch size. You don't need fixed batch sizes, unless in very specific cases (for instance, when working with stateful=True LSTM layers).

\n\n

That's why this dimension is often ignored when you define your model. For instance, when you define input_shape=(100,200), actually you're ignoring the batch size and defining the shape of \"each sample\". Internally the shape will be (None, 100, 200), allowing a variable batch size, each sample in the batch having the shape (100,200).

\n\n

The batch size will be then automatically defined in the fit or predict methods.

\n\n
\n\n

Other None dimensions:

\n\n

Not only the batch dimension can be None, but many others as well.

\n\n

For instance, in a 2D convolutional network, where the expected input is (batchSize, height, width, channels), you can have shapes like (None, None, None, 3), allowing variable image sizes.

\n\n

In recurrent networks and in 1D convolutions, you can also make the length/timesteps dimension variable, with shapes like (None, None, featuresOrChannels)

\n", + "system": "" + }, + { + "instruction": "How do I get the weights of a layer in Keras?", + "input": "", + "output": "

If you want to get weights and biases of all layers, you can simply use:

\n\n
for layer in model.layers: print(layer.get_config(), layer.get_weights())\n
\n\n

This will print all information that's relevant.

\n\n

If you want the weights directly returned as numpy arrays, you can use:

\n\n
first_layer_weights = model.layers[0].get_weights()[0]\nfirst_layer_biases  = model.layers[0].get_weights()[1]\nsecond_layer_weights = model.layers[1].get_weights()[0]\nsecond_layer_biases  = model.layers[1].get_weights()[1]\n
\n\n

etc.

\n", + "system": "" + }, + { + "instruction": "What does tf.gfile do in TensorFlow?", + "input": "", + "output": "

For anyone landing here, the following answer was provided (by a googler) on: Why use tensorflow gfile? (for file I/O)

\n\n
\n

The main roles of the tf.gfile module are:

\n \n
    \n
 1. To provide an API that is close to Python's file objects, and 2. To provide an implementation based on TensorFlow's C++ FileSystem API. 
\n \n

The C++ FileSystem API supports multiple file system implementations,\n including local files, Google Cloud Storage (using a gs:// prefix),\n and HDFS (using an hdfs:// prefix). TensorFlow exports these as\n tf.gfile, so that you can use these implementations for saving and\n loading checkpoints, writing TensorBoard logs, and accessing training\n data (among other uses). However, if all of your files are local, you\n can use the regular Python file API without any problem.

\n
\n", + "system": "" + }, + { + "instruction": "What is difference between tf.truncated_normal and tf.random_normal?", + "input": "", + "output": "

 The documentation says it all. For the truncated normal distribution: 

\n
\n

The values are drawn from a normal distribution with specified mean and standard deviation, discarding and re-drawing any samples that are more than two standard deviations from the mean.

\n
\n

 It is probably easiest to understand the difference by plotting the graphs yourself (the %magic line is there because I use a Jupyter notebook): 

\n
import tensorflow as tf\nimport matplotlib.pyplot as plt\n\n%matplotlib inline  \n\nn = 500000\nA = tf.truncated_normal((n,))\nB = tf.random_normal((n,))\nwith tf.Session() as sess:\n    a, b = sess.run([A, B])\n
\n

And now

\n
plt.hist(a, 100, (-4.2, 4.2));\nplt.hist(b, 100, (-4.2, 4.2));\n
\n

\"enter

\n
\n

 The point of using a truncated normal is to overcome saturation of some activation functions like sigmoid (where if the value is too big/small, the neuron stops learning). 

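 As a hedged NumPy sketch of the re-drawing rule quoted above (this mimics the described behaviour; it is not TensorFlow's actual implementation): ```python import numpy as np def truncated_normal(shape, mean=0.0, stddev=1.0, seed=0): # Draw from a normal distribution, then re-draw any samples that fall # more than two standard deviations from the mean rng = np.random.default_rng(seed) samples = rng.normal(mean, stddev, size=shape) mask = np.abs(samples - mean) > 2 * stddev while mask.any(): samples[mask] = rng.normal(mean, stddev, size=mask.sum()) mask = np.abs(samples - mean) > 2 * stddev return samples a = truncated_normal((500000,)) print(a.min(), a.max()) # all values lie within two standard deviations ```   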
\n", + "system": "" + }, + { + "instruction": "Why do we need TensorFlow tf.Graph?", + "input": "", + "output": "

TL;DR: It's unnecessary, but it's a good practice to follow.

\n\n

 Since a default graph is always registered, every op and variable is placed into the default graph.\nThe statement in question (typically with tf.Graph().as_default():), however, creates a new graph and places everything declared inside its scope into this graph.\nIf that graph is the only graph, the statement is redundant. But it's good practice, because if you start to work with many graphs it's easier to understand where ops and vars are placed.\nSince this statement costs you nothing, it's better to write it anyway, just to be sure that if you refactor the code in the future, the operations you defined still belong to the graph you chose initially. 

\n", + "system": "" + }, + { + "instruction": "CUDA_ERROR_OUT_OF_MEMORY in tensorflow", + "input": "", + "output": "

In case it's still relevant for someone, I encountered this issue when trying to run Keras/Tensorflow for the second time, after a first run was aborted. It seems the GPU memory is still allocated, and therefore cannot be allocated again. It was solved by manually ending all python processes that use the GPU, or alternatively, closing the existing terminal and running again in a new terminal window.

\n", + "system": "" + }, + { + "instruction": "In tensorflow what is the difference between tf.add and operator (+)?", + "input": "", + "output": "

There's no difference in precision between a+b and tf.add(a, b). The former translates to a.__add__(b) which gets mapped to tf.add by means of following line in math_ops.py

\n\n

_OverrideBinaryOperatorHelper(gen_math_ops.add, \"add\")

\n\n

The only difference is that node name in the underlying Graph is add instead of Add. You can generally compare things by looking at the underlying Graph representation like this

\n\n
tf.reset_default_graph()\ndtype = tf.int32\na = tf.placeholder(dtype)\nb = tf.placeholder(dtype)\nc = a+b\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

You could also see this directly by inspecting the __add__ method. There's an extra level of indirection because it's a closure, but you can get the underlying function as follows

\n\n
real_function = tf.Tensor.__add__.im_func.func_closure[0].cell_contents\nprint(real_function.__module__ + \".\" + real_function.__name__)\nprint(tf.add.__module__ + \".\" + tf.add.__name__)\n
\n\n

And you'll see output below which means that they call same underlying function

\n\n
tensorflow.python.ops.gen_math_ops.add\ntensorflow.python.ops.gen_math_ops.add\n
\n\n

 You can see from tf.Tensor.OVERLOADABLE_OPERATORS that the following Python special methods are potentially overloaded by the appropriate TensorFlow versions 

\n\n
{'__abs__',\n '__add__',\n '__and__',\n '__div__',\n '__floordiv__',\n '__ge__',\n '__getitem__',\n '__gt__',\n '__invert__',\n '__le__',\n '__lt__',\n '__mod__',\n '__mul__',\n '__neg__',\n '__or__',\n '__pow__',\n '__radd__',\n '__rand__',\n '__rdiv__',\n '__rfloordiv__',\n '__rmod__',\n '__rmul__',\n '__ror__',\n '__rpow__',\n '__rsub__',\n '__rtruediv__',\n '__rxor__',\n '__sub__',\n '__truediv__',\n '__xor__'}\n
\n\n

 Those methods are described in section 3.3.7 of the Python reference: emulating numeric types. Note that the Python data model does not provide a way to overload the assignment operator =, so assignment always uses the native Python implementation. 

\n", + "system": "" + }, + { + "instruction": "How does reduce_sum() work in tensorflow?", + "input": "", + "output": "

x has a shape of (2, 3) (two rows and three columns):

\n\n
1 1 1\n1 1 1\n
\n\n

By doing tf.reduce_sum(x, 0) the tensor is reduced along the first dimension (rows), so the result is [1, 1, 1] + [1, 1, 1] = [2, 2, 2].

\n\n

By doing tf.reduce_sum(x, 1) the tensor is reduced along the second dimension (columns), so the result is [1, 1] + [1, 1] + [1, 1] = [3, 3].

\n\n

By doing tf.reduce_sum(x, [0, 1]) the tensor is reduced along BOTH dimensions (rows and columns), so the result is 1 + 1 + 1 + 1 + 1 + 1 = 6 or, equivalently, [1, 1, 1] + [1, 1, 1] = [2, 2, 2], and then 2 + 2 + 2 = 6 (reduce along rows, then reduce the resulted array).

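 The same reductions can be checked with NumPy, whose axis argument behaves the same way here (a NumPy sketch, not TensorFlow code): ```python import numpy as np x = np.ones((2, 3)) # two rows, three columns of ones print(x.sum(axis=0)) # [2. 2. 2.] -- collapse the rows print(x.sum(axis=1)) # [3. 3.] -- collapse the columns print(x.sum(axis=(0, 1))) # 6.0 -- collapse both dimensions ```   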
\n", + "system": "" + }, + { + "instruction": "TypeError: 'Tensor' object does not support item assignment in TensorFlow", + "input": "", + "output": "

 In general, a TensorFlow tensor object is not assignable*, so you cannot use it on the left-hand side of an assignment. 

\n

The easiest way to do what you're trying to do is to build a Python list of tensors, and tf.stack() them together at the end of the loop:

\n
 outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state,\n                          sequence_length=real_length)\n\noutput_list = []\n\ntensor_shape = outputs.get_shape()\nfor step_index in range(tensor_shape[0]):\n    word_index = self.x[:, step_index]\n    word_index = tf.reshape(word_index, [-1, 1])\n    index_weight = tf.gather(word_weight, word_index)\n    output_list.append(tf.multiply(outputs[step_index, :, :], index_weight))\n\noutputs = tf.stack(output_list)\n 
\n
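 The list-then-stack pattern itself can be sketched with NumPy (illustrative only; the per-step results and shapes are made up): ```python import numpy as np # Instead of assigning into a preallocated tensor, collect per-step # results in a Python list and stack them at the end. steps = [] for step_index in range(4): steps.append(np.full((2, 3), step_index)) # one (2, 3) result per step stacked = np.stack(steps) # shape (4, 2, 3) print(stacked.shape) # (4, 2, 3) ```   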
\n

\u00a0* With the exception of tf.Variable objects, using the Variable.assign() etc. methods. However, rnn.rnn() likely returns a tf.Tensor object that does not support this method.

\n", + "system": "" + }, + { + "instruction": "TensorFlow: training on my own image", + "input": "", + "output": "

If you are interested in how to input your own data in TensorFlow, you can look at this tutorial.
\nI've also written a guide with best practices for CS230 at Stanford here.

\n\n
\n\n

New answer (with tf.data) and with labels

\n\n

With the introduction of tf.data in r1.4, we can create a batch of images without placeholders and without queues. The steps are the following:

\n\n
    \n
   1. Create a list containing the filenames of the images and a corresponding list of labels   2. Create a tf.data.Dataset reading these filenames and labels   3. Preprocess the data   4. Create an iterator from the tf.data.Dataset which will yield the next batch 
\n\n

The code is:

\n\n
# step 1\nfilenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])\nlabels = tf.constant([0, 1, 0, 1])\n\n# step 2: create a dataset returning slices of `filenames`\ndataset = tf.data.Dataset.from_tensor_slices((filenames, labels))\n\n# step 3: parse every image in the dataset using `map`\ndef _parse_function(filename, label):\n    image_string = tf.read_file(filename)\n    image_decoded = tf.image.decode_jpeg(image_string, channels=3)\n    image = tf.cast(image_decoded, tf.float32)\n    return image, label\n\ndataset = dataset.map(_parse_function)\ndataset = dataset.batch(2)\n\n# step 4: create iterator and final input tensor\niterator = dataset.make_one_shot_iterator()\nimages, labels = iterator.get_next()\n
\n\n

Now we can run directly sess.run([images, labels]) without feeding any data through placeholders.

\n\n
\n\n

Old answer (with TensorFlow queues)

\n\n

To sum it up you have multiple steps:

\n\n
    \n
   1. Create a list of filenames (ex: the paths to your images)   2. Create a TensorFlow filename queue   3. Read and decode each image, resize them to a fixed size (necessary for batching)   4. Output a batch of these images 
\n\n
\n\n

The simplest code would be:\n

\n\n
# step 1\nfilenames = ['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg']\n\n# step 2\nfilename_queue = tf.train.string_input_producer(filenames)\n\n# step 3: read, decode and resize images\nreader = tf.WholeFileReader()\nfilename, content = reader.read(filename_queue)\nimage = tf.image.decode_jpeg(content, channels=3)\nimage = tf.cast(image, tf.float32)\nresized_image = tf.image.resize_images(image, [224, 224])\n\n# step 4: Batching\nimage_batch = tf.train.batch([resized_image], batch_size=8)\n
\n", + "system": "" + }, + { + "instruction": "Best way to flatten a 2D tensor containing a vector in TensorFlow?", + "input": "", + "output": "

Both tf.reshape(w, [-1]) and tf.squeeze(w) are \"cheap\" in that they operate only on the metadata (i.e. the shape) of the given tensor, and don't modify the data itself. Of the two tf.reshape() has slightly simpler logic internally, but the performance of the two should be indistinguishable.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: Using Adam optimizer", + "input": "", + "output": "

The AdamOptimizer class creates additional variables, called \"slots\", to hold values for the \"m\" and \"v\" accumulators.

\n\n

See the source here if you're curious, it's actually quite readable:\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/adam.py#L39 . Other optimizers, such as Momentum and Adagrad use slots too.

\n\n

These variables must be initialized before you can train a model.

\n\n

The normal way to initialize variables is to call tf.initialize_all_variables() which adds ops to initialize the variables present in the graph when it is called.

\n\n

 (Aside: contrary to what its name suggests, initialize_all_variables() does not initialize anything; it only adds ops that will initialize the variables when run.) 

\n\n

What you must do is call initialize_all_variables() after you have added the optimizer:

\n\n
...build your model...\n# Add the optimizer\ntrain_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\n# Add the ops to initialize variables.  These will include \n# the optimizer slots added by AdamOptimizer().\ninit_op = tf.initialize_all_variables()\n\n# launch the graph in a session\nsess = tf.Session()\n# Actually intialize the variables\nsess.run(init_op)\n# now train your model\nfor ...:\n  sess.run(train_op)\n
\n", + "system": "" + }, + { + "instruction": "How to use stop_gradient in Tensorflow", + "input": "", + "output": "

tf.stop_gradient provides a way to not compute gradient with respect to some variables during back-propagation.

\n

 For example, in the code below, we have three variables, w1, w2, w3 and input x. The loss is square(x.dot(w1) - x.dot(w2 * w3)). We want to minimize this loss with respect to w1 but want to keep w2 and w3 fixed. To achieve this we can just put tf.stop_gradient(tf.matmul(x, w2*w3)). 

\n

 In the figure below, I plotted how w1, w2, and w3 evolve from their initial values as a function of training iterations. It can be seen that w2 and w3 remain fixed while w1 changes until it becomes equal to w2 * w3. 

\n

An image showing that w1 only learns but not w2 and w3:

\n

\"An

\n
import tensorflow as tf\nimport numpy as np\n\nw1 = tf.get_variable("w1", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nw2 = tf.get_variable("w2", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nw3 = tf.get_variable("w3", shape=[5, 1], initializer=tf.truncated_normal_initializer())\nx = tf.placeholder(tf.float32, shape=[None, 5], name="x")\n\n\na1 = tf.matmul(x, w1)\na2 = tf.matmul(x, w2*w3)\na2 = tf.stop_gradient(a2)\nloss = tf.reduce_mean(tf.square(a1 - a2))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\ngradients = optimizer.compute_gradients(loss)\ntrain_op = optimizer.apply_gradients(gradients)\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'numpy' has no attribute 'typeDict'", + "input": "", + "output": "

I was trying to use the package pyensembl and ran into this same issue. I was able to work around it for now with

\n
pip install numpy==1.21\n
\n

 This should suffice until some of these less actively maintained packages are able to update to the new API. 

\n", + "system": "" + }, + { + "instruction": "cannot import name '_registerMatType' from 'cv2.cv2'", + "input": "", + "output": "

 The same thing occurred to me yesterday when I used Colab. A possible reason may be that the version of opencv-python (4.1.2.30) does not match opencv-python-headless (4.5.5.62), or the latest version 4.5.5 may have something wrong... 

\n

 I uninstalled opencv-python-headless==4.5.5.62 and installed 4.1.2.30, which fixed it. 

\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'libnvinfer.so.6'", + "input": "", + "output": "

This is a warning, not an error. You can still use TensorFlow. The shared libraries libnvinfer and libnvinfer_plugin are optional and required only if you are using nvidia's TensorRT capabilities.

\n

To suppress this and all other warnings, set the environment variable TF_CPP_MIN_LOG_LEVEL="2".

\n", + "system": "" + }, + { + "instruction": "How to get allocated GPU spec in Google Colab", + "input": "", + "output": "

Since you can run bash command in colab, just run !nvidia-smi:\n\"enter

\n", + "system": "" + }, + { + "instruction": "Tensorboard not found as magic function in jupyter", + "input": "", + "output": "

UPDATE

\n

For newer TF versions (tensorflow>=1.14.0 & tensorflow != 2.0.0a0 - newer than TF2.0-alpha) load the extension like this

\n
%load_ext tensorboard\n
\n

OLD ANSWER

\n

The extension needs to be loaded first:

\n
%load_ext tensorboard.notebook\n%tensorboard --logdir {logs_base_dir}\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Custom TFLite java.lang.NullPointerException: Cannot allocate memory for the interpreter", + "input": "", + "output": "

 This is due to the computer you are using: make sure you have enough RAM to allocate for the program, or try increasing your swap file size. 

\n", + "system": "" + }, + { + "instruction": "what is the difference between Flatten() and GlobalAveragePooling2D() in keras", + "input": "", + "output": "

 That both seem to work doesn't mean they do the same thing. 

\n\n

Flatten will take a tensor of any shape and transform it into a one dimensional tensor (plus the samples dimension) but keeping all values in the tensor. For example a tensor (samples, 10, 20, 1) will be flattened to (samples, 10 * 20 * 1).

\n\n

GlobalAveragePooling2D does something different. It applies average pooling on the spatial dimensions until each spatial dimension is one, and leaves other dimensions unchanged. In this case values are not kept as they are averaged. For example a tensor (samples, 10, 20, 1) would be output as (samples, 1, 1, 1), assuming the 2nd and 3rd dimensions were spatial (channels last).

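 The shape difference can be sketched with NumPy (shapes only, not Keras code; the (samples, 10, 20, 1) tensor is the example from above): ```python import numpy as np # A batch of 2 samples, each 10 x 20 with 1 channel x = np.arange(2 * 10 * 20 * 1, dtype=float).reshape(2, 10, 20, 1) # Flatten: keep every value, merge all non-batch dimensions flat = x.reshape(x.shape[0], -1) print(flat.shape) # (2, 200) # Global average pooling: average over the spatial dims (height, width), # leaving one averaged value per channel pooled = x.mean(axis=(1, 2)) print(pooled.shape) # (2, 1) ```   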
\n", + "system": "" + }, + { + "instruction": "TensorFlow: "Attempting to use uninitialized value" in variable initialization", + "input": "", + "output": "

Run this:

\n\n
init = tf.global_variables_initializer()\nsess.run(init)\n
\n\n

Or (depending on the version of TF that you have):

\n\n
init = tf.initialize_all_variables()\nsess.run(init)\n
\n", + "system": "" + }, + { + "instruction": "Cannot dlopen some GPU libraries. Skipping registering GPU devices", + "input": "", + "output": "

Before you do anything more drastic, maybe you just need to set environment variables CUDNN_PATH and/or LD_LIBRARY_PATH.

\n

Check with:

\n
echo $CUDNN_PATH       # should exist and give a good path\nls $CUDNN_PATH         # should contain stuff like a lib subdir with libcudnn .so files\necho $LD_LIBRARY_PATH  # should exist and contain CUDNN_PATH\n
\n

If you need changes, I set them with:

\n
export CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))\nexport LD_LIBRARY_PATH=${CUDNN_PATH}/lib\n
\n

But there are other ways, like @User_Rebo's:

\n
export CUDNN_PATH="$HOME/.local/lib/python3.10/site-packages/nvidia/cudnn"\nexport LD_LIBRARY_PATH="$CUDNN_PATH/lib":"/usr/local/cuda/lib64"\n
\n

Some people set these in their .bashrc; that's an easy way to not forget, but I personally prefer to set them in each session I want to use them.

\n", + "system": "" + }, + { + "instruction": "How to replace (or insert) intermediate layer in Keras model?", + "input": "", + "output": "

The following function allows you to insert a new layer before, after or to replace each layer in the original model whose name matches a regular expression, including non-sequential models such as DenseNet or ResNet.

\n\n
import re\nfrom keras.models import Model\n\ndef insert_layer_nonseq(model, layer_regex, insert_layer_factory,\n                        insert_layer_name=None, position='after'):\n\n    # Auxiliary dictionary to describe the network graph\n    network_dict = {'input_layers_of': {}, 'new_output_tensor_of': {}}\n\n    # Set the input layers of each layer\n    for layer in model.layers:\n        for node in layer._outbound_nodes:\n            layer_name = node.outbound_layer.name\n            if layer_name not in network_dict['input_layers_of']:\n                network_dict['input_layers_of'].update(\n                        {layer_name: [layer.name]})\n            else:\n                network_dict['input_layers_of'][layer_name].append(layer.name)\n\n    # Set the output tensor of the input layer\n    network_dict['new_output_tensor_of'].update(\n            {model.layers[0].name: model.input})\n\n    # Iterate over all layers after the input\n    model_outputs = []\n    for layer in model.layers[1:]:\n\n        # Determine input tensors\n        layer_input = [network_dict['new_output_tensor_of'][layer_aux] \n                for layer_aux in network_dict['input_layers_of'][layer.name]]\n        if len(layer_input) == 1:\n            layer_input = layer_input[0]\n\n        # Insert layer if name matches the regular expression\n        if re.match(layer_regex, layer.name):\n            if position == 'replace':\n                x = layer_input\n            elif position == 'after':\n                x = layer(layer_input)\n            elif position == 'before':\n                pass\n            else:\n                raise ValueError('position must be: before, after or replace')\n\n            new_layer = insert_layer_factory()\n            if insert_layer_name:\n                new_layer.name = insert_layer_name\n            else:\n                new_layer.name = '{}_{}'.format(layer.name, \n                                                new_layer.name)\n         
    x = new_layer(x)\n            print('New layer: {} Old layer: {} Type: {}'.format(new_layer.name,\n                                                            layer.name, position))\n            if position == 'before':\n                x = layer(x)\n        else:\n            x = layer(layer_input)\n\n        # Set new output tensor (the original one, or the one of the inserted\n        # layer)\n        network_dict['new_output_tensor_of'].update({layer.name: x})\n\n        # Save tensor in output list if it is output in initial model\n        if layer.name in model.output_names:\n            model_outputs.append(x)\n\n    return Model(inputs=model.inputs, outputs=model_outputs)\n\n 
\n\n

The difference with respect to the simpler case of a purely sequential model is that before iterating over the layers to find the key layer, you first parse the graph and store the input layers of each layer in an auxiliary dictionary. Then, as you iterate over the layers, you also store the new output tensor of each layer, which is used to determine the input layers of each layer, when building the new model.

\n\n

A use case would be the following, where a Dropout layer is inserted after each activation layer of ResNet50:

\n\n
 from keras.applications.resnet50 import ResNet50\nfrom keras.layers import Dropout\nfrom keras.models import load_model\n\nmodel = ResNet50()\ndef dropout_layer_factory():\n    return Dropout(rate=0.2, name='dropout')\nmodel = insert_layer_nonseq(model, '.*activation.*', dropout_layer_factory)\n\n# Fix possible problems with new model\nmodel.save('temp.h5')\nmodel = load_model('temp.h5')\n\nmodel.summary()\n 
\n", + "system": "" + }, + { + "instruction": "ImportError: libcublas.so.9.0: cannot open shared object file", + "input": "", + "output": "

 You will need to update your LD_LIBRARY_PATH, so that it points to the /usr/local/cuda-9.0/lib64.\nAdd the following line to your .bashrc file (or the startup file of whichever shell you use) 

\n\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.0/lib64/\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow not running on GPU", + "input": "", + "output": "

To check which devices are available to TensorFlow you can use this and see if the GPU cards are available:

\n
from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n
\n
\n

More info

\n

There are also C++ logs available controlled by the TF_CPP_MIN_VLOG_LEVEL env variable, e.g.:

\n
import os\nos.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"\n
\n

should allow them to be printed when running import tensorflow as tf.

\n

You should see this kind of logs if you use GPU-enabled tensorflow with proper access to the GPU machine:

\n
successfully opened CUDA library libcublas.so.*.* locally\nsuccessfully opened CUDA library libcudnn.so.*.*  locally\nsuccessfully opened CUDA library libcufft.so.*.*  locally\n
\n

On the other hand, if there are no CUDA libraries in the system / container, you will see:

\n
Could not find cuda drivers on your machine, GPU will not be used.\n
\n

 and where CUDA is installed but there is no GPU physically available, TF will import cleanly and only raise an error later, when you run device_lib.list_local_devices(), with this: 

\n
failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\n\n
\n", + "system": "" + }, + { + "instruction": "What is the mathematics behind the "smoothing" parameter in TensorBoard's scalar graphs?", + "input": "", + "output": "

ORIGINAL ANSWER

\n

 It is called an exponential moving average; below is a code explanation of how it is computed. 

\n

Assuming all the real scalar values are in a list called scalars the smoothing is applied as follows:

\n
 from typing import List\n\ndef smooth(scalars: List[float], weight: float) -> List[float]:  # Weight between 0 and 1\n    last = scalars[0]  # First value in the plot (first timestep)\n    smoothed = list()\n    for point in scalars:\n        smoothed_val = last * weight + (1 - weight) * point  # Calculate smoothed value\n        smoothed.append(smoothed_val)                        # Save it\n        last = smoothed_val                                  # Anchor the last smoothed value\n        \n    return smoothed\n 
\n
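 For instance, running this smoothing over a short series (the function is repeated here so the example is self-contained): ```python def smooth(scalars, weight): # Same exponential moving average as above last = scalars[0] smoothed = [] for point in scalars: smoothed_val = last * weight + (1 - weight) * point smoothed.append(smoothed_val) last = smoothed_val return smoothed print(smooth([1.0, 2.0, 3.0], 0.5)) # [1.0, 1.5, 2.25] ``` Each output value is halfway (with weight 0.5) between the previous smoothed value and the new raw point, which is why the curve lags behind sudden changes.   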

UPDATED ANSWER

\n

As @SaPropper correctly pointed out, TensorBoard now includes the debiasing factor.

\n", + "system": "" + }, + { + "instruction": "Convert between NHWC and NCHW in TensorFlow", + "input": "", + "output": "

All you need to do is a permutation of the dimensions from NHWC to NCHW (or the contrary).

\n

The meaning of each letter might help understand:

\n\n
\n

From NHWC to NCHW

\n

The image shape is (N, H, W, C) and we want the output to have shape (N, C, H, W). Therefore we need to apply tf.transpose with a well chosen permutation perm.

\n
\n

The returned tensor's dimension i will correspond to the input dimension perm[i]

\n
\n
perm[0] = 0  # output dimension 0 will be 'N', which was dimension 0 in the input\nperm[1] = 3  # output dimension 1 will be 'C', which was dimension 3 in the input\nperm[2] = 1  # output dimension 2 will be 'H', which was dimension 1 in the input\nperm[3] = 2  # output dimension 3 will be 'W', which was dimension 2 in the input\n
\n

In practice:

\n
images_nhwc = tf.placeholder(tf.float32, [None, 200, 300, 3])  # input batch\nout = tf.transpose(images_nhwc, [0, 3, 1, 2])\nprint(out.get_shape())  # the shape of out is [None, 3, 200, 300]\n
\n
\n

From NCHW to NHWC

\n

The image shape is (N, C, H, W) and we want the output to have shape (N, H, W, C). Therefore we need to apply tf.transpose with a well chosen permutation perm.

\n
\n

The returned tensor's dimension i will correspond to the input dimension perm[i]

\n
\n
perm[0] = 0  # output dimension 0 will be 'N', which was dimension 0 in the input\nperm[1] = 2  # output dimension 1 will be 'H', which was dimension 2 in the input\nperm[2] = 3  # output dimension 2 will be 'W', which was dimension 3 in the input\nperm[3] = 1  # output dimension 3 will be 'C', which was dimension 1 in the input\n
\n

In practice:

\n
images_nchw = tf.placeholder(tf.float32, [None, 3, 200, 300])  # input batch\nout = tf.transpose(images_nchw, [0, 2, 3, 1])\nprint(out.get_shape())  # the shape of out is [None, 200, 300, 3]\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to get a tensor by name?", + "input": "", + "output": "

There is a function tf.Graph.get_tensor_by_name(). For instance:

\n\n
import tensorflow as tf\n\nc = tf.constant([[1.0, 2.0], [3.0, 4.0]])\nd = tf.constant([[1.0, 1.0], [0.0, 1.0]])\ne = tf.matmul(c, d, name='example')\n\nwith tf.Session() as sess:\n    test = sess.run(e)\n    print(e.name)  # example:0\n    test = tf.get_default_graph().get_tensor_by_name(\"example:0\")\n    print(test)  # Tensor(\"example:0\", shape=(2, 2), dtype=float32)\n
\n", + "system": "" + }, + { + "instruction": "How to install TensorFlow on Windows?", + "input": "", + "output": "
\n

How to install TensorFlow and to use it under Windows?

\n
\n\n

Updated on 8/4/16

\n\n

Windows 10 now has a Ubuntu Bash environment, AKA Bash on Ubuntu on Windows, available as a standard option (as opposed to Insider Preview updates for developers). (StackOverflow tag wsl) This option came with the Windows 10 anniversary update (Version 1607) released on 8/2/2016. This allows the use of apt-get to install software packages such as Python and TensorFlow.

\n\n

Note: Bash on Ubuntu on Windows does not have access to the GPU, so all of the GPU options for installing TensorFlow will not work.

\n\n

The dated installation instructions for Bash on Ubuntu on Windows are basically correct, but only these steps are necessary:
\nPrerequisites
\nEnable the Windows Subsystem for Linux feature (GUI)
\nReboot when prompted
\nRun Bash on Windows

\n\n

Steps no longer needed:
\nTurn on Developer Mode
\nEnable the Windows Subsystem for Linux feature (command-line)

\n\n

Then install TensorFlow using apt-get

\n\n
sudo apt-get install python3-pip python3-dev\nsudo pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl \n
\n\n

and now test TensorFlow

\n\n
$ python3\n...\n>>> import tensorflow as tf\n>>> hello = tf.constant('Hello, TensorFlow!')\n>>> sess = tf.Session()\n>>> print(sess.run(hello))\nHello, TensorFlow!\n>>> a = tf.constant(10)\n>>> b = tf.constant(32)\n>>> print(sess.run(a + b))\n42\n>>> exit()\n
\n\n

and run an actual neural network

\n\n
python3 -m tensorflow.models.image.mnist.convolutional\n
\n\n

Earlier Answer

\n\n

After learning about the developer preview of Bash on Windows.

\n\n

See Playing with TensorFlow on Windows by Scott Hanselman which uses Bash on Windows 10

\n\n

Original Answer

\n\n

Bazel is the problem

\n\n

TensorFlow is not made with build automation tools such as make, but with Google's in-house build tool Bazel. Bazel only works on systems based on Unix such as Linux and OS X.

\n\n

Since the current published/known means to build TensorFlow uses Bazel and Bazel does not work on Windows, one can not install or run TensorFlow natively on Windows.

\n\n

From Bazel FAQ

\n\n
\n

What about Windows?

\n \n

Due to its UNIX heritage, porting Bazel to Windows is significant\n work. For example, Bazel uses symlinks extensively, which has varying\n levels of support across Windows versions.

\n \n

We are currently actively working on improving Windows support, but\n it's still ways from being usable.

\n
\n\n

Status

\n\n

See: TensorFlow issue #17
\nSee: Bazel issue #276

\n\n

Solutions

\n\n

The solutions are listed in order of complexity and work needed, ranging from about an hour to options that may not even work.

\n\n
    \n
  1. Docker
    \n~ 1 hour
  2. \n
\n\n

Docker installation

\n\n

Docker is a system to build self contained versions of a Linux operating system running on your machine. When you install and run TensorFlow via Docker it completely isolates the installation from pre-existing packages on your machine.

\n\n

Also look at TensorFlow - which Docker image to use?

\n\n
    \n
  1. OS X
    \n~ 1 hour
  2. \n
\n\n

If you have a current Mac running OS X then see: Installation for Mac OS X

\n\n
    \n
  1. Linux
  2. \n
\n\n

The recommended Linux system tends to be Ubuntu 14.04 LTS (Download page).

\n\n

a. Virtual Machine - Hardware Virtualization - Full Virtualization
\n ~ 3 hours

\n\n

Download and install a virtual machine such as the commercial VMware or the free Virtual Box, after which you can install Linux and then install TensorFlow.

\n\n

When you go to install TensorFlow you will be using Pip - Python's package management system. Visual Studio users should think NuGet. The packages are known as wheels.

\n\n

See: Pip Installation

\n\n

If you need to build from the source then see: Installing From Sources
\n~ 4 hours

\n\n

Note: If you plan on using a Virtual Machine and have never done so before, consider using the Docker option instead, since Docker is the Virtual Machine, OS and TensorFlow all packaged together.

\n\n

b. Dual boot
\n ~ 3 hours

\n\n

If you want to run TensorFlow on the same machine that you have Windows and make use of the GPU version then you will most likely have to use this option as running on a hosted Virtual Machine, Type 2 hypervisor, will not allow you access to the GPU.

\n\n
    \n
  1. Remote machine
    \n~ 4 hours
  2. \n
\n\n

If you have remote access to another machine that you can install the Linux OS and TensorFlow software on and allow remote connections to, then you can use your Windows machine to present the remote machine as an application running on Windows.

\n\n
    \n
  1. Cloud Service
    \nI have no experience with this. Please edit answer if you know.
  2. \n
\n\n

Cloud services such as AWS are being used.

\n\n

From TensorFlow Features

\n\n
\n

Want to run the model as a service in the cloud?\n Containerize with Docker and TensorFlow just works.

\n
\n\n

From Docker

\n\n
\n

Running Docker on AWS provides a highly reliable, low-cost way to\n quickly build, ship, and run distributed applications at scale. Deploy\n Docker using AMIs from the AWS Marketplace.

\n
\n\n
    \n
  1. Wait for Bazel to work on Windows.
  2. \n
\n\n

Currently it appears the only holdup is Bazel; however, Bazel's roadmap lists that working Windows support should be available this year.

\n\n

There are two features listed for Windows:

\n\n
2016\u201102  Bazel can bootstrap itself on Windows without requiring admin privileges.  \n\n2016\u201112  Full Windows support for Android: Android feature set is identical for Windows and Linux/OS X.\n
\n\n
    \n
  1. Build TensorFlow by hand.
    \nA few days or more depending on your skill level. I gave up on this one; too many subprojects to build and files to locate.
  2. \n
\n\n

Remember that Bazel is only used to build TensorFlow. If you get the commands Bazel runs and the correct source code and libraries you should be able to build TensorFlow on Windows. See: How do I get the commands executed by Bazel.

\n\n

While I have not researched this more, you can look at the continuous integration info for the needed files and info on how they build it for testing. (Readme) (site)

\n\n
    \n
  1. Build Bazel on Windows
    \nA few days or more depending on your skill level. I gave up on this one also; could not find the necessary source files needed for Windows.
  2. \n
\n\n

There is a public experimental source code version of Bazel that bootstraps on Windows. You may be able to leverage this into getting Bazel to work on Windows, etc.

\n\n

Also these solutions require the use of Cygwin or MinGW which adds another layer of complexity.

\n\n
    \n
  1. Use alternative build system such as Make
    \nIf you get this one to work I would like to see it on GitHub.
  2. \n
\n\n

This currently does not exist for TensorFlow. It is a feature request.

\n\n

See: TensorFlow issue 380

\n\n
    \n
  1. Cross Build
    \nIf you get this one to work I would like to see it on GitHub.
  2. \n
\n\n

You build TensorFlow on Linux using Bazel but change the build process to output a wheel that can be installed on Windows. This will require detailed knowledge of Bazel to change the configuration, and locating the source code and libraries that work with Windows. An option I would only suggest as a last resort. It may not even be possible.

\n\n
    \n
  1. Run on the new Windows Subsystem for Linux.
  2. \n
\n\n

See: Windows Subsystem for Linux Overview

\n\n

You will know as much as I do by reading the referenced article.

\n\n
\n

Can I use Bazel for Windows for production use?

\n
\n\n

Since it is experimental software I would not use on a production machine.

\n\n

Remember that you only need Bazel to build TensorFlow. So use the experimental code on a non production machine to build the wheel, then install the wheel on a production machine. See: Pip Installation

\n\n

TLDR;

\n\n

Currently I have several versions for learning. Most use a VMWare 7.1 Workstation to host Ubuntu 14.04 LTS or Ubuntu 15 or Debian. I also have one dual boot of Ubuntu 14.04 LTS on my Windows machine to access the GPU as the machine with VMware does not have the proper GPU. I would recommend that you give these machines at least 8G of memory either as RAM or RAM and swap space as I have run out of memory a few times.

\n", + "system": "" + }, + { + "instruction": "FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated", + "input": "", + "output": "

This might or might not be your case, but the same warning is also spit out from h5py package:

\n\n
\n

/home/user/bin/conda3/lib/python3.6/site-packages/h5py/__init__.py:34:\n FutureWarning: Conversion of the second argument of issubdtype from\n float to np.floating is deprecated. In future, it will be treated\n as np.float64 == np.dtype(float).type. from ._conv import\n register_converters as _register_converters

\n
\n\n

For anyone coming here with this problem, it is a known h5py issue, introduced with numpy 1.14. As stated by the devs:

\n\n
\n

You can ignore the warning, it's not going to cause any issues at the\n moment, but you should upgrade to the next release of h5py when it\n becomes available.

\n
\n\n

... so it's harmless. The fix has just been merged to master. But until the update is released, the workaround is to downgrade numpy to a previous version:

\n\n
pip install numpy==1.13.0\n
\n\n

Update: h5py has released the RC build with the fix. The following command should do it:

\n\n
pip install h5py==2.8.0rc1\n
\n\n

Update (FINAL): there's a full-fledged release now. So you can simply run:

\n\n
pip install --upgrade h5py\n
\n", + "system": "" + }, + { + "instruction": "How to downgrade tensorflow, multiple versions possible?", + "input": "", + "output": "

Pip allows you to specify the version:

\n\n

pip install tensorflow==1.1

\n", + "system": "" + }, + { + "instruction": "Prevent TensorFlow from accessing the GPU?", + "input": "", + "output": "

Have a look to this question or this answer.

\n\n

To summarise you can add this piece of code:

\n\n
import os\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\nimport tensorflow as tf\n
\n\n

Playing with the CUDA_VISIBLE_DEVICES environment variable is one of, if not the, way to go whenever you have GPU-tensorflow installed and you don't want to use any GPUs.

\n\n
\n

You want to either export CUDA_VISIBLE_DEVICES= or alternatively use a virtualenv with a non-GPU installation of TensorFlow.

\n
\n", + "system": "" + }, + { + "instruction": "No module named tensorflow in jupyter", + "input": "", + "output": "

If you installed a TensorFlow as it said in official documentation: https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview

\n\n

I mean: you created an environment called tensorflow and tested your installation in python, but TensorFlow cannot be imported in jupyter. Then you have to install jupyter in your tensorflow environment too:

\n\n
conda install jupyter notebook\n
\n\n

After that, I ran jupyter, and it could import TensorFlow too:

\n\n
jupyter notebook\n
\n", + "system": "" + }, + { + "instruction": "How to get the dimensions of a tensor (in TensorFlow) at graph construction time?", + "input": "", + "output": "

I see most people confused about tf.shape(tensor) and tensor.get_shape().\nLet's make it clear:

\n\n
    \n
  1. tf.shape
  2. \n
\n\n

tf.shape is used for dynamic shapes. If your tensor's shape is changeable, use it. \nAn example: the input is an image with a changeable width and height, and we want to resize it to half of its size; then we can write something like:
\nnew_height = tf.shape(image)[0] / 2

\n\n
    \n
  1. tensor.get_shape
  2. \n
\n\n

tensor.get_shape is used for fixed shapes, which means the tensor's shape can be deduced in the graph.

\n\n

Conclusion:\ntf.shape can be used almost anywhere, but t.get_shape only for shapes that can be deduced from the graph.

\n", + "system": "" + }, + { + "instruction": "How to convert numpy arrays to standard TensorFlow format?", + "input": "", + "output": "

You can use tf.convert_to_tensor():

\n\n
import tensorflow as tf\nimport numpy as np\n\ndata = [[1,2,3],[4,5,6]]\ndata_np = np.asarray(data, np.float32)\n\ndata_tf = tf.convert_to_tensor(data_np, np.float32)\n\nsess = tf.InteractiveSession()  \nprint(data_tf.eval())\n\nsess.close()\n
\n\n

Here's a link to the documentation for this method:

\n\n

https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor

\n", + "system": "" + }, + { + "instruction": "How to write a custom loss function in Tensorflow?", + "input": "", + "output": "

We need to write down the loss function. For example, we can use basic mean square error as our loss function for predicted y and target y_:

\n
 loss_mse = 1/n(Sum((y-y_)^2))\n
\n

There are basic functions for tensors like tf.add(x,y), tf.subtract(x,y) (tf.sub in older versions), tf.square(x), tf.reduce_sum(x), etc.

\n

Then we can define our loss function in Tensorflow like:

\n
cost = tf.reduce_mean(tf.square(tf.subtract(y, y_)))\n
\n

Note: y and y_ are tensors.
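As a sanity check on the formula, the same mean squared error in plain Python (no TensorFlow; the numbers are made up for illustration):

```python
y  = [1.0, 2.0, 3.0]   # predictions
y_ = [1.0, 2.0, 5.0]   # targets
mse = sum((a - b) ** 2 for a, b in zip(y, y_)) / len(y)  # (0 + 0 + 4) / 3
```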

\n

Moreover, we can define any other loss functions if we can write down the equations. For some training operators (minimizers), the loss function should satisfy some conditions (smooth, differentiable ...).

\n

In a word: TensorFlow defines arrays, constants and variables as tensors, defines calculations using tf functions, and uses a session to run through the graph. We can define whatever we like and run it in the end.

\n", + "system": "" + }, + { + "instruction": "Count number of "True" values in boolean Tensor", + "input": "", + "output": "

You can cast the values to floats and compute the sum on them:\ntf.reduce_sum(tf.cast(myOtherTensor, tf.float32))

\n\n

Depending on your actual use case you can also compute sums per row/column if you specify the reduce dimensions of the call.
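A hedged sketch of the per-row/per-column variant, with NumPy standing in for the same cast-then-reduce idea (the mask values are made up):

```python
import numpy as np

mask = np.array([[True, False],
                 [True, True]])
total   = mask.astype(np.float32).sum()          # all elements -> 3.0
per_row = mask.astype(np.float32).sum(axis=1)    # -> [1., 2.]
per_col = mask.astype(np.float32).sum(axis=0)    # -> [2., 1.]
```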

\n", + "system": "" + }, + { + "instruction": "module 'tensorflow' has no attribute 'logging'", + "input": "", + "output": "

tf.logging was for Logging and Summary Operations. In TF 2.0 it was removed in favor of the open-source absl-py, and to keep the main tf.* namespace limited to functions that are used more often.

\n\n

In TF 2.0, lesser-used functions are gone or have been moved into sub-packages like tf.math.

\n\n

So instead of tf.logging you could:
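For instance, a hedged sketch (assuming TF 2.x still exposes the old API under tf.compat.v1): the stdlib logging module can control TensorFlow's logger directly:

```python
import logging

# Plain stdlib logging controls TensorFlow's logger directly.
logging.getLogger("tensorflow").setLevel(logging.ERROR)

# Or, inside TF 2.x (shown as comments, not executed here):
# import tensorflow as tf
# tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
```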

\n\n\n", + "system": "" + }, + { + "instruction": "Can't save custom subclassed model", + "input": "", + "output": "

TensorFlow 2.2

\n\n

Thanks to @cal for letting me know that the new TensorFlow supports saving custom models!

\n\n
\n

Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippets describe how to implement them.

\n
\n\n
class ThreeLayerMLP(keras.Model):\n\n  def __init__(self, name=None):\n    super(ThreeLayerMLP, self).__init__(name=name)\n    self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')\n    self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')\n    self.pred_layer = layers.Dense(10, name='predictions')\n\n  def call(self, inputs):\n    x = self.dense_1(inputs)\n    x = self.dense_2(x)\n    return self.pred_layer(x)\n\ndef get_model():\n  return ThreeLayerMLP(name='3_layer_mlp')\n\nmodel = get_model()\n# Save the model\nmodel.save('path_to_my_model',save_format='tf')\n\n# Recreate the exact same model purely from the file\nnew_model = keras.models.load_model('path_to_my_model')\n
\n\n

See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models

\n\n

TensorFlow 2.0

\n\n

TL;DR:

\n\n
    \n
  1. do not use model.save() for custom subclass keras model;
  2. \n
  3. use save_weights() and load_weights() instead.
  4. \n
\n\n
\n\n

With the help of the Tensorflow Team, it turns out that the best practice for saving a custom subclass Keras model is to save its weights and load them back when needed.

\n\n

The reason that we cannot simply save a Keras custom subclass model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved/loaded without any problem as long as we have the same model structure and custom code.

\n\n

There is a great tutorial written by Francois Chollet, the author of Keras, on how to save/load Sequential/Functional/Keras/Custom Sub-Class Models in Tensorflow 2.0 in Colab here. In the Saving Subclassed Models section, it says:

\n\n
\n

Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.

\n \n

A subclassed model differs in that it's not a datastructure, it's a\n piece of code. The architecture of the model is defined via the body\n of the call method. This means that the architecture of the model\n cannot be safely serialized. To load a model, you'll need to have\n access to the code that created it (the code of the model subclass).\n Alternatively, you could be serializing this code as bytecode (e.g.\n via pickling), but that's unsafe and generally not portable.

\n
\n", + "system": "" + }, + { + "instruction": "Does ImageDataGenerator add more images to my dataset?", + "input": "", + "output": "

Short answer: 1) All the original images are just transformed (i.e. rotation, zooming, etc.) every epoch and then used for training, and 2) [Therefore] the number of images in each epoch is equal to the number of original images you have.

\n

Long answer: In each epoch, the ImageDataGenerator applies a transformation to the images you have and uses the transformed images for training. The set of transformations includes rotation, zooming, etc. This way you're somehow creating new data (i.e. data augmentation), but obviously the generated images are not totally different from the original ones. This way the learned model may be more robust and accurate, as it is trained on different variations of the same image.

\n

You need to set the steps_per_epoch argument of fit method to n_samples / batch_size, where n_samples is the total number of training data you have (i.e. 1000 in your case). This way in each epoch, each training sample is augmented only one time and therefore 1000 transformed images will be generated in each epoch.
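The arithmetic is simple enough to spell out (the batch size is an assumed value for illustration):

```python
n_samples  = 1000   # total number of original training images
batch_size = 20     # assumed batch size for illustration
steps_per_epoch = n_samples // batch_size   # 50 steps -> 1000 augmented images per epoch

# Wired into Keras it would look roughly like (not executed here):
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
#                     steps_per_epoch=steps_per_epoch, epochs=5)
```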

\n

Further, I think it's worth clarifying the meaning of "augmentation" in this context: basically we are augmenting the images when we use ImageDataGenerator and enabling its augmentation capabilities. But the word "augmentation" here does not mean, say, if we have 100 original training images we end up having 1000 images per epoch after augmentation (i.e. the number of training images does not increase per epoch). Instead, it means we use a different transformation of each image in each epoch; hence, if we train our model for, say, 5 epochs, we have used 5 different versions of each original image in training (or 100 * 5 = 500 different images in the whole training, instead of using just the 100 original images in the whole training). To put it differently, the total number of unique images increases in the whole training from start to finish, and not per epoch.

\n", + "system": "" + }, + { + "instruction": "Make a custom loss function in keras", + "input": "", + "output": "

There are two steps in implementing a parameterized custom loss function in Keras. First, writing a method for the coefficient/metric. Second, writing a wrapper function to format things the way Keras needs them to be.

\n\n
    \n
  1. It's actually quite a bit cleaner to use the Keras backend instead of tensorflow directly for simple custom loss functions like DICE. Here's an example of the coefficient implemented that way:

    \n\n
import keras.backend as K\n\ndef dice_coef(y_true, y_pred, smooth, thresh):\n    y_pred = K.cast(y_pred > thresh, 'float32')  # threshold, then cast back to float so the product below works\n    y_true_f = K.flatten(y_true)\n    y_pred_f = K.flatten(y_pred)\n    intersection = K.sum(y_true_f * y_pred_f)\n\n    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)\n
  2. \n
  3. Now for the tricky part. Keras loss functions must only take (y_true, y_pred) as parameters. So we need a separate function that returns another function.

    \n\n
def dice_loss(smooth, thresh):\n  def dice(y_true, y_pred):\n    return -dice_coef(y_true, y_pred, smooth, thresh)\n  return dice\n
  4. \n
\n\n

Finally, you can use it as follows in Keras compile.

\n\n
# build model \nmodel = my_model()\n# get the loss function\nmodel_dice = dice_loss(smooth=1e-5, thresh=0.5)\n# compile model\nmodel.compile(loss=model_dice)\n
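As a sanity check on the coefficient itself, the same formula in plain Python (smooth set to 0, predictions already thresholded, values made up):

```python
y_true = [1.0, 1.0, 0.0, 0.0]
y_pred = [1.0, 0.0, 0.0, 0.0]
intersection = sum(t * p for t, p in zip(y_true, y_pred))
dice = 2.0 * intersection / (sum(y_true) + sum(y_pred))  # 2*1 / (2+1)
```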
\n", + "system": "" + }, + { + "instruction": "Multiple outputs in Keras", + "input": "", + "output": "
from keras.models import Model\nfrom keras.layers import *    \n\n#inp is a \"tensor\", that can be passed when calling other layers to produce an output \ninp = Input((10,)) #supposing you have ten numeric values as input \n\n\n#here, SomeLayer() is defining a layer, \n#and calling it with (inp) produces the output tensor x\nx = SomeLayer(blablabla)(inp) \nx = SomeOtherLayer(blablabla)(x) #here, I just replace x, because this intermediate output is not interesting to keep\n\n\n#here, I want to keep the two different outputs for defining the model\n#notice that both left and right are called with the same input x, creating a fork\nout1 = LeftSideLastLayer(balbalba)(x)    \nout2 = RightSideLastLayer(banblabala)(x)\n\n\n#here, you define which path you will follow in the graph you've drawn with layers\n#notice the two outputs passed in a list, telling the model I want it to have two outputs.\nmodel = Model(inp, [out1,out2])\nmodel.compile(optimizer = ...., loss = ....) #loss can be one for both sides or a list with different loss functions for out1 and out2    \n\nmodel.fit(inputData,[outputYLeft, outputYRight], epochs=..., batch_size=...)\n
\n", + "system": "" + }, + { + "instruction": "Keras - stateful vs stateless LSTMs", + "input": "", + "output": "

I recommend you to firstly learn the concepts of BPTT (Back Propagation Through Time) and mini-batch SGD(Stochastic Gradient Descent), then you'll have further understandings of LSTM's training procedure.

\n\n

For your questions,

\n\n

Q1. In stateless cases, LSTM updates parameters on batch1 and then initializes hidden states and cell states (usually all zeros) for batch2, while in stateful cases, it uses batch1's last output hidden states and cell states as the initial states for batch2.
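A toy sketch of that difference in plain Python (a tanh recurrence standing in for the LSTM cell; none of this is Keras API):

```python
from math import tanh

def rnn_step(state, x):
    # Toy recurrence standing in for an LSTM cell.
    return tanh(0.5 * state + 0.5 * x)

def run_batch(seq, init_state):
    s = init_state
    for x in seq:
        s = rnn_step(s, x)
    return s

batch1, batch2 = [1.0, 2.0], [3.0, 4.0]
carry = run_batch(batch1, 0.0)         # last state after batch1
stateless = run_batch(batch2, 0.0)     # stateless: state reset to zero
stateful  = run_batch(batch2, carry)   # stateful: batch1's state carried over
```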

\n\n

Q2. As you can see above, when two sequences in two batches have connections (e.g. prices of one stock), you'd better use stateful mode, else (e.g. one sequence represents a complete sentence) you should use stateless mode.

\n\n

BTW, @vu.pham said that if we use a stateful RNN, then in production the network is forced to deal with infinitely long sequences. This seems incorrect: actually, as you can see in Q1, the LSTM WON'T learn on the whole sequence; it first learns the sequence in batch1, updates parameters, and then learns the sequence in batch2.

\n", + "system": "" + }, + { + "instruction": "Tensorflow - matmul of input matrix with batch data", + "input": "", + "output": "

Previous answers are obsolete. Currently tf.matmul() supports tensors with rank > 2:

\n\n
\n

The inputs must be matrices (or tensors of rank > 2, representing\n batches of matrices), with matching inner dimensions, possibly after\n transposition.

\n
\n\n

Also tf.batch_matmul() was removed and tf.matmul() is the right way to do batch multiplication. The main idea can be understood from the following code:

\n\n
import tensorflow as tf\nbatch_size, n, m, k = 10, 3, 5, 2\nA = tf.Variable(tf.random_normal(shape=(batch_size, n, m)))\nB = tf.Variable(tf.random_normal(shape=(batch_size, m, k)))\ntf.matmul(A, B)\n
\n\n

Now you will receive a tensor of the shape (batch_size, n, k). Here is what is going on. Assume you have batch_size matrices nxm and batch_size matrices mxk. Now for each pair of them you calculate nxm X mxk, which gives you an nxk matrix. You will have batch_size of them.
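NumPy's matmul follows the same batching rule, so the claimed shapes can be cross-checked without TensorFlow:

```python
import numpy as np

batch_size, n, m, k = 10, 3, 5, 2
A = np.random.randn(batch_size, n, m)
B = np.random.randn(batch_size, m, k)
C = A @ B   # one (n, m) x (m, k) product per batch entry
```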

\n\n

Notice that something like this is also valid:

\n\n
A = tf.Variable(tf.random_normal(shape=(a, b, n, m)))\nB = tf.Variable(tf.random_normal(shape=(a, b, m, k)))\ntf.matmul(A, B)\n
\n\n

and will give you a shape (a, b, n, k)

\n", + "system": "" + }, + { + "instruction": "How do I swap tensor's axes in TensorFlow?", + "input": "", + "output": "

tf.transpose provides the same functionality as np.swapaxes, although in a more generalized form. In your case, you can do tf.transpose(orig_tensor, [1, 0, 2]) which would be equivalent to np.swapaxes(orig_np_array, 0, 1).
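The equivalence is easy to cross-check with NumPy, whose transpose accepts the same perm argument as tf.transpose:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
b = np.transpose(a, [1, 0, 2])   # analogue of tf.transpose(orig_tensor, [1, 0, 2])
c = np.swapaxes(a, 0, 1)         # analogue of np.swapaxes(orig_np_array, 0, 1)
```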

\n", + "system": "" + }, + { + "instruction": "How do I get the gradient of the loss at a TensorFlow variable?", + "input": "", + "output": "

The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. Consider the following simple example:

\n\n
data = tf.placeholder(tf.float32)\nvar = tf.Variable(...)              # Must be a tf.float32 or tf.float64 variable.\nloss = some_function_of(var, data)  # some_function_of() returns a `Tensor`.\n\nvar_grad = tf.gradients(loss, [var])[0]\n
\n\n

You can then use this symbolic gradient to evaluate the gradient in some specific point (data):

\n\n
sess = tf.Session()\n\nvar_grad_val = sess.run(var_grad, feed_dict={data: ...})\n
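If you want to convince yourself that a symbolic gradient is right, a finite-difference check is a useful habit. Here is a plain-Python sketch for loss(v) = v**2, whose gradient is 2v:

```python
def loss(v):
    return v * v

def numeric_grad(f, v, eps=1e-6):
    # Central finite difference: (f(v+eps) - f(v-eps)) / (2*eps)
    return (f(v + eps) - f(v - eps)) / (2 * eps)

grad_at_3 = numeric_grad(loss, 3.0)   # should be close to 2*3 = 6
```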
\n", + "system": "" + }, + { + "instruction": "In TensorFlow is there any way to just initialize uninitialised variables?", + "input": "", + "output": "

There is no elegant* way to enumerate the uninitialized variables in a graph. However, if you have access to the new variable objects—let's call them v_6, v_7, and v_8—you can selectively initialize them using tf.initialize_variables():

\n\n
init_new_vars_op = tf.initialize_variables([v_6, v_7, v_8])\nsess.run(init_new_vars_op)\n
\n\n
\n\n

* A process of trial and error could be used to identify the uninitialized variables, as follows:

\n\n
uninitialized_vars = []\nfor var in tf.all_variables():\n    try:\n        sess.run(var)\n    except tf.errors.FailedPreconditionError:\n        uninitialized_vars.append(var)\n\ninit_new_vars_op = tf.initialize_variables(uninitialized_vars)\n# ...\n
\n\n

...however, I would not condone such behavior :-).

\n", + "system": "" + }, + { + "instruction": ""zsh: illegal hardware instruction python" when installing Tensorflow on macbook pro M1", + "input": "", + "output": "

This worked for me after trying a bunch of solutions to no avail.

\n

Step 1 Using pyenv, install python version 3.8.5 and set it as your default python version. This tutorial (https://realpython.com/intro-to-pyenv/) is helpful for getting pyenv configured properly.

\n

Step 1.1 Use this post(https://github.com/pyenv/pyenv/issues/1446) if you have troubles running pyenv in zsh.

\n

Step 1.2 Once you have python version 3.8.5 running, which you can check by running python -V, the output should be:

\n
Python 3.8.5\n
\n

Step 2 Install virtualenv via pip install virtualenv

\n

Step 2.1 Create a virtual environment by running virtualenv ENV

\n

Step 2.2 Activate that virtual environment by running source ENV/bin/activate

\n

Step 3 Install the tensorflow wheel called tensorflow-2.4.1-py3-none-any.whl located at this public google drive link https://drive.google.com/drive/folders/1oSipZLnoeQB0Awz8U68KYeCPsULy_dQ7

\n

Step 3.1 Assuming you simply installed the wheel to downloads run\npip install ~/Downloads/tensorflow-2.4.1-py3-none-any.whl in your activated virtual environment

\n

Step 4 Type python which will bring up >>>in your terminal and type

\n
>>> import tensorflow\n>>>\n
\n

If there is no 'zsh illegal hardware instruction" error you should be good to go.

\n

Note: If you are using anaconda, the above will also work. You can skip the virtual env steps (assuming you have a virtual env activated through Conda) and just go straight to the pip install as mentioned above (steps 3 and later).

\n", + "system": "" + }, + { + "instruction": "Xcode version must be specified to use an Apple CROSSTOOL", + "input": "", + "output": "
bazel clean --expunge \nsudo xcode-select -s /Applications/Xcode.app/Contents/Developer\nsudo xcodebuild -license\nbazel clean --expunge \nbazel build --config=opt //tensorflow/tools/pip_package:build_pip_package\n
\n", + "system": "" + }, + { + "instruction": "could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR", + "input": "", + "output": "

In TensorFlow 2.0, my issue was resolved by setting the memory growth. ConfigProto is deprecated in TF 2.0, so I used tf.config.experimental. My computer specs are:

\n\n\n\n

The code I used was:

\n\n
physical_devices = tf.config.experimental.list_physical_devices('GPU')\nassert len(physical_devices) > 0, \"Not enough GPU hardware devices available\"\ntf.config.experimental.set_memory_growth(physical_devices[0], True)  # returns None; no need to keep the result\n
\n", + "system": "" + }, + { + "instruction": "Keras + Tensorflow and Multiprocessing in Python", + "input": "", + "output": "

From my experience, the problem lies in loading Keras into one process and then spawning a new process after Keras has been loaded into your main environment. But for some applications (like e.g. training a mixture of Keras models) it's simply better to have all of these things in one process. So what I advise is the following (a little bit cumbersome, but working for me) approach:

\n\n
    \n
  1. DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT. If you want to load Keras / Theano / TensorFlow do it only in the function environment. E.g. don't do this:

    \n\n
    import keras\n\ndef training_function(...):\n    ...\n
    \n\n

    but do the following:

    \n\n
    def training_function(...):\n    import keras\n    ...\n
  2. Run work connected with each model in a separate process: I usually create workers which do the job (e.g. training, tuning, scoring) and run them in separate processes. What is nice about it is that the whole memory used by this process is completely freed when the process is done. This helps with the many memory problems you usually come across when using multiprocessing, or even when running multiple models in one process. So this looks e.g. like this:

    \n\n
    def _training_worker(train_params):\n    import keras\n    model = obtain_model(train_params)\n    model.fit(train_params)\n    send_message_to_main_process(...)\n\ndef train_new_model(train_params):\n    training_process = multiprocessing.Process(target=_training_worker, args=(train_params,))\n    training_process.start()\n    get_message_from_training_process(...)\n    training_process.join()\n
  4. \n
\n\n

A different approach is simply to prepare different scripts for different model actions. But this may cause memory errors, especially when your models are memory-consuming. NOTE that for this reason it's better to make your execution strictly sequential.
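The worker pattern above can be sketched end-to-end with the standard library alone. Here the heavyweight Keras import is reduced to a comment and the unspecified messaging helpers (send_message_to_main_process etc.) are replaced by a multiprocessing.Queue, so all names are illustrative:

```python
import multiprocessing

def _training_worker(params, queue):
    # import keras  # the framework import stays inside the worker process
    # ... build and fit the model here ...
    queue.put({"params": params, "status": "trained"})  # report back

def train_new_model(params):
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_training_worker, args=(params, queue))
    p.start()
    result = queue.get()  # blocks until the worker reports back
    p.join()              # on exit, all memory the worker used is freed
    return result

if __name__ == "__main__":
    print(train_new_model({"lr": 0.01})["status"])
```

The queue read happens before join() so a large result can't deadlock the worker on a full pipe.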

\n", + "system": "" + }, + { + "instruction": "Using sparse matrices with Keras and Tensorflow", + "input": "", + "output": "

Sorry, I don't have the reputation to comment, but I think you should take a look at the answer here: Keras, sparse matrix issue. I have tried it and it works correctly. Just one note though: at least in my case, the shuffling led to really bad results, so I used this slightly modified non-shuffled alternative:

\n\n
import numpy as np\n\ndef nn_batch_generator(X_data, y_data, batch_size):\n    samples_per_epoch = X_data.shape[0]\n    number_of_batches = int(np.ceil(samples_per_epoch / batch_size))\n    counter = 0\n    index = np.arange(np.shape(y_data)[0])\n    while 1:\n        index_batch = index[batch_size*counter:batch_size*(counter+1)]\n        X_batch = X_data[index_batch,:].todense()\n        y_batch = y_data[index_batch]\n        counter += 1\n        yield np.array(X_batch), y_batch\n        if counter >= number_of_batches:\n            counter = 0\n
\n\n

It produces comparable accuracies to the ones achieved by Keras's shuffled implementation (setting shuffle=True in fit).
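To see what the generator's slicing does to the last partial batch, here is a toy run of the same arithmetic on a small dense array (no scipy needed; the numbers are illustrative):

```python
import numpy as np

# 10 samples, batch size 3: same slicing as the generator above.
X = np.arange(20).reshape(10, 2)
batch_size = 3
n_batches = int(np.ceil(X.shape[0] / batch_size))
sizes = [X[i*batch_size:(i+1)*batch_size].shape[0] for i in range(n_batches)]
print(sizes)  # [3, 3, 3, 1]
```

The trailing partial batch of 1 sample comes out naturally because numpy slicing past the end just truncates.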

\n", + "system": "" + }, + { + "instruction": "Is there an example on how to generate protobuf files holding trained TensorFlow graphs", + "input": "", + "output": "

EDIT: The freeze_graph.py script, which is part of the TensorFlow repository, now serves as a tool that generates a protocol buffer representing a \"frozen\" trained model, from an existing TensorFlow GraphDef and a saved checkpoint. It uses the same steps as described below, but is much easier to use.

\n\n
\n\n

Currently the process isn't very well documented (and subject to refinement), but the approximate steps are as follows:

\n\n
    \n
  1. Build and train your model as a tf.Graph called g_1.
  2. Fetch the final values of each of the variables and store them as numpy arrays (using Session.run()).
  3. In a new tf.Graph called g_2, create tf.constant() tensors for each of the variables, using the value of the corresponding numpy array fetched in step 2.
  4. Use tf.import_graph_def() to copy nodes from g_1 into g_2, and use the input_map argument to replace each variable in g_1 with the corresponding tf.constant() tensors created in step 3. You may also want to use input_map to specify a new input tensor (e.g. replacing an input pipeline with a tf.placeholder()). Use the return_elements argument to specify the name of the predicted output tensor.
  5. Call g_2.as_graph_def() to get a protocol buffer representation of the graph.
\n\n

(NOTE: The generated graph will have extra nodes for training. Although it is not part of the public API, you may wish to use the internal graph_util.extract_sub_graph() function to strip these nodes from the graph.)

\n", + "system": "" + }, + { + "instruction": "what is XLA_GPU and XLA_CPU for tensorflow", + "input": "", + "output": "

As mentioned in the docs, XLA stands for \"accelerated linear algebra\". It's TensorFlow's relatively new optimizing compiler that can further speed up your ML models' GPU operations by combining what used to be multiple CUDA kernels into one (simplifying here, because this isn't that important for your question).

\n\n

To your question, my understanding is that XLA is separate enough from the default Tensorflow compiler that they separately register GPU devices and have slightly different constraints on which GPUs they treat as visible (see here for more on this). Looking at the output of the command you ran, it looks like XLA is registering 1 GPU and normal TF is registering 3.

\n\n

I'm not sure if you're having issues or are just curious, but if it's the former, I recommend taking a look at the issue I linked above and this one. TensorFlow is finicky about which CUDA/cuDNN versions it works with flawlessly, and it's possible you're using incompatible versions. (If you're not having issues, then hopefully the first part of my answer is sufficient.)

\n", + "system": "" + }, + { + "instruction": "ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss", + "input": "", + "output": "

The error here is from tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits).

\n\n

The TensorFlow documentation clearly states that \"labels vector must provide a single specific index for the true class for each row of logits\". So your labels vector must include only class-indices like 0,1,2 and not their respective one-hot-encodings like [1,0,0], [0,1,0], [0,0,1].

\n\n

Reproducing the error to explain further:

\n\n
import numpy as np\nimport tensorflow as tf\n\n# Create random-array and assign as logits tensor\nnp.random.seed(12345)\nlogits = tf.convert_to_tensor(np.random.sample((4,4)))\nprint(logits.get_shape()) # (4, 4)\n\n# Create random-labels (Assuming only 4 classes)\nlabels = tf.convert_to_tensor(np.array([2, 2, 0, 1]))\n\nloss_1 = tf.losses.sparse_softmax_cross_entropy(labels, logits)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\nprint('Loss: {}'.format(sess.run(loss_1))) # 1.44836854\n\n# Now giving one-hot-encodings in place of class-indices for labels\nwrong_labels = tf.convert_to_tensor(np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0], [0,1,0,0]]))\nloss_2 = tf.losses.sparse_softmax_cross_entropy(wrong_labels, logits)\n\n# This should give you a similar error as soon as you define it\n
\n\n

So try giving class-indices instead of one-hot encodings in your Y_Labels vector.\nHope this clears your doubt.
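If your labels are already one-hot encoded, a one-liner with numpy's argmax recovers the class indices (a sketch mirroring the wrong_labels array above):

```python
import numpy as np

# Collapse one-hot rows back to the class indices that
# sparse_softmax_cross_entropy expects.
one_hot = np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0], [0,1,0,0]])
class_indices = one_hot.argmax(axis=1)
print(class_indices)  # [2 2 0 1]
```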

\n", + "system": "" + }, + { + "instruction": "How to understand loss acc val_loss val_acc in Keras model fitting", + "input": "", + "output": "

Answering your questions:

\n
    \n
  1. As described in the official Keras FAQ:
\n
\n

the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.

\n
\n
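The FAQ's point can be illustrated with synthetic numbers: averaging per-batch losses from a model that improved during the epoch gives a higher figure than evaluating the finished model (all values here are made up):

```python
import numpy as np

# Per-batch training losses shrinking over one epoch (synthetic).
per_batch_loss = np.linspace(1.0, 0.2, 9)
reported_train_loss = per_batch_loss.mean()  # what `loss` averages over
end_of_epoch_loss = per_batch_loss[-1]       # roughly what `val_loss` sees
print(reported_train_loss > end_of_epoch_loss)  # True
```

So a training loss above the validation loss early on is expected, not necessarily a bug.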
    \n
  1. Training should be stopped when val_acc stops increasing, otherwise your model will probably overfit. You can use the EarlyStopping callback to stop training.

    \n
  2. Your model seems to achieve very good results. Keep up the good work.

    \n
\n", + "system": "" + }, + { + "instruction": "Tensorflow serving No versions of servable <MODEL> found under base path", + "input": "", + "output": "

I had the same problem. The reason is that the Object Detection API does not assign a version to your model when you export your detection model. However, TensorFlow Serving requires you to assign a version number to your detection model, so that you can choose different versions of your models to serve. In your case, you should put your detection model (.pb file and variables folder) under the folder:\n/serving/ssd_frozen/1/. This way, you will assign your model to version 1, and TensorFlow Serving will automatically load this version, since you only have one version. By default TensorFlow Serving will automatically serve the latest version (i.e., the version with the largest number).

\n\n

Note: after you created the 1/ folder, the model_base_path still needs to be set to --model_base_path=/serving/ssd_frozen/.
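The versioned layout and the "largest version wins" rule can be sketched with plain Python (directory names are illustrative and created under a temp dir, not /serving):

```python
import os
import tempfile

# Build the layout TensorFlow Serving expects: <base>/<version>/saved_model.pb
base = os.path.join(tempfile.mkdtemp(), "serving", "ssd_frozen")
for version in ("1", "2"):
    os.makedirs(os.path.join(base, version, "variables"))
    open(os.path.join(base, version, "saved_model.pb"), "w").close()

# By default, Serving picks the numerically largest version directory.
latest = max(int(d) for d in os.listdir(base))
print(latest)  # 2
```

model_base_path then points at `base`, never at an individual version directory.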

\n", + "system": "" + }, + { + "instruction": "How to interpret increase in both loss and accuracy", + "input": "", + "output": "

The loss decreases as the training process goes on, except for some fluctuation introduced by mini-batch gradient descent and/or regularization techniques like dropout (which introduce random noise).

\n\n

If the loss decreases, the training process is going well.

\n\n

The (validation, I suppose) accuracy, instead, is a measure of how good your model's predictions are.

\n\n

If the model is learning, the accuracy increases. If the model is overfitting, instead, the accuracy stops increasing and can even start to decrease.

\n\n

If the loss decreases and the accuracy decreases, your model is overfitting.

\n\n

If the loss increases and the accuracy increases too, it's because your regularization techniques are working well and you're fighting the overfitting problem. This is true only if the loss then starts to decrease whilst the accuracy continues to increase.\nOtherwise, if the loss keeps growing, your model is diverging and you should look for the cause (usually you're using too high a learning rate).
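The cases above can be condensed into a small decision helper (my own sketch, not a Keras API):

```python
# Map (loss trend, accuracy trend) to the diagnosis described above.
def interpret(loss_increasing, acc_increasing):
    if not loss_increasing and acc_increasing:
        return "learning"
    if not loss_increasing and not acc_increasing:
        return "overfitting"
    if loss_increasing and acc_increasing:
        return "regularization at work (loss should turn down eventually)"
    return "diverging (check the learning rate)"

print(interpret(loss_increasing=False, acc_increasing=True))  # learning
```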

\n", + "system": "" + }, + { + "instruction": "TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'", + "input": "", + "output": "

image has a shape of (64,64,3).

\n

Your input placeholder _x have a shape of (?,64,64,3).

\n

The problem is that you're feeding the placeholder with a value of a different shape.

\n

You have to feed it a value with shape (1,64,64,3) = a batch of 1 image.

\n

Just reshape your image value to a batch with size one.

\n
image = array(img).reshape(1,64,64,3)\n
\n
\n

P.S: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel.\nYou can try to read more than 1 image (N images) and then build a batch of N images, using a tensor with shape (N,64,64,3).
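Both the single-image and N-image cases can be sketched with numpy alone (shapes only; no TensorFlow session needed):

```python
import numpy as np

# One image -> batch of one; N images -> batch of N.
img = np.zeros((64, 64, 3))
batch_of_one = img.reshape(1, 64, 64, 3)  # equivalently img[np.newaxis, ...]
batch_of_n = np.stack([img, img, img])    # shape (3, 64, 64, 3)
print(batch_of_one.shape, batch_of_n.shape)
```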

\n", + "system": "" + } +] \ No newline at end of file