diff --git "a/data.json" "b/data.json" --- "a/data.json" +++ "b/data.json" @@ -3304,5 +3304,4961 @@ "input": "", "output": "

len(dataloader) returns the total number of batches. It depends on the __len__ function of your dataset, so make sure it is set correctly.
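For a dataloader with a fixed batch size, the batch count follows directly from the dataset length. A framework-agnostic sketch (the function name is illustrative, not from any specific library):

```python
import math

def num_batches(dataset_len, batch_size, drop_last=False):
    """Number of batches a dataloader yields in one pass over the dataset."""
    if drop_last:
        return dataset_len // batch_size           # incomplete final batch is dropped
    return math.ceil(dataset_len / batch_size)     # final partial batch still counts

print(num_batches(1000, 32))                  # 32
print(num_batches(1000, 32, drop_last=True))  # 31
```

This mirrors what `len(dataloader)` computes when `__len__` of the dataset reports the true sample count.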

\n", "system": "" + }, + { + "instruction": "What does TensorFlow's `conv2d_transpose()` operation do?", + "input": "", + "output": "

The best explanation I've seen online of how convolution transpose works is here.

\n\n

I'll give my own short description: it applies convolution with a fractional stride. In other words, it spaces out the input values (with zeroes) to apply the filter over a region that's potentially smaller than the filter size.

\n\n

As for why one would want to use it: it can be used as a sort of upsampling with learned weights, as opposed to bilinear interpolation or some other fixed form of upsampling.
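The fractional-stride view can be demonstrated with plain NumPy: insert zeros between the input values, then run an ordinary convolution over the dilated input. This is a 1-D sketch of the idea, not TensorFlow's actual implementation:

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    # Step 1: space the input out with zeros ("fractional stride").
    dilated = np.zeros(len(x) * stride - (stride - 1))
    dilated[::stride] = x
    # Step 2: pad and run a plain sliding-window correlation with the kernel.
    k = len(kernel)
    padded = np.pad(dilated, (k - 1, k - 1))
    return np.array([np.dot(padded[i:i + k], kernel)
                     for i in range(len(padded) - k + 1)])

x = np.array([1.0, 2.0, 3.0])
print(conv_transpose_1d(x, np.array([1.0, 1.0])))  # [1. 1. 2. 2. 3. 3.]
```

With an all-ones kernel this reproduces nearest-neighbour upsampling; a learned kernel generalises that, which is exactly the "upsampling with learned weights" intuition.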

\n", + "system": "" + }, + { + "instruction": "How can I visualize the weights(variables) in cnn in Tensorflow?", + "input": "", + "output": "

To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a tf.train.SummaryWriter, and visualize the log using TensorBoard.

\n\n

Let's say you have the following (simplified) program:

\n\n
filter = tf.Variable(tf.truncated_normal([8, 8, 3, 32]))  # [height, width, in_channels, out_channels]\nimages = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])\n\nconv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding=\"SAME\")\n\n# More ops...\nloss = ...\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain_op = optimizer.minimize(loss)\n\n# image_summary() takes a tag and a [batch, height, width, channels] tensor,\n# so treat each of the 32 filters as one image in a batch.\nfilter_summary = tf.image_summary(\"filters\", tf.transpose(filter, [3, 0, 1, 2]))\n\nsess = tf.Session()\nsess.run(tf.initialize_all_variables())\nsummary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)\nfor i in range(10000):\n  sess.run(train_op)\n  if i % 10 == 0:\n    # Evaluate and log the summary every 10 steps.\n    summary_writer.add_summary(sess.run(filter_summary), i)\n
\n\n

After doing this, you can start TensorBoard to visualize the logs in /tmp/logs, and you will be able to see a visualization of the filter.

\n\n

Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the tf.split() op to split the filter on the depth dimension, and generate one image summary per depth.

\n", + "system": "" + }, + { + "instruction": "Error while importing Tensorflow in Python 2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found'", + "input": "", + "output": "

I've just managed to install tensorflow 0.12rc0 on CentOS 6.5 with glibc 2.12, without having root privileges. Simply installing the tensorflow binary via pip gave me an error related to the GLIBC version as well.
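Before picking an option below, it helps to confirm which glibc you actually have. One way that needs no root access (note: `platform.libc_ver()` reports the libc Python was linked against, which on Linux is normally the system one):

```python
import platform

lib, version = platform.libc_ver()
# On the CentOS 6.5 box described above this would print something like "glibc 2.12".
print(lib, version)
```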

\n\n

Basically, you have 4 options for dealing with this (each with some advantages and disadvantages):

\n\n

Option 1 - Upgrade your system GLIBC globally.

\n\n

This is probably the best option, if your system supports it, you have root privileges, and you are confident that this upgrade won't break anything for some weird reason. Ultimately, this may amount to upgrading the whole Linux distribution. Here's a nice short list of default GLIBC versions on popular distributions.

\n\n

Option 2 - Add second GLIBC to your system

\n\n

Compile or download a binary. The simplest and most straightforward option, especially if you only need to run a few simple scripts.

\n\n\n\n

Option 3 - Patch tensorflow

\n\n

This may work for TF 0.6.0, but you would probably have to start again from scratch when each new tensorflow version is released. E.g. here's a fix for 0.9.0.

\n\n

Option 4 - Compile tensorflow from source

\n\n

If you re-compile it from source and link against your existing GLIBC, the newer GLIBC is no longer needed. Somehow, this option was not mentioned in any answer here yet. Imho, this is the best option, both \"in general\" and \"specifically for tensorflow\".

\n\n\n\n

A quick summary of \"building tensorflow on outdated system\":

\n\n

Although the official guide provides an \"installing from sources\" section, there are a few tricks you need to build it on an outdated system. Here I assume that you do not have root privileges (if you do, you would probably be able to install the same prerequisites with a package manager, rather than manually building them from source).

\n\n

I found two well-documented success stories: #1, #2 and a number of useful posts on the official github (mostly about a set of libraries to link inside the binary): #1, #2, #3, #4. I had to combine the tricks described there to successfully compile TF in my case.

\n\n
    \n
  1. First of all, check your gcc --version, and verify that it supports c++11. Mine was 4.4.7, so it wouldn't work. I downloaded the gcc-4.9.4 source code and compiled it. This step is pretty straightforward, but the compilation itself may take a few hours. As a workaround for an issue in bazel, I compiled gcc with hardcoded paths to as, ld and nm. However, you may try other workarounds: (1, 2).

    \n\n
    #!/bin/sh\n\nunset LIBRARY_PATH CPATH C_INCLUDE_PATH \nunset PKG_CONFIG_PATH CPLUS_INCLUDE_PATH INCLUDE LD_LIBRARY_PATH\n\ncd gcc-4.9.4\n./contrib/download_prerequisites\n\nmkdir objdir\ncd objdir\n\n\n# I've added --disable-multilib to fix the following error:\n# /usr/bin/ld: crt1.o: No such file: No such file or directory\n# collect2: ld returned 1 exit status\n# configure: error: I suspect your system does not have 32-bit \n# developement libraries (libc and headers). If you have them,\n# rerun configure with --enable-multilib. If you do not have them, \n# and want to build a 64-bit-only compiler, rerun configure \n# with --disable-multilib.           \n\n../configure --prefix=$HOME/opt/gcc-4.9.4 \\\n             --disable-multilib \\\n             --disable-nls \\\n             --enable-languages=c,c++ \\\n             --with-ld=/usr/bin/ld \\\n             --with-nm=/usr/bin/nm \\\n             --with-as=/usr/bin/as\n\nmake        \nmake install\n
  2. \n
  3. Check your java --version. Bazel requires JDK 8; install it if necessary. (They still provide some jdk7-related downloads for bazel-0.4.1, but it looks like they consider it deprecated.)

  4. \n
  5. I've created a separate use_gcc_4.9.4.sh file with the necessary environment variables. I use source ./use_gcc_4.9.4.sh when I need to do something related to this newer compiler.

    \n\n
    #!/bin/sh\nthis=$HOME/opt/gcc-4.9.4\nexport PATH=$this/bin:$PATH\nexport CPATH=$this/include:$CPATH\nexport LIBRARY_PATH=$this/lib:$LIBRARY_PATH\nexport LIBRARY_PATH=$this/lib64:$LIBRARY_PATH\nexport LD_LIBRARY_PATH=$this/lib:$LD_LIBRARY_PATH\nexport LD_LIBRARY_PATH=$this/lib64:$LD_LIBRARY_PATH\n
  6. \n
  7. The current bazel binary (0.4.1) requires GLIBC 2.14, so we have to compile bazel from source as well (with our new gcc). This works OK, unless you are only allowed to run a very limited number of threads on the target machine. (This post describes some additional workarounds, but in my case they were not needed, maybe due to recent updates in the bazel code.)

  8. \n
  9. Obtain the tensorflow source code with git clone https://github.com/tensorflow/tensorflow, and install the prerequisites you need (CUDA, cuDNN, python, etc). See the official guide.

  10. \n
  11. If you're not using the default system gcc (e.g. if you had to compile a newer gcc, as discussed above), add the following linker flags to tensorflow/third_party/gpus/crosstool/CROSSTOOL.tpl, line 59:

    \n\n
    linker_flag: \"-L/home/username/localinst/opt/gcc-4.9.4/lib64\"\nlinker_flag: \"-Wl,-rpath,/home/username/localinst/opt/gcc-4.9.4/lib64\"\n
    \n\n

    Without this step, you would likely run into error messages like this:

    \n\n
    # ERROR: /home/username/localdistr/src/tensorflow/tensorflow/tensorflow/core/debug/BUILD:33:1: null failed: protoc failed: error executing command bazel-out/host/bin/external/protobuf/protoc '--cpp_out=bazel-out/local_linux-py3-opt/genfiles/' '--plugin=protoc-gen-grpc=bazel-out/host/bin/external/grpc/grpc_cpp_plugin' ... (remaining 8 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n# bazel-out/host/bin/external/protobuf/protoc: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by bazel-out/host/bin/external/protobuf/protoc)\n
  12. \n
  13. Finally, to avoid GLIBC dependencies, we have to statically link some libraries by adding the -lrt linker flag (maybe -lm as well). I found multiple posts suggesting different ways to add this:

    \n\n\n\n

    Without -lrt, I ran into a GLIBC-version-specific error again when trying to import tensorflow:

    \n\n
    # ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/username/anaconda3/envs/myenvname/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so)\n
    \n\n

    Without -lm you may run into this (for me, it turned out not to be necessary).

  14. \n
  15. Run the build process.

  16. \n
\n\n
    source ./use_gcc_4.9.4.sh\n    ./configure\n    bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package\n    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n    pip install --upgrade /tmp/tensorflow_pkg/tensorflow-0.12.0rc0-cp35-cp35m-linux_x86_64.whl\n
\n\n
    \n
  1. Try to run the following simple python script to test if the most basic stuff is functioning:

    \n\n
    import tensorflow as tf\nhello = tf.constant('Hello, TensorFlow!')\nsess = tf.Session()\nprint(sess.run(hello))\n\na = tf.constant(10)\nb = tf.constant(32)\nprint(sess.run(a + b))\n
  2. \n
\n", + "system": "" + }, + { + "instruction": "Error importing tensorflow "AlreadyExistsError: Another metric with the same name already exists."", + "input": "", + "output": "

TL;DR: Ensure the Keras version matches the Tensorflow version

\n

I am experiencing the same thing with:

\n\n

The core issue appears to be that there are two Keras packages installed:

\n
<site-packages>/keras\n<site-packages/tensorflow/python/keras\n
\n

If you look at the release notes for 2.6:\nhttps://github.com/tensorflow/tensorflow/releases/tag/v2.6.0

\n
\n

Keras has been split into a separate PIP package (keras), and its code has\nbeen moved to the GitHub repository keras-team/keras. The API endpoints\nfor tf.keras stay unchanged, but are now backed by the keras PIP\npackage. The existing code in tensorflow/python/keras is a stale copy\nand will be removed in a future release (2.7). Please remove any imports\nof tensorflow.python.keras and replace them with the public tf.keras API\ninstead.

\n
\n

For some reason, it is still importing from both packages, which triggers the (valid) exception: only one Keras instance should be imported.

\n

Digging a bit further, it looks like Keras-2.7 was being installed; reverting to Keras-2.6 resolved the issue:

\n
pip install keras==2.6.*\n
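To see which versions of the two packages pip actually resolved, a small standard-library helper works (the package names here are the real PyPI distribution names):

```python
from importlib import metadata

def dist_version(name):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

print("keras:", dist_version("keras"))
print("tensorflow:", dist_version("tensorflow"))
```

If the two versions disagree on their major/minor numbers (e.g. keras 2.7 with tensorflow 2.6), that matches the failure mode described above.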
\n

For some reason:\nhttps://github.com/tensorflow/tensorflow/blob/v2.6.1/tensorflow/tools/pip_package/setup.py#L106

\n

is not working; maybe a bug in PIP?

\n", + "system": "" + }, + { + "instruction": "Difference between installation libraries of Tensorflow GPU vs CPU", + "input": "", + "output": "

Updated answer 2023 (Tensorflow 2.x and above:)

\n

Verify the CPU setup:

\n
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([100, 100])))"\n
\n

If a tensor is returned, you've installed TensorFlow successfully.

\n

Verify the GPU setup:

\n
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"\n
\n

If a list of GPU devices is returned, you've installed TensorFlow successfully.

\n

Source: Tensorflow installation guide

\n

Old answer (Tensorflow 1.x):

\n

One thing to note: CUDA can be installed even if you don't have a GPU in your system.

\n

For the packages tensorflow and tensorflow-gpu, I hope this clears up the confusion. yes/no answers the question "Will the package work out of the box when executing import tensorflow as tf?". Here are the differences:

\n
| Support for TensorFlow libraries | tensorflow | tensorflow-gpu  |\n| for hardware type:               |    tf      |     tf-gpu      |\n|----------------------------------|------------|-----------------|\n| cpu-only                         |    yes     |   no (~tf-like) |\n| gpu with cuda+cudnn installed    |    yes     |   yes           |\n| gpu without cuda+cudnn installed |    yes     |   no (~tf-like) |\n
\n

Edit: Confirmed the "no" answers on a cpu-only system and on a gpu without cuda+cudnn installed (by removing the CUDA+CuDNN env variables).

\n

~tf-like means that even though the library is tensorflow-gpu, it behaves like the tensorflow library.

\n", + "system": "" + }, + { + "instruction": "Tensorflow : logits and labels must have the same first dimension", + "input": "", + "output": "

The problem is in your target shape and is related to the correct choice of an appropriate loss function. You have 2 possibilities:

\n

1st possibility: if you have a 1D integer-encoded target, you can use sparse_categorical_crossentropy as the loss function

\n
import numpy as np\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\n\nn_class = 3\nn_features = 100\nn_sample = 1000\n\nX = np.random.randint(0,10, (n_sample,n_features))\ny = np.random.randint(0,n_class, n_sample)  # 1D integer labels\n\ninp = Input((n_features,))\nx = Dense(128, activation='relu')(inp)\nout = Dense(n_class, activation='softmax')(x)\n\nmodel = Model(inp, out)\nmodel.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nhistory = model.fit(X, y, epochs=3)\n
\n

2nd possibility: if you have one-hot encoded your target to have the 2D shape (n_samples, n_class), you can use categorical_crossentropy

\n
import numpy as np\nimport pandas as pd\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\n\nn_class = 3\nn_features = 100\nn_sample = 1000\n\nX = np.random.randint(0,10, (n_sample,n_features))\ny = pd.get_dummies(np.random.randint(0,n_class, n_sample)).values  # one-hot labels, shape (n_sample, n_class)\n\ninp = Input((n_features,))\nx = Dense(128, activation='relu')(inp)\nout = Dense(n_class, activation='softmax')(x)\n\nmodel = Model(inp, out)\nmodel.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])\nhistory = model.fit(X, y, epochs=3)\n
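The two losses compute the same quantity; they only differ in the label format they expect. A NumPy sketch makes this concrete (hand-rolled formulas, not Keras internals):

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])   # softmax outputs: 2 samples, 3 classes
y_int = np.array([0, 1])              # 1D integer labels -> sparse_categorical_crossentropy
y_onehot = np.eye(3)[y_int]           # 2D one-hot labels -> categorical_crossentropy

# Sparse form indexes the probability of the true class directly.
sparse_ce = -np.log(probs[np.arange(len(y_int)), y_int]).mean()
# Categorical form multiplies by the one-hot mask; same numbers survive.
categorical_ce = -(y_onehot * np.log(probs)).sum(axis=1).mean()

print(np.isclose(sparse_ce, categorical_ce))  # True
```

So the fix is purely about matching the loss to the first dimension/shape of your labels, not about the model itself.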
\n", + "system": "" + }, + { + "instruction": "Use shared GPU memory with TensorFlow?", + "input": "", + "output": "

Shared memory is an area of the main system RAM reserved for graphics. References:

\n\n

https://en.wikipedia.org/wiki/Shared_graphics_memory

\n\n

https://www.makeuseof.com/tag/can-shared-graphics-finally-compete-with-a-dedicated-graphics-card/

\n\n

https://youtube.com/watch?v=E5WyJY1zwcQ

\n\n

This type of memory is what integrated graphics, e.g. the Intel HD series, typically use.

\n\n

This is not on your NVIDIA GPU, and CUDA can't use it. Tensorflow can't use it when running on GPU because CUDA can't use it, and also when running on CPU because it's reserved for graphics.

\n\n

Even if CUDA could use it somehow, it wouldn't be useful, because system RAM bandwidth is around 10x less than GPU memory bandwidth, and you have to somehow get the data to and from the GPU over the slow (and high-latency) PCIe bus.

\n\n

Bandwidth numbers for reference:\nGeForce GTX 980: 224 GB/s\nDDR4 on a desktop motherboard: approx 25 GB/s\nPCIe 16x: 16 GB/s

\n\n

This doesn't take into account latency. In practice, running a GPU compute task on data which is too big to fit in GPU memory and has to be transferred over PCIe every time it is accessed is so slow for most types of compute that doing the same calculation on CPU would be much faster.
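The bandwidth gap alone makes the point; even ignoring latency entirely, streaming a working set over PCIe is an order of magnitude slower than reading it from GPU memory (rough arithmetic using the figures above):

```python
GPU_MEM_GBPS = 224.0   # GeForce GTX 980 memory bandwidth
PCIE_GBPS = 16.0       # PCIe 3.0 x16

gigabytes = 4.0  # a working set that does not fit in GPU memory
print(f"read from GPU memory: {gigabytes / GPU_MEM_GBPS * 1e3:.1f} ms")  # 17.9 ms
print(f"stream over PCIe:     {gigabytes / PCIE_GBPS * 1e3:.1f} ms")     # 250.0 ms
print(f"slowdown: {GPU_MEM_GBPS / PCIE_GBPS:.0f}x")                      # 14x, before latency
```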

\n\n

Why do you see that kind of memory being allocated when you have an NVIDIA card in your machine? Good question. I can think of a couple of possibilities:

\n\n

(a) You have both NVIDIA and Intel graphics drivers active (e.g. as happens when running different displays on both). Uninstall the Intel drivers and/or disable Intel HD graphics in the BIOS, and the shared memory will disappear.

\n\n

(b) NVIDIA is using it. This may be, e.g., extra texture memory. It could also not be real memory, but just a memory-mapped area that corresponds to GPU memory. Look in the advanced settings of the NVIDIA driver for a setting that controls this.

\n\n

In any case, no, there isn't anything that Tensorflow can use.

\n", + "system": "" + }, + { + "instruction": "Where should pre-processing and post-processing steps be executed when a TF model is served using TensorFlow serving?", + "input": "", + "output": "

I'm running into the same issue here. Even if I'm not 100% sure yet how to use the wordDict variable (I guess you use one too, to map the words to their ids), the main pre-process and post-process functions are defined here:

\n\n

https://www.tensorflow.org/programmers_guide/saved_model

\n\n

as export_outputs and serving_input_receiver_fn.

\n\n\n\n

export_outputs needs to be defined in the EstimatorSpec if you are using estimators. Here is an example for a classification algorithm:

\n\n
  predicted_classes = tf.argmax(logits, 1)\n  categories_tensor = tf.convert_to_tensor(CATEGORIES, tf.string)\n  export_outputs = { \"categories\": export_output.ClassificationOutput(classes=categories_tensor) }\n  if mode == tf.estimator.ModeKeys.PREDICT:\n    return tf.estimator.EstimatorSpec(\n        mode=mode,\n        predictions={\n            'class': predicted_classes,\n            'prob': tf.nn.softmax(logits)\n        },\n        export_outputs=export_outputs)\n
\n\n\n\n

serving_input_receiver_fn needs to be defined before exporting the trained estimator model. It assumes the input is a raw string and parses your input from there; you can write your own function, but I'm unsure whether you can use external variables. Here is a simple example for a classification algorithm:

\n\n
def serving_input_receiver_fn():\n    feature_spec = { \"words\": tf.FixedLenFeature(dtype=tf.int64, shape=[4]) }\n    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()\n\n  export_dir = classifier.export_savedmodel(export_dir_base=args.job_dir,\n                                            serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

hope it helps.

\n", + "system": "" + }, + { + "instruction": "How to disable printing reports after each epoch in Keras?", + "input": "", + "output": "

Set verbose=0 to the fit method of your model.

\n", + "system": "" + }, + { + "instruction": "What is the relationship between steps and epochs in TensorFlow?", + "input": "", + "output": "

TL;DR: An epoch is when your model goes through your whole training data once. A step is when your model trains on a single batch (or a single sample if you send samples one by one). Training for 5 epochs on 1000 samples with 10 samples per batch will take 500 steps.

\n\n

The contrib.learn.io module is not documented very well, but it seems that the numpy_input_fn() function takes some numpy arrays and batches them together as input for a classifier. So the number of epochs probably means \"how many times to go through the input data I have before stopping\". In this case, they feed two arrays of length 4 in 4-element batches, so it will just mean that the input function will do this at most 1000 times before raising an \"out of data\" exception. The steps argument in the estimator fit() function is how many times the estimator should run the training loop. This particular example is somewhat perverse, so let me make up another one to make things a bit clearer (hopefully).

\n\n

Let's say you have two numpy arrays (samples and labels) that you want to train on. They are 100 elements each. You want your training to take batches with 10 samples per batch. So after 10 batches you will have gone through all of your training data. That is one epoch. If you set your input generator to 10 epochs, it will go through your training set 10 times before stopping; that is, it will generate at most 100 batches.

\n\n

Again, the io module is not documented, but considering how other input-related APIs in tensorflow work, it should be possible to make it generate data for an unlimited number of epochs, so the only thing controlling the length of training is going to be the steps. This gives you some extra flexibility in how you want your training to progress. You can go a number of epochs at a time, or a number of steps at a time, or both, or whatever.
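The arithmetic in the made-up example can be written down directly (plain Python, just restating the relationship above):

```python
import math

n_samples = 100
batch_size = 10
epochs = 10

steps_per_epoch = math.ceil(n_samples / batch_size)  # 10 batches = one full pass
max_steps = steps_per_epoch * epochs                 # input generator is exhausted here

print(steps_per_epoch, max_steps)  # 10 100
# If fit(steps=30) is requested, training stops after 3 epochs' worth of batches;
# if steps > 100, the input function runs out of data first.
```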

\n", + "system": "" + }, + { + "instruction": "Faster RCNN for TensorFlow", + "input": "", + "output": "

Tensorflow has just released an official Object Detection API here, that can be used for instance with their various slim models.

\n\n

This API contains implementations of various pipelines for object detection, including the popular Faster RCNN, along with their pre-trained models.

\n", + "system": "" + }, + { + "instruction": "Tensor flow toggle between CPU/GPU", + "input": "", + "output": "

To make GPU invisible

\n\n
export CUDA_VISIBLE_DEVICES=\"\"\n
\n\n

To return to normal

\n\n
unset CUDA_VISIBLE_DEVICES\n
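The same toggle works from inside Python, as long as the variable is set before tensorflow is imported anywhere in the process (the runtime reads it once, at initialisation):

```python
import os

# Must run before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = ""    # hide all GPUs -> CPU only
# os.environ["CUDA_VISIBLE_DEVICES"] = "0" # expose only the first GPU

print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))
```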
\n", + "system": "" + }, + { + "instruction": "Choosing between GeForce or Quadro GPUs to do machine learning via TensorFlow", + "input": "", + "output": "

I think the GeForce TITAN is great and is widely used in Machine Learning (ML). In ML, single precision is enough in most cases.

\n\n

More detail on the performance of the GTX line (currently GeForce 10) can be found in Wikipedia, here.

\n\n

Other sources around the web support this claim. Here is a quote from doc-ok in 2013 (permalink).

\n\n
\n

For comparison, an \u201centry-level\u201d $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance-equivalent to a GeForce GTX 680 I could find was a Quadro 6000 for a whopping $3660.

\n
\n\n

Specific to ML, including deep learning, there is a Kaggle forum discussion dedicated to this subject (Dec 2014, permalink), which goes over comparisons between the Quadro, GeForce, and Tesla series:

\n\n
\n

Quadro GPUs aren't for scientific computation, Tesla GPUs are. Quadro\n cards are designed for accelerating CAD, so they won't help you to\n train neural nets. They can probably be used for that purpose just\n fine, but it's a waste of money.

\n \n

Tesla cards are for scientific computation, but they tend to be pretty\n expensive. The good news is that many of the features offered by Tesla\n cards over GeForce cards are not necessary to train neural networks.

\n \n

For example, Tesla cards usually have ECC memory, which is nice to\n have but not a requirement. They also have much better support for\n double precision computations, but single precision is plenty for\n neural network training, and they perform about the same as GeForce\n cards for that.

\n \n

One useful feature of Tesla cards is that they tend to have a lot\n more RAM than comparable GeForce cards. More RAM is always welcome if\n you're planning to train bigger models (or use RAM-intensive\n computations like FFT-based convolutions).

\n \n

If you're choosing between Quadro and GeForce, definitely pick\n GeForce. If you're choosing between Tesla and GeForce, pick GeForce,\n unless you have a lot of money and could really use the extra RAM.

\n
\n\n

NOTE: Be careful about what platform you are working on and what its default precision is. For example, here in the CUDA forums (August 2016), one developer owns two Titan X's (GeForce series) and doesn't see a performance gain in any of their R or Python scripts. This is diagnosed as a result of R defaulting to double precision, which performs worse on the new GPU than on their CPU (a Xeon processor). Tesla GPUs are cited as the best performers for double precision. In this case, converting all numbers to float32 increases performance from 12.437s with nvBLAS to 0.324s with gmatrix+float32s on one TITAN X (see the first benchmark). Quoting from this forum discussion:

\n\n
\n

Double precision performance of Titan X is pretty low.

\n
\n", + "system": "" + }, + { + "instruction": "Where is the folder for Installing tensorflow with pip, Mac OSX?", + "input": "", + "output": "

Installing with pip installs the packages to the directory \"site-packages\".

\n\n

The following command shows the location of tensorflow, as well as where pip installs packages:

\n\n
$ pip show tensorflow\n
\n\n

Which returns:

\n\n
Metadata-Version: 2.0\nName: tensorflow\nVersion: 0.5.0\nSummary: TensorFlow helps the tensors flow\nHome-page: http://tensorflow.com/\nAuthor: Google Inc.\nAuthor-email: opensource@google.com\nLicense: Apache 2.0\nLocation: /usr/local/lib/python2.7/site-packages\nRequires: six, numpy\n
\n\n

Here, Location: shows where the package is installed. You can go there with:

\n\n
$ cd /usr/local/lib/python2.7/site-packages/tensorflow\n
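If you just want the directory pip installs into, without querying a particular package, the standard library can report it directly (sysconfig is available in Python 2.7 and 3):

```python
import sysconfig

# "purelib" is where pure-Python packages such as tensorflow's Python code land.
purelib = sysconfig.get_paths()["purelib"]
print(purelib)  # e.g. /usr/local/lib/python2.7/site-packages
```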
\n\n
\n\n

EDIT:

\n\n

As some people have pointed out, in newer versions of tensorflow, and depending on $ echo $TENSORFLOW, you need to look in either

\n\n
$ cd /usr/local/lib/python{2,3}.X/{site,dist}-packages/tensorflow\n
\n\n

Or

\n\n
$ cd /usr/local/lib/python2.7/dist-packages/tensorflow/include/tensorflow/core/framework\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'setuptools._distutils' has no attribute 'version'", + "input": "", + "output": "

This is a known bug which has been patched: https://github.com/pytorch/pytorch/pull/69904

\n

You can either use the nightly release of PyTorch, or downgrade setuptools to version 59.5.0:

\n

pip install setuptools==59.5.0

\n", + "system": "" + }, + { + "instruction": "Tensorflow tf.data AUTOTUNE", + "input": "", + "output": "

tf.data builds a performance model of the input pipeline and runs an optimization algorithm to find a good allocation of its CPU budget across all parameters specified as AUTOTUNE. While the input pipeline is running, tf.data tracks the time spent in each operation, so that these times can be fed into the optimization algorithm.

\n

The OptimizationOptions object gives some control over how autotune will behave.

\n", + "system": "" + }, + { + "instruction": "Machine Learning : Tensorflow v/s Tensorflow.js v/s Brain.js", + "input": "", + "output": "

The speeds are different: Tensorflow > tfjs > brain.js. The TensorFlow Python package runs natively compiled CPU and GPU kernels, whereas tfjs is JavaScript that is compiled on the client and has to go through the browser's <canvas> (WebGL) to access the GPU, the same as brain.js (I am not sure if brain.js is GPU-accelerated)

\n\n

Another thing is that tensorflow is a whole ecosystem, whose versions are kept in sync across the different platforms, so it is really easy to port your python (keras) model to tfjs, and if you know how to code a tensorflow model you can do it in any language.

\n\n

And if you're using nodejs, the only reason to stay with tfjs and not switch to python is that you like the JavaScript language better, or you are forced to use it because you are working in a JS backend.

\n\n

PS:\nA new library was just released (ML5), which is a wrapper for tfjs and adds a lot of stuff that helps you build and use models without having a deep machine learning background.

\n", + "system": "" + }, + { + "instruction": "What is the definition of a non-trainable parameter?", + "input": "", + "output": "

In keras, non-trainable parameters (as shown in model.summary()) means the number of weights that are not updated during training with backpropagation.

\n\n

There are mainly two types of non-trainable weights:

\n\n\n\n

Weights are the values inside the network that perform the operations and can be adjusted to produce the result we want. The backpropagation algorithm changes the weights towards a lower error.

\n\n

By default, all weights in a keras model are trainable.

\n\n

When you create layers, internally it creates its own weights and they're trainable. (The backpropagation algorithm will update these weights)

\n\n

When you make them untrainable, the algorithm will no longer update these weights. This is useful, for instance, when you want a convolutional layer with a specific filter, like a Sobel filter. You don't want the training to change this operation, so these weights/filters should be kept constant.

\n\n

There are a lot of other reasons why you might want to make weights untrainable.

\n\n
\n\n

Changing parameters:

\n\n

To decide whether weights are trainable or not, you take layers from the model and set trainable:

\n\n
model.get_layer(layerName).trainable = False #or True\n
\n\n

This must be done before compilation.
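The numbers model.summary() reports are just sums of per-layer weight counts; freezing a layer moves its count from trainable to non-trainable without changing the total. A small sketch using the standard Dense-layer formula, params = inputs * units + units (kernel plus bias):

```python
def dense_param_count(in_features, units):
    return in_features * units + units  # kernel weights + bias

layers = [
    {"params": dense_param_count(784, 128), "trainable": False},  # frozen layer
    {"params": dense_param_count(128, 10), "trainable": True},
]

trainable = sum(l["params"] for l in layers if l["trainable"])
non_trainable = sum(l["params"] for l in layers if not l["trainable"])
print(trainable, non_trainable)  # 1290 100480
```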

\n", + "system": "" + }, + { + "instruction": "What is the difference between model.fit() an model.evaluate() in Keras?", + "input": "", + "output": "

fit() is for training the model with the given inputs (and corresponding training labels).

\n\n

evaluate() is for evaluating the already trained model using the validation (or test) data and the corresponding labels. Returns the loss value and metrics values for the model.

\n\n

predict() is for the actual prediction. It generates output predictions for the input samples.

\n\n

Let us consider a simple regression example:

\n\n
import numpy as np\n\n# input and output\nx = np.random.uniform(0.0, 1.0, (200))\ny = 0.3 + 0.6*x + np.random.normal(0.0, 0.05, len(x))\n
\n\n

\"enter

\n\n

Now let's apply a regression model in keras:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\n\n# A simple regression model\nmodel = Sequential()\nmodel.add(Dense(1, input_shape=(1,)))\nmodel.compile(loss='mse', optimizer='rmsprop')\n\n# The fit() method - trains the model\nmodel.fit(x, y, nb_epoch=1000, batch_size=100)\n\nEpoch 1000/1000\n200/200 [==============================] - 0s - loss: 0.0023\n\n# The evaluate() method - gets the loss statistics\nmodel.evaluate(x, y, batch_size=200)     \n# returns: loss: 0.0022612824104726315\n\n# The predict() method - predict the outputs for the given inputs\nmodel.predict(np.expand_dims(x[:3],1)) \n# returns: [ 0.65680361],[ 0.70067143],[ 0.70482892]\n
\n", + "system": "" + }, + { + "instruction": "Keras: How to get layer shapes in a Sequential model", + "input": "", + "output": "

If you want the output printed in a fancy way:

\n\n
model.summary()\n
\n\n

If you want the sizes in an accessible form

\n\n
for layer in model.layers:\n    print(layer.get_output_at(0).get_shape().as_list())\n
\n\n

There are probably better ways to access the shapes than this. Thanks to Daniel for the inspiration.

\n", + "system": "" + }, + { + "instruction": "After building TensorFlow from source, seeing libcudart.so and libcudnn errors", + "input": "", + "output": "

First, for the following error:

\n\n
\n

ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

\n
\n\n

make sure your LD_LIBRARY_PATH includes the lib64 directory under whichever path you installed your cuda package in. You can do this by adding an export line to your .bashrc. For Omar, it looked like the following:

\n\n
\n

I fixed this just adding the cuda path to my .bashrc

\n \n
\n

export LD_LIBRARY_PATH=/usr/local/cuda/lib64/

\n
\n
\n\n
\n\n

For me, I had to add Omar's line and also:\n export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/\nbecause I have two directories involving cuda (probably not the best setup).

\n\n
\n\n

Second, are you sure you installed cuDNN? Note that this is different from the regular cuda package. You will need to register, then download and install the package from the following page:\nhttps://developer.nvidia.com/cudnn

\n\n
\n\n

Third, I had this same problem:

\n\n
\n

ImportError: libcudnn.5: cannot open shared object file: No such file or directory

\n
\n\n

It turns out there was no libcudnn.5 in my /usr/local/cuda/lib64 or /usr/local/cuda-8.0/lib64 directories. However, I did have a libcudnn.so.6.* file. To solve the problem, I created a soft link:

\n\n
ln -s libcudnn.so.6.* libcudnn.so.5\n
\n\n

in my /usr/local/cuda/lib64 directory. Now everything works for me. Your directory might be different if you already had cuDNN, and your libcudnn.so.6.* might be a different version, so check that.

\n", + "system": "" + }, + { + "instruction": "How does Keras define "accuracy" and "loss"?", + "input": "", + "output": "

Have a look at metrics.py, where you can find the definitions of all available metrics, including different types of accuracy. Accuracy is not printed unless you add it to the list of desired metrics when you compile your model.

\n\n

Regularizers are by definition added to the loss. For example, see the add_loss method of the Layer class.

\n\n

Update

\n\n

The type of accuracy is determined based on the objective function, see training.py. The default choice is categorical_accuracy. Other types like binary_accuracy and sparse_categorical_accuracy are selected when the objective function is either binary or sparse.

\n", + "system": "" + }, + { + "instruction": "TensorFlow: numpy.repeat() alternative", + "input": "", + "output": "

You can achieve the effect of np.repeat() using a combination of tf.tile() and tf.reshape():

\n\n
idx = tf.range(len(yp))\nidx = tf.reshape(idx, [-1, 1])    # Convert to a len(yp) x 1 matrix.\nidx = tf.tile(idx, [1, len(yp)])  # Create multiple columns.\nidx = tf.reshape(idx, [-1])       # Convert back to a vector.\n
\n\n

You can simply compute jdx using tf.tile():

\n\n
jdx = tf.range(len(yp))\njdx = tf.tile(jdx, [len(yp)])\n
\n\n

For the indexing, you could try using tf.gather() to extract non-contiguous slices from the yp tensor:

\n\n
s = tf.gather(yp, idx) - tf.gather(yp, jdx)\n
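The equivalence with np.repeat() is easy to check in plain numpy, using the same tile/reshape construction outside TF:

```python
import numpy as np

n = 3
# what np.repeat gives: each index repeated n times
idx_np = np.repeat(np.arange(n), n)   # [0 0 0 1 1 1 2 2 2]

# the tile+reshape construction from the answer, in numpy terms
idx = np.arange(n).reshape(-1, 1)     # column vector
idx = np.tile(idx, (1, n))            # n identical columns
idx = idx.reshape(-1)                 # back to a vector
assert (idx == idx_np).all()

# jdx is a plain tile: [0 1 2 0 1 2 0 1 2]
jdx = np.tile(np.arange(n), n)
print(idx.tolist(), jdx.tolist())
```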
\n", + "system": "" + }, + { + "instruction": "After building TensorFlow from source, seeing libcudart.so and libcudnn errors", + "input": "", + "output": "

First, for the following error:

\n\n
\n

ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

\n
\n\n

make sure your LD_LIBRARY_PATH includes the lib64 directory under whichever path you installed your CUDA package. You can do this by adding an export line to your .bashrc. For Omar, it looked like the following:

\n\n
\n

I fixed this just adding the cuda path to my .bashrc

\n \n
\n

export LD_LIBRARY_PATH=/usr/local/cuda/lib64/

\n
\n
\n\n
\n\n

For me, I had to do Omar's line and also:\n export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/\nbecause I have two directories involving cuda (probably not the best).

\n\n
\n\n

Second, are you sure you installed cuDNN? Note that this is different from the regular cuda package. You will need to register, then download and install the package from the following page:\nhttps://developer.nvidia.com/cudnn

\n\n
\n\n

Third, I had this same problem:

\n\n
\n

ImportError: libcudnn.5: cannot open shared object file: No such file or directory

\n
\n\n

It turns out there is no libcudnn.5 in my /usr/local/cuda/lib64 or /usr/local/cuda-8.0/lib64 directories. However, I do have a libcudnn.so.6.* file. To solve the problem, I created a soft link:

\n\n
ln -s libcudnn.so.6.* libcudnn.so.5\n
\n\n

in my /usr/local/cuda/lib64 directory. Now everything works for me. Your directory might be different if you already had cuDNN, and your libcudnn.so.6.* might be a different version, so check that.

\n", + "system": "" + }, + { + "instruction": "How to convert keras(h5) file to a tflite file?", + "input": "", + "output": "
from tensorflow.contrib import lite\nconverter = lite.TFLiteConverter.from_keras_model_file( 'model.h5')\ntfmodel = converter.convert()\nopen (\"model.tflite\" , \"wb\") .write(tfmodel)\n
\n\n

You can use the TFLiteConverter to directly convert .h5 files to .tflite files.\nThis does not work on Windows.

\n\n

For Windows, use this Google Colab notebook to convert. Upload the .h5 file and it will convert it to a .tflite file.

\n\n

Follow these steps if you want to try it yourself:

\n\n
    \n
1. Create a Google Colab Notebook. In the top left corner, click the \"UPLOAD\" button and upload your .h5 file.
  2. \n
  3. Create a code cell and insert this code.

    \n\n
    from tensorflow.contrib import lite\nconverter = lite.TFLiteConverter.from_keras_model_file( 'model.h5' ) # Your model's name\nmodel = converter.convert()\nfile = open( 'model.tflite' , 'wb' ) \nfile.write( model )\n
  4. \n
  5. Run the cell. You will get a model.tflite file. Right click on the file and select \"DOWNLOAD\" option.

  6. \n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory", + "input": "", + "output": "

Found the solution:

\n\n

I reinstalled nvidia-381, CUDA-8.0 (using the runfile) and cuDNN 6.0. Then I added the following in my .bashrc:

\n\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context'", + "input": "", + "output": "

Instead of:

\n
import keras\n
\n

Try:

\n
from tensorflow import keras \n
\n", + "system": "" + }, + { + "instruction": "Why Bert transformer uses [CLS] token for classification instead of average over all tokens?", + "input": "", + "output": "

BERT is designed primarily for transfer learning, i.e., finetuning on task-specific datasets. If you average the states, every state is averaged with the same weight: including stop words or other stuff that are not relevant for the task. The [CLS] vector gets computed using self-attention (like everything in BERT), so it can only collect the relevant information from the rest of the hidden states. So, in some sense the [CLS] vector is also an average over token vectors, only more cleverly computed, specifically for the tasks that you fine-tune on.

\n

Also, my experience is that when I keep the weights fixed and do not fine-tune BERT, using the token average yields better results.

\n", + "system": "" + }, + { + "instruction": "parallelising tf.data.Dataset.from_generator", + "input": "", + "output": "

\nTurns out I can use Dataset.map if I make the generator super lightweight (only generating metadata) and then move the actual heavy lifting into a stateless function. This way I can parallelise just the heavy lifting part with .map using a py_func.

\n\n

Works; but feels a tad clumsy... Would be great to be able to just add num_parallel_calls to from_generator :)

\n\n
def pure_numpy_and_pil_complex_calculation(metadata, label):\n  # some complex pil and numpy work nothing to do with tf\n  ...\n\ndataset = tf.data.Dataset.from_generator(lightweight_generator,\n                                         output_types=(tf.string,   # metadata\n                                                       tf.string))  # label\n\ndef wrapped_complex_calculation(metadata, label):\n  return tf.py_func(func = pure_numpy_and_pil_complex_calculation,\n                    inp = (metadata, label),\n                    Tout = (tf.uint8,    # (H,W,3) img\n                            tf.string))  # label\ndataset = dataset.map(wrapped_complex_calculation,\n                      num_parallel_calls=8)\n\ndataset = dataset.batch(64)\niter = dataset.make_one_shot_iterator()\nimgs, labels = iter.get_next()\n
\n", + "system": "" + }, + { + "instruction": "CUDA_HOME path for Tensorflow", + "input": "", + "output": "

Run the following command in the terminal:

\n\n
export CUDA_HOME=/usr/local/cuda-X.X\n
\n\n

Where you replace X.X by the first two digits of your version number (can be found out e.g. via nvcc --version).

\n", + "system": "" + }, + { + "instruction": "LSTM Autoencoder", + "input": "", + "output": "

Models can be any way you want. If I understood it right, you just want to know how to create models with LSTM?

\n\n

Using LSTMs

\n\n

Well, first, you have to define what your encoded vector looks like. Suppose you want it to be an array of 20 elements, a 1-dimensional vector. So, shape (None,20). The size of it is up to you, and there is no clear rule to know the ideal one.

\n\n

And your input must be three-dimensional, such as your (1200,10,5). In keras summaries and error messages, it will be shown as (None,10,5), as \"None\" represents the batch size, which can vary each time you train/predict.

\n\n

There are many ways to do this, but, suppose you want only one LSTM layer:

\n\n
from keras.layers import *\nfrom keras.models import Model\n\ninpE = Input((10,5)) #here, you don't define the batch size   \noutE = LSTM(units = 20, return_sequences=False, ...optional parameters...)(inpE)\n
\n\n

This is enough for a very very simple encoder resulting in an array with 20 elements (but you can stack more layers if you want). Let's create the model:

\n\n
encoder = Model(inpE,outE)   \n
\n\n

Now, for the decoder, it gets obscure. You don't have an actual sequence anymore, but a static meaningful vector. You may want to use LSTMs still; they will treat the vector as a sequence.

\n\n

But here, since the input has shape (None,20), you must first reshape it to some 3-dimensional array in order to attach an LSTM layer next.

\n\n

The way you will reshape it is entirely up to you. 20 steps of 1 element? 1 step of 20 elements? 10 steps of 2 elements? Who knows?

\n\n
inpD = Input((20,))   \noutD = Reshape((10,2))(inpD) #supposing 10 steps of 2 elements    \n
\n\n

It's important to notice that if you don't have 10 steps anymore, you won't be able to just enable \"return_sequences\" and have the output you want. You'll have to work a little. Actually, it's not necessary to use \"return_sequences\" or even to use LSTMs, but you may do that.

\n\n

Since in my reshape I have 10 timesteps (intentionally), it will be ok to use \"return_sequences\", because the result will have 10 timesteps (as the initial input)

\n\n
outD1 = LSTM(5,return_sequences=True,...optional parameters...)(outD)    \n#5 cells because we want a (None,10,5) vector.   \n
\n\n

You could work in many other ways, such as simply creating a 50 cell LSTM without returning sequences and then reshaping the result:

\n\n
alternativeOut = LSTM(50,return_sequences=False,...)(outD)    \nalternativeOut = Reshape((10,5))(alternativeOut)\n
\n\n

And our model goes:

\n\n
decoder = Model(inpD,outD1)  \nalternativeDecoder = Model(inpD,alternativeOut)   \n
\n\n

After that, you unite the models with your code and train the autoencoder. \nAll three models will have the same weights, so you can make the encoder bring results just by using its predict method.

\n\n
encoderPredictions = encoder.predict(data)\n
\n\n
\n\n

What I often see about LSTMs for generating sequences is something like predicting the next element.

\n\n

You take just a few elements of the sequence and try to find the next element. And you take another segment one step forward and so on. This may be helpful in generating sequences.

\n", + "system": "" + }, + { + "instruction": "Running Tensorflow in Jupyter Notebook", + "input": "", + "output": "

I ran into the same situation. This is how I sorted it out:

\n
    \n
  1. Install Anaconda
  2. \n
  3. Create a virtual environment - conda create -n tensorflow
  4. \n
  5. Go inside your virtual environment - (on macOS/Linux:) source activate tensorflow (on Windows: activate tensorflow)
  6. \n
  7. Inside that install tensorflow. You can install it using pip
  8. \n
  9. Finish install
  10. \n
\n

So then the next thing, when you launch it:

\n
    \n
1. If you are not inside the virtual environment, type source activate tensorflow
  2. \n
3. Then, inside this environment, install the Jupyter Notebook and Pandas libraries again, because they can be missing from this virtual environment
  4. \n
\n

Inside the virtual environment just type:

\n
    \n
  1. pip install jupyter notebook
  2. \n
  3. pip install pandas
  4. \n
\n

Then you can launch jupyter notebook saying:

\n
    \n
  1. jupyter notebook
  2. \n
  3. Select the correct terminal python 3 or 2
  4. \n
  5. Then import those modules
  6. \n
\n", + "system": "" + }, + { + "instruction": "Update TensorFlow", + "input": "", + "output": "
(tensorflow)$ pip install --upgrade pip  # for Python 2.7\n(tensorflow)$ pip3 install --upgrade pip # for Python 3.n\n\n(tensorflow)$ pip install --upgrade tensorflow      # for Python 2.7\n(tensorflow)$ pip3 install --upgrade tensorflow     # for Python 3.n\n(tensorflow)$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU\n(tensorflow)$ pip3 install --upgrade tensorflow-gpu # for Python 3.n and GPU\n\n(tensorflow)$ pip install --upgrade tensorflow-gpu==1.4.1 # for a specific version\n
\n\n

Details on install tensorflow.

\n", + "system": "" + }, + { + "instruction": "How to convert one-hot encodings into integers?", + "input": "", + "output": "

You can use numpy.argmax or tf.argmax. Example:

\n\n
import numpy as np  \na  = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1]])\nprint('np.argmax(a, axis=1): {0}'.format(np.argmax(a, axis=1)))\n
\n\n

output:

\n\n
np.argmax(a, axis=1): [1 0 3]\n
\n\n

You may also want to look at sklearn.preprocessing.LabelBinarizer.inverse_transform.

\n", + "system": "" + }, + { + "instruction": "Linear vs nonlinear neural network?", + "input": "", + "output": "

For starters, a neural network can model any function (not just linear functions). Have a look at this - http://neuralnetworksanddeeplearning.com/chap4.html.

\n\n

A neural network has non-linear activation layers, which is what gives it its non-linear element.

\n\n

The function for relating the input and the output is decided by the neural network and the amount of training it gets. If you supply two variables having a linear relationship, then your network will learn this as long as you don't overfit. Similarly, a complex enough neural network can learn any function.

\n", + "system": "" + }, + { + "instruction": "How do you read Tensorboard files programmatically?", + "input": "", + "output": "

You can use TensorBoard's Python classes or script to extract the data:

\n

How can I export data from TensorBoard?

\n
\n

If you'd like to export data to visualize elsewhere (e.g. iPython Notebook), that's possible too. You can directly depend on the underlying classes that TensorBoard uses for loading data: python/summary/event_accumulator.py (for loading data from a single run) or python/summary/event_multiplexer.py (for loading data from multiple runs, and keeping it organized). These classes load groups of event files, discard data that was "orphaned" by TensorFlow crashes, and organize the data by tag.

\n

As another option, there is a script (tensorboard/scripts/serialize_tensorboard.py) which will load a logdir just like TensorBoard does, but write all of the data out to disk as json instead of starting a server. This script is setup to make "fake TensorBoard backends" for testing, so it is a bit rough around the edges.

\n
\n

Using EventAccumulator:

\n
# In [1]: from tensorflow.python.summary import event_accumulator  # deprecated\nIn [1]: from tensorboard.backend.event_processing import event_accumulator\n\nIn [2]: ea = event_accumulator.EventAccumulator('events.out.tfevents.x.ip-x-x-x-x',\n   ...:  size_guidance={ # see below regarding this argument\n   ...:      event_accumulator.COMPRESSED_HISTOGRAMS: 500,\n   ...:      event_accumulator.IMAGES: 4,\n   ...:      event_accumulator.AUDIO: 4,\n   ...:      event_accumulator.SCALARS: 0,\n   ...:      event_accumulator.HISTOGRAMS: 1,\n   ...:  })\n\nIn [3]: ea.Reload() # loads events from file\nOut[3]: <tensorflow.python.summary.event_accumulator.EventAccumulator at 0x7fdbe5ff59e8>\n\nIn [4]: ea.Tags()\nOut[4]: \n{'audio': [],\n 'compressedHistograms': [],\n 'graph': True,\n 'histograms': [],\n 'images': [],\n 'run_metadata': [],\n 'scalars': ['Loss', 'Epsilon', 'Learning_rate']}\n\nIn [5]: ea.Scalars('Loss')\nOut[5]: \n[ScalarEvent(wall_time=1481232633.080754, step=1, value=1.6365480422973633),\n ScalarEvent(wall_time=1481232633.2001867, step=2, value=1.2162202596664429),\n ScalarEvent(wall_time=1481232633.3877788, step=3, value=1.4660096168518066),\n ScalarEvent(wall_time=1481232633.5749283, step=4, value=1.2405034303665161),\n ScalarEvent(wall_time=1481232633.7419815, step=5, value=0.897326648235321),\n ...]\n
\n

size_guidance:

\n
size_guidance: Information on how much data the EventAccumulator should\n  store in memory. The DEFAULT_SIZE_GUIDANCE tries not to store too much\n  so as to avoid OOMing the client. The size_guidance should be a map\n  from a `tagType` string to an integer representing the number of\n  items to keep per tag for items of that `tagType`. If the size is 0,\n  all events are stored.\n
\n", + "system": "" + }, + { + "instruction": "How to manually create a tf.Summary()", + "input": "", + "output": "

You can create a tf.Summary object in your Python program and write it to the same tf.summary.FileWriter object that takes your TensorFlow-produced summaries using the SummaryWriter.add_summary() method.

\n\n

The tf.Summary class is a Python protocol buffer wrapper for the Summary protocol buffer. Each Summary contains a list of tf.Summary.Value protocol buffers, each of which has a tag and either a \"simple\" (floating-point scalar) value, an image, a histogram, or an audio snippet. For example, you can generate a scalar summary from a Python object as follows:

\n\n
writer = tf.train.SummaryWriter(...)\nvalue = 37.0\nsummary = tf.Summary(value=[\n    tf.Summary.Value(tag=\"summary_tag\", simple_value=value), \n])\nwriter.add_summary(summary)\n
\n", + "system": "" + }, + { + "instruction": "Compute pairwise distance in a batch without replicating tensor in Tensorflow?", + "input": "", + "output": "

You can use some linear algebra to turn it into matrix ops. Note that what you need is the matrix D where a[i] is the ith row of your original matrix and

\n\n
D[i,j] = (a[i]-a[j])(a[i]-a[j])'\n
\n\n

You can rewrite that into

\n\n
D[i,j] = r[i] - 2 a[i]a[j]' + r[j]\n
\n\n

Where r[i] is squared norm of ith row of the original matrix.

\n\n

In a system that supports standard broadcasting rules you can treat r as a column vector and write D as

\n\n
D = r - 2 A A' + r'\n
\n\n

In TensorFlow you could write this as

\n\n
A = tf.constant([[1, 1], [2, 2], [3, 3]])\nr = tf.reduce_sum(A*A, 1)\n\n# turn r into column vector\nr = tf.reshape(r, [-1, 1])\nD = r - 2*tf.matmul(A, tf.transpose(A)) + tf.transpose(r)\nsess = tf.Session()\nsess.run(D)\n
\n\n

result

\n\n
array([[0, 2, 8],\n       [2, 0, 2],\n       [8, 2, 0]], dtype=int32)\n
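The same identity can be sanity-checked in plain numpy against an explicit double loop (a verification sketch, not TF code):

```python
import numpy as np

A = np.array([[1, 1], [2, 2], [3, 3]])

# vectorized: D = r - 2 A A' + r', with r the row-wise squared norms
r = np.sum(A * A, axis=1).reshape(-1, 1)
D = r - 2 * (A @ A.T) + r.T

# reference: explicit double loop over (a[i] - a[j]) (a[i] - a[j])'
D_ref = np.array([[np.dot(A[i] - A[j], A[i] - A[j]) for j in range(3)]
                  for i in range(3)])

print(D.tolist())  # [[0, 2, 8], [2, 0, 2], [8, 2, 0]]
assert (D == D_ref).all()
```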
\n", + "system": "" + }, + { + "instruction": "od_graph_def = tf.GraphDef() AttributeError: module 'tensorflow' has no attribute 'GraphDef'", + "input": "", + "output": "

Yeah, the syntax has changed in TF 2.0. Here's the correct piece:

\n\n
tf.compat.v1.GraphDef()   # -> instead of tf.GraphDef()\ntf.compat.v2.io.gfile.GFile()   # -> instead of tf.gfile.GFile()\n
\n", + "system": "" + }, + { + "instruction": "How do I split Tensorflow datasets?", + "input": "", + "output": "

You may use Dataset.take() and Dataset.skip():

\n\n
train_size = int(0.7 * DATASET_SIZE)\nval_size = int(0.15 * DATASET_SIZE)\ntest_size = int(0.15 * DATASET_SIZE)\n\nfull_dataset = tf.data.TFRecordDataset(FLAGS.input_file)\nfull_dataset = full_dataset.shuffle(buffer_size=DATASET_SIZE)  # shuffle() requires a buffer_size\ntrain_dataset = full_dataset.take(train_size)\ntest_dataset = full_dataset.skip(train_size)\nval_dataset = test_dataset.skip(test_size)\ntest_dataset = test_dataset.take(test_size)\n
\n\n

For more generality, I gave an example using a 70/15/15 train/val/test split but if you don't need a test or a val set, just ignore the last 2 lines.

\n\n

Take:

\n\n
\n

Creates a Dataset with at most count elements from this dataset.

\n
\n\n

Skip:

\n\n
\n

Creates a Dataset that skips count elements from this dataset.

\n
\n\n

You may also want to look into Dataset.shard():

\n\n
\n

Creates a Dataset that includes only 1/num_shards of this dataset.

\n
\n", + "system": "" + }, + { + "instruction": "Choosing number of Steps per Epoch", + "input": "", + "output": "

Based on what you said it sounds like you need a larger batch_size, and of course there are implications with that which could impact the steps_per_epoch and number of epochs.

\n\n

To solve for jumping-around

\n\n\n\n

Implications of a larger batch-size

\n\n\n\n

When to reduce epochs

\n\n\n\n

When to adjust steps-per-epoch

\n\n\n", + "system": "" + }, + { + "instruction": "No module named 'absl' error when I import tensorflow", + "input": "", + "output": "

This was caused by a Python version issue for me. I had the absl package installed on my Python 2.x, but my Python 3.x didn't have it. So I just made sure that both Pythons on my machine had the package installed:

\n\n

pip install absl-py
\npip3 install absl-py

\n", + "system": "" + }, + { + "instruction": "How do I specify nvidia runtime from docker-compose.yml?", + "input": "", + "output": "

Currently (Aug 2018), NVIDIA container runtime for Docker (nvidia-docker2) supports Docker Compose.

\n\n
\n

Yes, use Compose format 2.3 and add runtime: nvidia to your GPU service. Docker Compose must be version 1.19.0 or higher.

\n
\n\n

Example docker-compose.yml:

\n\n
version: '2.3'\n\nservices:\n  nvsmi:\n    image: ubuntu:16.04\n    runtime: nvidia\n    environment:\n      - NVIDIA_VISIBLE_DEVICES=all\n    command: nvidia-smi\n
\n\n

Another example from the NVIDIA blog uses Docker Compose to show how to launch multiple GPU containers with the NVIDIA Container Runtime.

\n", + "system": "" + }, + { + "instruction": "Tensorflow import error: No module named 'tensorflow'", + "input": "", + "output": "

The reason the Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment.

\n\n

One solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder

\n\n
conda create -n newenvt anaconda python=3.5\nactivate newenvt\n
\n\n

and then install tensorflow into newenvt

\n\n

I found this primer helpful

\n", + "system": "" + }, + { + "instruction": "Tensorflow: loss decreasing, but accuracy stable", + "input": "", + "output": "

A decrease in binary cross-entropy loss does not imply an increase in accuracy. Consider label 1, predictions 0.2, 0.4 and 0.6 at timesteps 1, 2, 3 and classification threshold 0.5. Timesteps 1 and 2 will produce a decrease in loss but no increase in accuracy.

\n\n
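Plugging the numbers in makes this concrete (plain Python, binary cross-entropy computed by hand):

```python
import math

label, threshold = 1, 0.5
preds = [0.2, 0.4, 0.6]  # predictions at timesteps 1..3

# binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))
losses = [-(label * math.log(p) + (1 - label) * math.log(1 - p)) for p in preds]
correct = [(p > threshold) == bool(label) for p in preds]

print([round(l, 3) for l in losses])  # [1.609, 0.916, 0.511] -- loss keeps falling
print(correct)                        # [False, False, True]  -- accuracy flips only at step 3
```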

Ensure that your model has enough capacity by overfitting the training data. If the model is overfitting the training data, avoid overfitting by using regularization techniques such as dropout, L1 and L2 regularization and data augmentation.

\n\n

Last, confirm your validation data and training data come from the same distribution.

\n", + "system": "" + }, + { + "instruction": "looking for source code of from gen_nn_ops in tensorflow", + "input": "", + "output": "

You can't find this source because it is automatically generated by bazel. If you build from source, you'll see the file inside bazel-genfiles. It's also present in your local distribution, which you can locate using the inspect module. The file contains automatically generated Python wrappers to underlying C++ implementations, so it basically consists of a bunch of 1-line functions. A shortcut to find the underlying C++ implementation of such a generated Python op is to convert snake case to camel case, i.e. conv2d_backprop_input -> Conv2dBackpropInput

\n\n
# figure out where gen_nn_ops is\nprint(tf.nn.conv2d_transpose.__globals__['gen_nn_ops'])\n\nfrom tensorflow.python.ops import gen_nn_ops\nimport inspect\ninspect.getsourcefile(gen_nn_ops)  # pass the module object, not a string\n'/Users/yaroslav/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py'\n
\n\n

If you cared to find out how this file really came about, you could follow the trail of bazel dependencies in BUILD files. It to find Bazel target that generated it from tensorflow source tree:

\n\n
fullname=$(bazel query tensorflow/python/ops/gen_nn_ops.py)\nbazel query \"attr('srcs', $fullname, ${fullname//:*/}:*)\"\n\n//tensorflow/python:nn_ops_gen\n
\n\n

So now going to BUILD file inside tensorflow/python you see that this is a target of type tf_gen_op_wrapper_private_py which is defined here and calls gen_op_wrapper_py from tensorflow/tensorflow.bzl which looks like this

\n\n
def tf_gen_op_wrapper_py(name, out=None, hidden=None, visibility=None, deps=[],\n....\n      native.cc_binary(\n      name = tool_name,\n
\n\n

This native.cc_binary construct is a way to have a Bazel target that represents execution of an arbitrary command. In this case it calls tool_name with some arguments. With a couple more steps you can find that the \"tool\" here is compiled from framework/python_op_gen_main.cc

\n\n

The reason for this complication is that TensorFlow was designed to be language agnostic. So in an ideal world you would have each op described in ops.pbtxt, and then each op would have one implementation per hardware type using REGISTER_KERNEL_BUILDER, so all implementations would be done in C++/CUDA/Assembly and become automatically available to all language front-ends. There would be an equivalent translator op like \"python_op_gen_main\" for every language, and all client library code would be automatically generated. However, because Python is so dominant, there was pressure to add features on the Python side. So now there are two kinds of ops -- pure TensorFlow ops seen in files like gen_nn_ops.py, and Python-only ops in files like nn_ops.py, which typically wrap the ops in automatically generated files like gen_nn_ops.py but add extra features/syntax sugar. Also, originally all names were camel-case, but it was decided that the public-facing release should be PEP compliant with more common Python syntax, so this is the reason for the camel-case/snake-case mismatch between the C++/Python interfaces of the same op

\n", + "system": "" + }, + { + "instruction": "Tensorflow variable scope: reuse if variable exists", + "input": "", + "output": "

A ValueError is raised in get_variable() when creating a new variable and shape is not declared, or when violating reuse during variable creation. Therefore, you can try this:

\n\n
def get_scope_variable(scope_name, var, shape=None):\n    with tf.variable_scope(scope_name) as scope:\n        try:\n            v = tf.get_variable(var, shape)\n        except ValueError:\n            scope.reuse_variables()\n            v = tf.get_variable(var)\n    return v\n\nv1 = get_scope_variable('foo', 'v', [1])\nv2 = get_scope_variable('foo', 'v')\nassert v1 == v2\n
\n\n

Note that the following also works:

\n\n
v1 = get_scope_variable('foo', 'v', [1])\nv2 = get_scope_variable('foo', 'v', [1])\nassert v1 == v2\n
\n\n
\n\n

UPDATE. The new API supports auto-reusing now:

\n\n
def get_scope_variable(scope, var, shape=None):\n    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):\n        v = tf.get_variable(var, shape)\n    return v\n
\n", + "system": "" + }, + { + "instruction": "How does TensorFlow name tensors?", + "input": "", + "output": "

Your observations on Tensor naming are absolutely correct: the name of a Tensor is the concatenation of

\n\n
    \n
  1. the name of the operation that produced it,
  2. \n
  3. a colon (:), and
  4. \n
  5. the index of that tensor in the outputs of the operation that produced it.
  6. \n
\n\n

Therefore the tensor named \"foo:2\" is the output of the op named \"foo\" at position 2 (with indices starting from zero).

\n\n

The naming of tf.Variable objects is slightly strange. Every tf.Variable contains a mutable tensor object that holds the state of the variable (and a few other tensors). A \"Variable\" op (which has the name \"variable_name\" in your example) \"produces\" this mutable tensor each time it is run as its 0th output, so the name of the mutable tensor is \"variable_name:0\".

\n\n

Since a tf.Variable is mostly indistinguishable from a tf.Tensor—in that it can be used in the same places—we took the decision to make variable names resemble tensor names, so the Variable.name property returns the name of the mutable tensor. (This contrasts with tf.QueueBase and tf.ReaderBase objects, which are not usable directly as tensors (instead you have to call methods on them to create ops that operate on their state), so these do not have a tensor-like name.)

\n", + "system": "" + }, + { + "instruction": "Tensorflow Precision / Recall / F1 score and Confusion matrix", + "input": "", + "output": "

You do not really need sklearn to calculate precision/recall/f1 score. You can easily express them in TF-ish way by looking at the formulas:

\n\n

\"enter

\n\n

Now if you have your actual and predicted values as vectors of 0/1, you can calculate TP, TN, FP, FN using tf.count_nonzero:

\n\n
TP = tf.count_nonzero(predicted * actual)\nTN = tf.count_nonzero((predicted - 1) * (actual - 1))\nFP = tf.count_nonzero(predicted * (actual - 1))\nFN = tf.count_nonzero((predicted - 1) * actual)\n
\n\n

Now your metrics are easy to calculate:

\n\n
precision = TP / (TP + FP)\nrecall = TP / (TP + FN)\nf1 = 2 * precision * recall / (precision + recall)\n
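The same arithmetic can be verified with numpy on a small example (a sketch, with np.count_nonzero standing in for tf.count_nonzero):

```python
import numpy as np

actual    = np.array([1, 1, 0, 0, 1, 0])
predicted = np.array([1, 0, 0, 1, 1, 0])

# same products as the tf.count_nonzero version above
TP = np.count_nonzero(predicted * actual)              # 2
TN = np.count_nonzero((predicted - 1) * (actual - 1))  # 2
FP = np.count_nonzero(predicted * (actual - 1))        # 1
FN = np.count_nonzero((predicted - 1) * actual)        # 1

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # all three are 2/3 here
```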
\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found", + "input": "", + "output": "

I think I can help you by providing a cudnn64_8.dll file (this is the download link: https://www.dll-files.com/cudnn64_8.dll.html). When you get the file, you can just put it in your bin directory. For example, on Windows, you can put it into C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.3\\bin.

\n", + "system": "" + }, + { + "instruction": "TensorFlow: How and why to use SavedModel", + "input": "", + "output": "

EDIT: I wrote this back at TensorFlow 1.4. As of today (TensorFlow 1.12 is stable, there's a 1.13rc and 2.0 is around the corner) the docs linked in the question are much improved.

\n\n
\n\n

I'm trying to use tf.saved_model and also found the Docs quite (too) abstract. Here's my stab at a full answer to your questions:

\n\n

1. signature_def_map:

\n\n

a. Format See Tom's answer to Tensorflow: how to save/restore a model. (Ctrl-F for \"tf.saved_model\" - currently, the only uses of the phrase on that question are in his answer).

\n\n

b. need It's my understanding that you do normally need it. If you intend to use the model, you need to know the inputs and outputs of the graph. I think it is akin to a C++ function signature: If you intend to define a function after it's called or in another C++ file, you need the signature in your main file (i.e. prototyped or in a header file).

\n\n

2. assets_collection:

\n\n

format: Couldn't find clear documentation, so I went to the builder source code. It appears that the argument is an iterable of Tensors of dtype=tf.string, where each Tensor is a path for the asset directory. So, a TensorFlow Graph collection should work. I guess that is the parameter's namesake, but from the source code I would expect a Python list to work too.

\n\n

(You didn't ask if you need to set it, but judging from Zoe's answer to What are assets in tensorflow? and iga's answer to the tangentially related Tensorflow serving: \u201cNo assets to save/writes\u201d when exporting models, it doesn't usually need set.)

\n\n

3. Tags:

\n\n

a. Why list I don't know why you must pass a list, but you may pass a list with one element. For instance, in my current project I only use the [tf...tag_constants.SERVING] tag.

\n\n

b. When to use multiple Say you're using explicit device placement for operations. Maybe you want to save a CPU version and a GPU version of your graph. Obviously you want to save a serving version of each, and say you want to save training checkpoints. You could use a CPU/GPU tag and a training/serving tag to manage all cases. The docs hint at it:

\n\n
\n

Each MetaGraphDef added to the SavedModel must be annotated with user-specified tags. The tags provide a means to identify the specific MetaGraphDef to load and restore, along with the shared set of variables and assets. These tags typically annotate a MetaGraphDef with its functionality (for example, serving or training), and optionally with hardware-specific aspects (for example, GPU).

\n
\n\n

c. Collision: Too lazy to force a collision myself - I saw two cases that would need to be addressed - so I went to the loader source code. Inside def load, you'll see:

\n\n
saved_model = _parse_saved_model(export_dir)\nfound_match = False\nfor meta_graph_def in saved_model.meta_graphs:\n  if set(meta_graph_def.meta_info_def.tags) == set(tags):\n    meta_graph_def_to_load = meta_graph_def\n    found_match = True\n    break\n\nif not found_match:\n  raise RuntimeError(\n      \"MetaGraphDef associated with tags \" + str(tags).strip(\"[]\") +\n      \" could not be found in SavedModel. To inspect available tag-sets in\"\n      \" the SavedModel, please use the SavedModel CLI: `saved_model_cli`\"\n  )\n
\n\n

It appears to me that it's looking for an exact match. E.g. say you have a metagraph with tags \"GPU\" and \"Serving\" and a metagraph with tag \"Serving\". If you load \"Serving\", you'll get the latter metagraph. On the other hand, say you have a metagraph \"GPU\" and \"Serving\" and a metagraph \"CPU\" and \"Serving\". If you try to load \"Serving\", you'll get the error. If you try to save two metagraphs with the exact same tags in the same folder, I expect you'll overwrite the first one. It doesn't look like the builder code handles such a collision in any special way.

\n\n

4. SavedModel or tf.train.Saver:

\n\n

This confused me too. wicke's answer to Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef? cleared it up for me. I'll throw in my two cents:

\n\n

In the scope of local Python+TensorFlow, you can make tf.train.Saver do everything. But, it will cost you. Let me outline the save-a-trained-model-and-deploy use case. You'll need your saver object. It's easiest to set it up to save the complete graph (every variable). You probably don't want to save the .meta all the time since you're working with a static graph. You'll need to specify that in your training hook. You can read about that on cv-tricks. When your training finishes, you'll need to convert your checkpoint file to a pb file. That usually means clearing the current graph, restoring the checkpoint, freezing your variables to constants with tf.python.framework.graph_util, and writing it with tf.gfile.GFile. You can read about that on medium. After that, you want to deploy it in Python. You'll need the input and output Tensor names - the string names in the graph def. You can read about that on metaflow (actually a very good blog post for the tf.train.Saver method). Some op nodes will let you feed data into them easily. Some not so much. I usually gave up on finding an appropriate node and added a tf.reshape that didn't actually reshape anything to the graph def. That was my ad-hoc input node. Same for the output. And then finally, you can deploy your model, at least locally in Python.

\n\n

Or, you could use the answer I linked in point 1 to accomplish all this with the SavedModel API. Fewer headaches thanks to Tom's answer. You'll get more support and features in the future if it ever gets documented appropriately. Looks like it's easier to use command line serving (the medium link covers doing that with Saver - looks tough, good luck!). It's practically baked into the new Estimators. And according to the Docs,

\n\n
\n

SavedModel is a language-neutral, recoverable, hermetic serialization format.

\n
\n\n

Emphasis mine: Looks like you can get your trained models into the growing C++ API much easier.

\n\n

The way I see it, it's like the Datasets API. It's just easier than the old way!

\n\n

As far as concrete examples of SavedModel vs tf.train.Saver: if \"basically, when you want to save or restore your model\" isn't clear enough for you, the correct time to use it is any time it makes your life easier. To me, that looks like always. Especially if you're using Estimators, deploying in C++, or using command line serving.

\n\n

So that's my research on your question. Or four enumerated questions. Err, eight question marks. Hope this helps.

\n", + "system": "" + }, + { + "instruction": "What are the differences between all these cross-entropy losses in Keras and TensorFlow?", + "input": "", + "output": "

There is just one cross (Shannon) entropy defined as:

\n\n
H(P||Q) = - SUM_i P(X=i) log Q(X=i)\n
\n\n

In machine learning usage, P is the actual (ground truth) distribution, and Q is the predicted distribution. All the functions you listed are just helper functions which accept different ways to represent P and Q.

\n\n

There are basically 3 main things to consider:

\n\n
the number of possible outcomes (2 for binary classification, more than 2 otherwise)

whether Q contains probabilities or raw scores (logits)

whether targets in P are soft (a full distribution) or hard (a single class index)
\n\n

Depending on these three aspects, different helper function should be used:

\n\n
                                  outcomes     what is in Q    targets in P   \n-------------------------------------------------------------------------------\nbinary CE                                2      probability         any\ncategorical CE                          >2      probability         soft\nsparse categorical CE                   >2      probability         hard\nsigmoid CE with logits                   2      score               any\nsoftmax CE with logits                  >2      score               soft\nsparse softmax CE with logits           >2      score               hard\n
\n\n
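To see why the fused `*_with_logits` helpers matter numerically, here is a minimal pure-Python sketch (not the actual TF implementation) comparing the naive sigmoid-then-cross-entropy computation with the numerically stable fused form max(x, 0) - x*z + log(1 + exp(-|x|)):

```python
import math

def naive_sigmoid_ce(x, z):
    # Sigmoid followed by cross entropy: breaks down for large |x|,
    # where sigmoid(x) rounds to exactly 0.0 or 1.0 and log() blows up.
    q = 1.0 / (1.0 + math.exp(-x))
    return -(z * math.log(q) + (1 - z) * math.log(1 - q))

def stable_sigmoid_ce(x, z):
    # Fused, numerically stable form (same idea as TF's
    # sigmoid_cross_entropy_with_logits).
    return max(x, 0) - x * z + math.log1p(math.exp(-abs(x)))

# Both agree for moderate logits...
print(abs(naive_sigmoid_ce(2.0, 1.0) - stable_sigmoid_ce(2.0, 1.0)) < 1e-9)  # -> True

# ...but only the stable version survives a large logit with label 0,
# where naive_sigmoid_ce would attempt log(0):
print(stable_sigmoid_ce(40.0, 0.0))
```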

In the end one could just use \"categorical cross entropy\", as this is how it is mathematically defined, however since things like hard targets or binary classification are very popular - modern ML libraries do provide these additional helper functions to make things simpler. In particular \"stacking\" sigmoid and cross entropy might be numerically unstable, but if one knows these two operations are applied together - there is a numerically stable version of them combined (which is implemented in TF).

\n\n

It is important to notice that if you apply the wrong helper function the code will usually still execute, but the results will be wrong. For example, if you apply a softmax_* helper to binary classification with one output, your network will be considered to always produce \"True\" at the output.

\n\n

As a final note - this answer considers classification; it is slightly different when you consider the multi-label case (when a single point can have multiple labels), as then the Ps do not sum to 1, and one should use sigmoid_cross_entropy_with_logits despite having multiple output units.

\n", + "system": "" + }, + { + "instruction": "How to apply Drop Out in Tensorflow to improve the accuracy of neural network?", + "input": "", + "output": "

In the graph, I'd suggest moving keep_prob = tf.placeholder(tf.float32) outside of the model function to make it global.

\n\n
with graph.as_default():\n    ...\n    x = tf.placeholder(\"float\", [None, n_input])\n    y = tf.placeholder(\"float\", [None, n_classes])\n    keep_prob = tf.placeholder(tf.float32)\n\n    def model(x, weights_hiden, weights_out, biases_hidden, biases_out, keep_prob):\n        # hidden layer with RELU activation\n        layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights_hiden), biases_hidden))\n        # apply DropOut to hidden layer\n        drop_out = tf.nn.dropout(layer_1, keep_prob)  # DROP-OUT here\n        # output layer with linear activation\n        out_layer = tf.matmul(drop_out, weights_out) + biases_out\n        return out_layer\n    ...\n
\n\n

When running the session, feed a desired keep_prob value during training time, and feed 1.0 to keep_prob during inference (validation and/or testing) time.

\n\n
# run the graph\nwith tf.Session(graph=graph) as sess:\n    tf.initialize_all_variables().run()\n    ...\n    for epoch in range(training_epochs):\n        ...\n        for i in range(total_batch):\n            batch_x = ...\n            batch_y = ...\n            # Run optimization op (backprop) and cost op (to get loss value)\n            # Feed a value < 1.0 for keep prob during training\n            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, keep_prob : 0.5})\n    ...\n    # Feed 1.0 for keep prob during testing\n    print(\"Test data accuracy:\", accuracy.eval({x: test_dataset, y: test_labels, keep_prob : 1.0}))\n    print(\"Valid data accuracy:\", accuracy.eval({x: valid_dataset, y: valid_labels, keep_prob : 1.0}))\n
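As a side note, the reason feeding 1.0 at test time works is that tf.nn.dropout uses \"inverted dropout\": surviving units are scaled up by 1/keep_prob during training, so no extra scaling is needed at inference. A small pure-Python sketch of that behaviour (for intuition only, not TF's implementation):

```python
import random

def inverted_dropout(values, keep_prob, rng):
    # Each unit is kept with probability keep_prob and scaled by
    # 1/keep_prob, so the expected activation is unchanged.
    return [v / keep_prob if rng.random() < keep_prob else 0.0
            for v in values]

rng = random.Random(0)
x = [1.0] * 100000
dropped = inverted_dropout(x, 0.5, rng)
# The mean stays close to 1.0 even though ~half the units were zeroed.
print(sum(dropped) / len(dropped))

# keep_prob = 1.0 is a no-op, which is why you feed 1.0 at test time.
print(inverted_dropout([1.0, 2.0], 1.0, rng))  # -> [1.0, 2.0]
```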
\n", + "system": "" + }, + { + "instruction": "Tensorflow get all variables in scope", + "input": "", + "output": "

I think you want tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope'). This will get all variables in a scope.

\n\n

To pass to an optimizer you do not want all variables, just the trainable ones. Those are also kept in a default collection, which is tf.GraphKeys.TRAINABLE_VARIABLES.

\n", + "system": "" + }, + { + "instruction": "How to prefetch data using a custom python function in tensorflow", + "input": "", + "output": "

This is a common use case, and most implementations use TensorFlow's queues to decouple the preprocessing code from the training code. There is a tutorial on how to use queues, but the main steps are as follows:

\n\n
    \n
  1. Define a queue, q, that will buffer the preprocessed data. TensorFlow supports the simple tf.FIFOQueue that produces elements in the order they were enqueued, and the more advanced tf.RandomShuffleQueue that produces elements in a random order. A queue element is a tuple of one or more tensors (which can have different types and shapes). All queues support single-element (enqueue, dequeue) and batch (enqueue_many, dequeue_many) operations, but to use the batch operations you must specify the shapes of each tensor in a queue element when constructing the queue.

  \n
  2. Build a subgraph that enqueues preprocessed elements into the queue. One way to do this would be to define some tf.placeholder() ops for tensors corresponding to a single input example, then pass them to q.enqueue(). (If your preprocessing produces a batch at once, you should use q.enqueue_many() instead.) You might also include TensorFlow ops in this subgraph.

  \n
  3. Build a subgraph that performs training. This will look like a regular TensorFlow graph, but will get its input by calling q.dequeue_many(BATCH_SIZE).

  \n
  4. Start your session.

  \n
  5. Create one or more threads that execute your preprocessing logic, then execute the enqueue op, feeding in the preprocessed data. You may find the tf.train.Coordinator and tf.train.QueueRunner utility classes useful for this.

  \n
  6. Run your training graph (optimizer, etc.) as normal.

  \n
\n\n

EDIT: Here's a simple load_and_enqueue() function and code fragment to get you started:

\n\n
import threading\n\nimport numpy\nimport tensorflow as tf\n\n# Features are length-100 vectors of floats\nfeature_input = tf.placeholder(tf.float32, shape=[100])\n# Labels are scalar integers.\nlabel_input = tf.placeholder(tf.int32, shape=[])\n\n# Alternatively, could do:\n# feature_batch_input = tf.placeholder(tf.float32, shape=[None, 100])\n# label_batch_input = tf.placeholder(tf.int32, shape=[None])\n\nq = tf.FIFOQueue(100, [tf.float32, tf.int32], shapes=[[100], []])\nenqueue_op = q.enqueue([feature_input, label_input])\n\n# For batch input, do:\n# enqueue_op = q.enqueue_many([feature_batch_input, label_batch_input])\n\nfeature_batch, label_batch = q.dequeue_many(BATCH_SIZE)\n# Build rest of model taking label_batch, feature_batch as input.\n# [...]\ntrain_op = ...\n\nsess = tf.Session()\n\ndef load_and_enqueue():\n  with open(...) as feature_file, open(...) as label_file:\n    while True:\n      feature_array = numpy.fromfile(feature_file, numpy.float32, 100)\n      if feature_array.size < 100:\n        # Reached the end of the feature file.\n        return\n      label_value = numpy.fromfile(label_file, numpy.int32, 1)[0]\n\n      sess.run(enqueue_op, feed_dict={feature_input: feature_array,\n                                      label_input: label_value})\n\n# Start a thread to enqueue data asynchronously, and hide I/O latency.\nt = threading.Thread(target=load_and_enqueue)\nt.start()\n\nfor _ in range(TRAINING_EPOCHS):\n  sess.run(train_op)\n
\n", + "system": "" + }, + { + "instruction": "Logging training and validation loss in tensorboard", + "input": "", + "output": "

There are several different ways you could achieve this, but you're on the right track with creating different tf.summary.scalar() nodes. Since you must explicitly call SummaryWriter.add_summary() each time you want to log a quantity to the event file, the simplest approach is probably to fetch the appropriate summary node each time you want to get the training or validation accuracy:

\n\n
accuracy = tf.reduce_mean(correct)\n\ntraining_summary = tf.summary.scalar(\"training_accuracy\", accuracy)\nvalidation_summary = tf.summary.scalar(\"validation_accuracy\", accuracy)\n\n\nsummary_writer = tf.summary.FileWriter(...)\n\nfor step in xrange(NUM_STEPS):\n\n  # Perform a training step....\n\n  if step % LOG_PERIOD == 0:\n\n    # To log training accuracy.\n    train_acc, train_summ = sess.run(\n        [accuracy, training_summary], \n        feed_dict={images : training_set.images, labels : training_set.labels})\n    summary_writer.add_summary(train_summ, step) \n\n    # To log validation accuracy.\n    valid_acc, valid_summ = sess.run(\n        [accuracy, validation_summary],\n        feed_dict={images : validation_set.images, labels : validation_set.labels})\n    summary_writer.add_summary(valid_summ, step)\n
\n\n

Alternatively, you could create a single summary op whose tag is a tf.placeholder(tf.string, []) and feed the string \"training_accuracy\" or \"validation_accuracy\" as appropriate.

\n", + "system": "" + }, + { + "instruction": "How do I get the current value of a Variable?", + "input": "", + "output": "

The only way to get the value of the variable is by running it in a session. In the FAQ it is written that:

\n\n
\n

A Tensor object is a symbolic handle to the result of an operation,\n but does not actually hold the values of the operation's output.

\n
\n\n

So the TF equivalent would be:

\n\n
import tensorflow as tf\n\nx = tf.Variable([1.0, 2.0])\n\ninit = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n    sess.run(init)\n    v = sess.run(x)\n    print(v)  # will show you your variable.\n
\n\n

The part with init = tf.global_variables_initializer() is important and should be done in order to initialize variables.

\n\n

Also, take a look at InteractiveSession if you work in IPython.

\n", + "system": "" + }, + { + "instruction": "Why do we name variables in Tensorflow?", + "input": "", + "output": "

The name parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places:

\n\n

When you want to save or restore your variables (you can save them to a binary file after the computation). From docs:

\n\n
\n

By default, it uses the value of the Variable.name property for each\n variable

\n
\n\n
matrix_1 = tf.Variable([[1, 2], [2, 3]], name=\"v1\")\nmatrix_2 = tf.Variable([[3, 4], [5, 6]], name=\"v2\")\ninit = tf.initialize_all_variables()\n\nsaver = tf.train.Saver()\n\nsess = tf.Session()\nsess.run(init)\nsave_path = saver.save(sess, \"/model.ckpt\")\nsess.close()\n
\n\n

Even though your Python variables are named matrix_1 and matrix_2, they are saved as v1 and v2 in the file.

\n\n

Also, names are used in TensorBoard to nicely label the edges of the graph. You can even group nodes by using the same scope:

\n\n
import tensorflow as tf\n\nwith tf.name_scope('hidden') as scope:\n  a = tf.constant(5, name='alpha')\n  W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')\n  b = tf.Variable(tf.zeros([1]), name='biases')\n
\n", + "system": "" + }, + { + "instruction": "tensorflow:Your input ran out of data", + "input": "", + "output": "

To make sure that you have "at least steps_per_epoch * epochs batches", set the steps_per_epoch to

\n
steps_per_epoch = len(X_train)//batch_size\n\nvalidation_steps = len(X_test)//batch_size # if you have validation data \n
\n

You can see the maximum number of batches that model.fit() can take from the progress bar when the training is interrupted:

\n
5230/10000 [==============>...............] - ETA: 2:05:22 - loss: 0.0570\n
\n

Here, the maximum would be 5230 - 1

\n

Importantly, keep in mind that by default, batch_size is 32 in model.fit().

\n
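The floor division above matches how a generator that only yields complete batches behaves; a quick plain-Python sketch with hypothetical numbers:

```python
def batches(samples, batch_size):
    # Yield only complete batches, like a typical data generator.
    for i in range(0, len(samples) - batch_size + 1, batch_size):
        yield samples[i:i + batch_size]

data = list(range(100))
batch_size = 32

steps_per_epoch = len(data) // batch_size
n_available = sum(1 for _ in batches(data, batch_size))

# Asking fit() for more than 3 steps per epoch would exhaust the data.
print(steps_per_epoch, n_available)  # -> 3 3
```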

If you're using a tf.data.Dataset, you can also add the repeat() method, but be careful: it will loop indefinitely (unless you specify a number).

\n", + "system": "" + }, + { + "instruction": "Install Tensorflow 2.0 in conda enviroment", + "input": "", + "output": "

TENSORFLOW 2.0 release version is out!

\n

As of 01/10/2019, this answer refers to the release version, not the beta.

\n

Using Anaconda

\n

Since 01/11/2019, Anaconda supports Tensorflow 2.0.0.

\n

Option 1: The easiest way is just:

\n

conda install tensorflow or conda install tensorflow-gpu

\n

For the gpu mode, anaconda will take care of all the CUDA libraries and everything else you need to install for the tensorflow gpu mode to work, so I strongly recommend using this method.

\n

The only issue with this method is that anaconda might not have the latest version of TensorFlow. For example, as of Feb 21 2021, conda has version 2.3 whereas the PIP version is 2.4. You can check the current version of gpu or cpu.

\n

Option 2 (virtual env): It is strongly recommended to use a dedicated environment in which to install tensorflow. The following command will create an environment first and then install tensorflow within it:

\n\n

Replace <your_env_name> with a meaningful name like tf-2

\n

To use tensorflow, first run conda activate <your_env_name>

\n

Using pip

\n

Using pip the tensorflow official instructions are quite complete.

\n

Just install tensorflow using pip like:

\n
# Current stable release for CPU-only\npip install tensorflow\n
\n

Still, before doing anything else, I recommend installing tensorflow in a new environment, so the 3 steps would be (with anaconda):

\n
conda create -n <your_env_name> pip\nconda activate <your_env_name>\npip install tensorflow\n
\n

Now for the GPU version it's harder with pip; I recommend this link, which explains the extra things you need to install (CUDA and others).

\n", + "system": "" + }, + { + "instruction": "What's the purpose of keras.backend.function()", + "input": "", + "output": "

I have the following understanding of this function keras.backend.function. I will explain it with the help of a code snippet from this.

\n

The relevant part of the code snippet is as follows:

\n
final_conv_layer = get_output_layer(model, "conv5_3")\nget_output = K.function([model.layers[0].input], \n                        [final_conv_layer.output, model.layers[-1].output])\n[conv_outputs, predictions] = get_output([img])\n    \n
\n

In this code, there is a model from which the conv5_3 layer is extracted (line 1). In the function K.function(), the first argument is the input to this model and the second is a set of 2 outputs - one for the convolution output and the second for the softmax output at the last layer.

\n

As per the Keras/Tensorflow manual, this function runs the computation graph that we have created in the code, taking input from the first parameter and extracting the outputs of the layers mentioned in the second parameter. Thus, conv_outputs is the output of final_conv_layer and predictions is the output of model.layers[-1], i.e. the last layer of the model.

\n", + "system": "" + }, + { + "instruction": "What is y_true and y_pred when creating a custom metric in Keras?", + "input": "", + "output": "

y_true and y_pred

\n

The tensor y_true is the true data (or target, ground truth) you pass to the fit method.
\nIt's a conversion of the numpy array y_train into a tensor.

\n

The tensor y_pred is the data predicted (calculated, output) by your model.

\n

Usually, both y_true and y_pred have exactly the same shape. A few of the losses, such as the sparse ones, may accept them with different shapes.

\n
\n

The shape of y_true

\n

It contains an entire batch. Its first dimension is always the batch size, and it must exist, even if the batch has only one element.

\n

Two very easy ways to find the shape of y_true are:

\n\n

But its first dimension will be the batch size.

\n

So, if your last layer outputs (None, 1), the shape of y_true is (batch, 1). If the last layer outputs (None, 200,200, 3), then y_true will be (batch, 200,200,3).

\n
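As an illustration of the batch-first convention, here is a plain-Python mock of a metric (no Keras involved): both arguments carry the batch dimension first, and the metric reduces the whole batch to one scalar:

```python
def mean_absolute_error(y_true, y_pred):
    # y_true and y_pred have shape (batch, 1): one row per sample.
    per_sample = [abs(t[0] - p[0]) for t, p in zip(y_true, y_pred)]
    # Like Keras, reduce over the batch dimension to a single scalar.
    return sum(per_sample) / len(per_sample)

y_true = [[1.0], [2.0], [3.0]]   # shape (3, 1): a batch of 3 targets
y_pred = [[1.5], [2.0], [2.0]]   # shape (3, 1): the model's outputs
print(mean_absolute_error(y_true, y_pred))  # -> 0.5
```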
\n

Custom metrics and loss functions

\n

Unfortunately, printing custom metrics will not reveal their content (unless you are using eager mode, and you have calculated every step of the model with data).
\nYou can see their shapes with print(K.int_shape(y_pred)), for instance.

\n

Remember that these libraries first "compile a graph", then later "run it with data". When you define your loss, you're in the compile phase, and asking for data requires the model to run.

\n

But even if the result of your metric is multidimensional, keras will automatically find a way to output a single scalar for that metric. (Not sure what the operation is, but very probably a K.mean() hidden under the hood - it's interesting to return the entire batch, so Keras can apply other operations such as sample weights, for instance).

\n

Sources: after you get used to keras, this understanding gets natural from simply reading this part of the docs:

\n
\n

y_true: True labels. Theano/TensorFlow tensor.
\ny_pred: Predictions. Theano/TensorFlow tensor of the same shape as y_true.

\n
\n

True labels mean true/target data. Labels is a badly chosen word here, it is only really "labels" in classification models.
\nPredictions mean the results of your model.

\n", + "system": "" + }, + { + "instruction": "How to average summaries over multiple batches?", + "input": "", + "output": "

Do the averaging of your measure in Python and create a new Summary object for each mean. Here is what I do:

\n\n
accuracies = []\n\n# Calculate your measure over as many batches as you need.\n# Here accuracy_op is the tensor that computes your measure for one batch.\nfor batch in validation_set:\n  accuracies.append(sess.run(accuracy_op))\n\n# Take the mean of your measure\naccuracy = np.mean(accuracies)\n\n# Create a new Summary object with your measure\nsummary = tf.Summary()\nsummary.value.add(tag=\"%sAccuracy\" % prefix, simple_value=accuracy)\n\n# Add it to the Tensorboard summary writer\n# Make sure to specify a step parameter to get nice graphs over time\nsummary_writer.add_summary(summary, global_step)\n
\n", + "system": "" + }, + { + "instruction": "IOError: [Errno 28] No space left on device while installing TensorFlow", + "input": "", + "output": "

Usually, you can set the environment variable 'TMPDIR' to use a different directory than /tmp or /var/tmp, and most programs will honour that.

\n\n

You can perhaps try,

\n\n
\n

$ export TMPDIR=$HOME/tmp

\n
\n\n
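To check that the variable is actually picked up before kicking off the install, you can ask Python's tempfile module (which pip uses for its build directories) where it will put temporary files. A quick sketch, assuming $HOME/tmp is writable:

```shell
mkdir -p "$HOME/tmp"
export TMPDIR="$HOME/tmp"
# Programs that honour TMPDIR will now create their temp files here:
python3 -c 'import tempfile; print(tempfile.gettempdir())'
```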

and then start your 'pip install'

\n", + "system": "" + }, + { + "instruction": "Keras uses way too much GPU memory when calling train_on_batch, fit, etc", + "input": "", + "output": "

It is a very common mistake to forget that the activations, gradients and optimizer moment tracking variables also take VRAM, not just the parameters, increasing memory usage quite a bit. The backprop calculations themselves make it so the training phase takes almost double the VRAM of forward / inference use of the neural net, and the Adam optimizer triples the space usage.

\n

So, in the beginning when the network is created, only the parameters are allocated. However, when the training starts, the model activations, backprop computations and the optimizer's tracking variables get allocated, increasing memory use by a large factor.

\n
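A rough back-of-envelope sketch of that multiplier (the exact factors are assumptions and vary by implementation: float32 parameters, gradients materialized for every parameter, and an Adam-style optimizer keeping two extra slots per parameter; activation memory, which scales with batch size, comes on top of this):

```python
def training_param_bytes(num_params, bytes_per_param=4, optimizer_slots=2):
    # One copy for the weights, one for the gradients, plus optimizer
    # slots (Adam keeps two: first and second moment estimates).
    copies = 1 + 1 + optimizer_slots
    return num_params * bytes_per_param * copies

params = 10_000_000          # a modest 10M-parameter model
inference = params * 4       # weights only, float32
training = training_param_bytes(params)
# Training needs several times the parameter memory of inference:
print(inference / 2**20, training / 2**20)  # MiB, roughly ~38 vs ~153
```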

To allow the training of larger models, people:

\n\n

Tools to train very large models:

\n\n", + "system": "" + }, + { + "instruction": "Implementing contrastive loss and triplet loss in Tensorflow", + "input": "", + "output": "

Update (2018/03/19): I wrote a blog post detailing how to implement triplet loss in TensorFlow.

\n\n
\n\n

You need to implement yourself the contrastive loss or the triplet loss, but once you know the pairs or triplets this is quite easy.

\n\n
\n\n

Contrastive Loss

\n\n

Suppose you have as input the pairs of data and their label (positive or negative, i.e. same class or different class). For instance you have images as input of size 28x28x1:

\n\n
left = tf.placeholder(tf.float32, [None, 28, 28, 1])\nright = tf.placeholder(tf.float32, [None, 28, 28, 1])\nlabel = tf.placeholder(tf.float32, [None, 1])  # 0.0 if same, 1.0 if different\nmargin = 0.2\n\nleft_output = model(left)  # shape [None, 128]\nright_output = model(right)  # shape [None, 128]\n\nd = tf.reduce_sum(tf.square(left_output - right_output), 1)\nd_sqrt = tf.sqrt(d)\n\nloss = label * tf.square(tf.maximum(0., margin - d_sqrt)) + (1 - label) * d\n\nloss = 0.5 * tf.reduce_mean(loss)\n
\n\n
\n\n

Triplet Loss

\n\n

Same as with contrastive loss, but with triplets (anchor, positive, negative). You don't need labels here.

\n\n
anchor_output = ...  # shape [None, 128]\npositive_output = ...  # shape [None, 128]\nnegative_output = ...  # shape [None, 128]\n\nd_pos = tf.reduce_sum(tf.square(anchor_output - positive_output), 1)\nd_neg = tf.reduce_sum(tf.square(anchor_output - negative_output), 1)\n\nmargin = 0.2\nloss = tf.maximum(0., margin + d_pos - d_neg)\nloss = tf.reduce_mean(loss)\n
\n\n
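The same computation in plain Python on single embedding vectors, as a sketch for intuition (using squared Euclidean distances, like the TF snippet above):

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances between the embeddings.
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    # Hinge: only triplets where the negative is not far enough
    # beyond the margin contribute to the loss.
    return max(0.0, margin + d_pos - d_neg)

# Easy triplet: the negative is already far away, so the loss clips to 0.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # -> 0.0

# Hard triplet: the negative is closer than the positive, positive loss.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [0.1, 0.0]))  # roughly 1.19
```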
\n\n

The real trouble when implementing triplet loss or contrastive loss in TensorFlow is how to sample the triplets or pairs. I will focus on generating triplets because it is harder than generating pairs.

\n\n

The easiest way is to generate them outside of the Tensorflow graph, i.e. in python and feed them to the network through the placeholders. Basically you select images 3 at a time, with the first two from the same class and the third from another class. We then perform a feedforward on these triplets, and compute the triplet loss.

\n\n

The issue here is that generating triplets is complicated. We want them to be valid triplets, triplets with a positive loss (otherwise the loss is 0 and the network doesn't learn).
\nTo know whether a triplet is good or not you need to compute its loss, so you already make one feedforward through the network...

\n\n

Clearly, implementing triplet loss in Tensorflow is hard, and there are ways to make it more efficient than sampling in python but explaining them would require a whole blog post !

\n", + "system": "" + }, + { + "instruction": "How to understand the term `tensor` in TensorFlow?", + "input": "", + "output": "

TensorFlow doesn't have first-class Tensor objects, meaning that there is no notion of a Tensor in the underlying graph that's executed by the runtime. Instead the graph consists of op nodes connected to each other, representing operations. An operation allocates memory for its outputs, which are available on endpoints :0, :1, etc, and you can think of each of these endpoints as a Tensor. If you have a tensor corresponding to nodename:0 you can fetch its value as sess.run(tensor) or sess.run('nodename:0'). Execution granularity happens at the operation level, so the run method will execute the op, which will compute all of its endpoints, not just the :0 endpoint. It's possible to have an Op node with no outputs (like tf.group) in which case there are no tensors associated with it. It is not possible to have tensors without an underlying Op node.

\n\n

You can examine what happens in the underlying graph by doing something like this:

\n\n
tf.reset_default_graph()\nvalue = tf.constant(1)\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

So with tf.constant you get a single operation node, and you can fetch it using sess.run(\"Const:0\") or sess.run(value)

\n\n

Similarly, value=tf.placeholder(tf.int32) creates a regular node with name Placeholder, and you could feed it as feed_dict={\"Placeholder:0\":2} or feed_dict={value:2}. You can not feed and fetch a placeholder in the same session.run call, but you can see the result by attaching a tf.identity node on top and fetching that.

\n\n

For variable

\n\n
tf.reset_default_graph()\nvalue = tf.Variable(tf.ones_initializer()(()))\nvalue2 = value+3\nprint(tf.get_default_graph().as_graph_def())\n
\n\n

You'll see that it creates two nodes Variable and Variable/read, the :0 endpoint is a valid value to fetch on both of these nodes. However Variable:0 has a special ref type meaning it can be used as an input to mutating operations. The result of Python call tf.Variable is a Python Variable object and there's some Python magic to substitute Variable/read:0 or Variable:0 depending on whether mutation is necessary. Since most ops have only 1 endpoint, :0 is dropped. Another example is Queue -- close() method will create a new Close op node which connects to Queue op. To summarize -- operations on python objects like Variable and Queue map to different underlying TensorFlow op nodes depending on usage.

\n\n

For ops like tf.split or tf.nn.top_k which create nodes with multiple endpoints, Python's session.run call automatically wraps output in tuple or collections.namedtuple of Tensor objects which can be fetched individually.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: restoring a graph and model then running evaluation on a single image", + "input": "", + "output": "

There are two methods to feed a single new image to the cifar10 model. The first method is a cleaner approach but requires modification in the main file, hence will require retraining. The second method is applicable when a user does not want to modify the model files and instead wants to use the existing check-point/meta-graph files.

\n\n

The code for the first approach is as follows:

\n\n
import tensorflow as tf\nimport numpy as np\nimport cv2\n\nsess = tf.Session('', tf.Graph())\nwith sess.graph.as_default():\n    # Read meta graph and checkpoint to restore tf session\n    saver = tf.train.import_meta_graph(\"/tmp/cifar10_train/model.ckpt-200.meta\")\n    saver.restore(sess, \"/tmp/cifar10_train/model.ckpt-200\")\n\n    # Read a single image from a file.\n    img = cv2.imread('tmp.png')\n    img = np.expand_dims(img, axis=0)\n\n    # Start the queue runners. If they are not started the program will hang\n    # see e.g. https://www.tensorflow.org/programmers_guide/reading_data\n    coord = tf.train.Coordinator()\n    threads = []\n    for qr in sess.graph.get_collection(tf.GraphKeys.QUEUE_RUNNERS):\n        threads.extend(qr.create_threads(sess, coord=coord, daemon=True,\n                                         start=True))\n\n    # In the graph created above, feed \"is_training\" and \"imgs\" placeholders.\n    # Feeding them will disconnect the path from queue runners to the graph \n    # and enable a path from the placeholder instead. The \"img\" placeholder will be \n    # fed with the image that was read above.\n    logits = sess.run('softmax_linear/softmax_linear:0', \n                     feed_dict={'is_training:0': False, 'imgs:0': img})\n\n    #Print classifiction results.\n    print(logits) \n
\n\n

The script requires that a user creates two placeholders and a conditional execution statement for it to work.

\n\n

The placeholders and conditional execution statement are added in cifar10_train.py as shown below:

\n\n
def train():\n  \"\"\"Train CIFAR-10 for a number of steps.\"\"\"\n  with tf.Graph().as_default():\n    global_step = tf.contrib.framework.get_or_create_global_step()\n\n    with tf.device('/cpu:0'):\n      images, labels = cifar10.distorted_inputs()\n\n    is_training = tf.placeholder(dtype=bool, shape=(), name='is_training')\n    imgs = tf.placeholder(tf.float32, (1, 32, 32, 3), name='imgs')\n    images = tf.cond(is_training, lambda: images, lambda: imgs)\n    logits = cifar10.inference(images)\n
\n\n

The inputs in the cifar10 model are connected to a queue runner object, which is a multistage queue that can prefetch data from files in parallel. See a nice animation of the queue runner here

\n\n

While queue runners are efficient in prefetching a large dataset for training, they are an overkill for inference/testing where only a single file needs to be classified; they are also a bit more involved to modify/maintain.\nFor that reason, I have added a placeholder \"is_training\", which is set to True while training as shown below:

\n\n
 import numpy as np\n tmp_img = np.ndarray(shape=(1,32,32,3), dtype=float)\n with tf.train.MonitoredTrainingSession(\n     checkpoint_dir=FLAGS.train_dir,\n     hooks=[tf.train.StopAtStepHook(last_step=FLAGS.max_steps),\n            tf.train.NanTensorHook(loss),\n            _LoggerHook()],\n     config=tf.ConfigProto(\n         log_device_placement=FLAGS.log_device_placement)) as mon_sess:\n   while not mon_sess.should_stop():\n     mon_sess.run(train_op, feed_dict={is_training: True, imgs: tmp_img})\n
\n\n

Another placeholder \"imgs\" holds a tensor of shape (1,32,32,3) for the image that will be fed during inference -- the first dimension is the batch size, which is one in this case. I have modified the cifar model to accept 32x32 images instead of 24x24, since the original cifar10 images are 32x32.

\n\n

Finally, the conditional statement feeds either the placeholder or the queue runner output to the graph. The \"is_training\" placeholder is set to False during inference, and the \"imgs\" placeholder is fed a numpy array -- the array is reshaped from a 3-dimensional to a 4-dimensional array to conform to the input tensor expected by the inference function in the model.

\n\n
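As a minimal, standalone illustration of that reshaping step (pure numpy, with dummy zeros standing in for the decoded image; no TensorFlow needed):

```python
import numpy as np

# A single HxWxC image (dummy zeros standing in for the decoded PNG)
img = np.zeros((32, 32, 3), dtype=np.float32)

# Add a leading batch dimension so it matches the (1, 32, 32, 3) placeholder
batch = np.expand_dims(img, axis=0)
print(batch.shape)  # (1, 32, 32, 3)
```
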

That is all there is to it. Inference can be run on any model with a single, user-defined test input, as shown in the script above. Essentially: read the graph, feed data to the graph nodes, and run the graph to get the final output.

\n\n

Now for the second method: hack cifar10.py and cifar10_eval.py to change the batch size to one and replace the data coming from the queue runner with data read from a file.

\n\n

Set batch size to 1:

\n\n
tf.app.flags.DEFINE_integer('batch_size', 1,\n                             \"\"\"Number of images to process in a batch.\"\"\")\n
\n\n

Call inference with an image file read.

\n\n
def evaluate():\n    with tf.Graph().as_default() as g:\n        # Get images and labels for CIFAR-10.\n        eval_data = FLAGS.eval_data == 'test'\n        images, labels = cifar10.inputs(eval_data=eval_data)\n\n        import cv2\n        img = cv2.imread('tmp.png')\n        img = np.expand_dims(img, axis=0)\n        img = tf.cast(img, tf.float32)\n\n        logits = cifar10.inference(img)\n
\n\n

Then pass logits to eval_once and modify eval_once to evaluate logits:

\n\n
def eval_once(saver, summary_writer, top_k_op, logits, summary_op): \n    ...\n    while step < num_iter and not coord.should_stop():\n        predictions = sess.run([top_k_op])\n        print(sess.run(logits))\n
\n\n

There is no separate script to run this method of inference; just run cifar10_eval.py, which will now read a file from the user-defined location with a batch size of one.

\n", + "system": "" + }, + { + "instruction": "How to create only one copy of graph in tensorboard events file with custom tf.Estimator?", + "input": "", + "output": "

You need to use the TensorBoard tool for visualizing the contents of your summary logs.

\n

The event file can be read programmatically.\nThe example from this link shows how to read events written to an event file.

\n
import tensorflow as tf\n\n# This example supposes that the events file contains summaries with a\n# summary value tag 'loss'.  These could have been added by calling\n# `add_summary()`, passing the output of a scalar summary op created\n# with: `tf.compat.v1.summary.scalar('loss', loss_tensor)`.\npath_to_events_file = 'path/to/events/file'  # fill in your own path\nfor e in tf.compat.v1.train.summary_iterator(path_to_events_file):\n    for v in e.summary.value:\n        if v.tag == 'loss':\n            print(v.simple_value)\n
\n", + "system": "" + }, + { + "instruction": "Numpy is installed but still getting error", + "input": "", + "output": "

Run:

\n
pip3 uninstall numpy\n
\n

Run it repeatedly until you receive a message stating that there are no numpy files left to uninstall; then you can freshly install numpy using

\n
pip install numpy\n
\n

And that will fix the issue.

\n", + "system": "" + }, + { + "instruction": "Is there an easy way to get something like Keras model.summary in Tensorflow?", + "input": "", + "output": "

Looks like you can use Slim

\n\n

Example:

\n\n
import numpy as np\n\nfrom tensorflow.python.layers import base\nimport tensorflow as tf\nimport tensorflow.contrib.slim as slim\n\nx = np.zeros((1,4,4,3))\nx_tf = tf.convert_to_tensor(x, np.float32)\nz_tf = tf.layers.conv2d(x_tf, filters=32, kernel_size=(3,3))\n\ndef model_summary():\n    model_vars = tf.trainable_variables()\n    slim.model_analyzer.analyze_vars(model_vars, print_info=True)\n\nmodel_summary()\n
\n\n

Output:

\n\n
---------\nVariables: name (type shape) [size]\n---------\nconv2d/kernel:0 (float32_ref 3x3x3x32) [864, bytes: 3456]\nconv2d/bias:0 (float32_ref 32) [32, bytes: 128]\nTotal size of variables: 896\nTotal bytes of variables: 3584\n
\n\n

Also here is an example of custom function to print model summary:\nhttps://github.com/NVlabs/stylegan/blob/f3a044621e2ab802d40940c16cc86042ae87e100/dnnlib/tflib/network.py#L507

\n\n

If you already have .pb tensorflow model you can use: inspect_pb.py to print model info or use tensorflow summarize_graph tool with --print_structure flag, also it's nice that it can detect input and output names.

\n", + "system": "" + }, + { + "instruction": "Adam optimizer goes haywire after 200k batches, training loss grows", + "input": "", + "output": "

Yes. This is a known problem of Adam.

\n\n

The equations for Adam are

\n\n
t <- t + 1\nlr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)\n\nm_t <- beta1 * m_{t-1} + (1 - beta1) * g\nv_t <- beta2 * v_{t-1} + (1 - beta2) * g * g\nvariable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)\n
\n\n

where m is an exponential moving average of the mean gradient and v is an exponential moving average of the squares of the gradients. The problem is that when you have been training for a long time and are close to the optimum, v can become very small. If the gradients then suddenly start increasing again, the update will be divided by a very small number and explode.

\n\n
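To see the mechanism numerically, here is a plain-Python sketch (gradient values are made up, and the bias-correction terms are omitted for brevity): after a long stretch of tiny gradients, v is tiny, so a modest gradient spike produces a step far larger than the base learning rate unless epsilon is increased, as discussed below.

```python
import math

def adam_step(m, v, g, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter; returns (m, v, step)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    return m, v, lr * m / (math.sqrt(v) + eps)

# Long training near the optimum: gradients ~1e-6, so v has decayed to ~1e-12.
# Then a modest gradient of 1e-3 suddenly appears.
m, v = 1e-6, 1e-12
_, _, step_default = adam_step(m, v, g=1e-3, eps=1e-8)
_, _, step_big_eps = adam_step(m, v, g=1e-3, eps=0.1)

# The default-epsilon step is orders of magnitude larger than the
# step taken with the increased epsilon.
print(step_default, step_big_eps)
```
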

By default beta1=0.9 and beta2=0.999. So m changes much more quickly than v. So m can start being big again while v is still small and cannot catch up.

\n\n

To remedy this problem you can increase epsilon, which is 1e-8 by default, thus avoiding the division by a number close to 0. \nDepending on your network, a value of epsilon of 0.1, 0.01, or 0.001 might be good.

\n", + "system": "" + }, + { + "instruction": "TensorFlow 'module' object has no attribute 'global_variables_initializer'", + "input": "", + "output": "

In older versions, it was called tf.initialize_all_variables.

\n", + "system": "" + }, + { + "instruction": "How can I make tensorflow run on a GPU with capability 2.x?", + "input": "", + "output": "

Recent GPU versions of tensorflow require compute capability 3.5 or higher (and use cuDNN to access the GPU).

\n\n

cuDNN also requires a GPU of cc3.0 or higher:

\n\n
\n

cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.

\n
\n\n\n\n

Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN.

\n\n

Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN.

\n\n

Note that there has never been either a version of cuDNN or any version of TF that officially supported NVIDIA GPUs less than cc3.0. The initial version of cuDNN started out by requiring cc3.0 GPUs, and the initial version of TF started out by requiring cc3.0 GPUs.

\n", + "system": "" + }, + { + "instruction": "How do I find the variable names and values that are saved in a checkpoint?", + "input": "", + "output": "

Example usage:

\n\n
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file\nimport os\ncheckpoint_path = os.path.join(model_dir, \"model.ckpt\")\n\n# List ALL tensors example output: v0/Adam (DT_FLOAT) [3,3,1,80]\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='')\n\n# List contents of v0 tensor.\n# Example output: tensor_name:  v0 [[[[  9.27958265e-02   7.40226209e-02   4.52989563e-02   3.15700471e-02\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v0')\n\n# List contents of v1 tensor.\nprint_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v1')\n
\n\n

Update: all_tensors argument was added to print_tensors_in_checkpoint_file since Tensorflow 0.12.0-rc0 so you may need to add all_tensors=False or all_tensors=True if required.

\n\n

Alternative method:

\n\n
from tensorflow.python import pywrap_tensorflow\nimport os\n\ncheckpoint_path = os.path.join(model_dir, \"model.ckpt\")\nreader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)\nvar_to_shape_map = reader.get_variable_to_shape_map()\n\nfor key in var_to_shape_map:\n    print(\"tensor_name: \", key)\n    print(reader.get_tensor(key)) # Remove this is you want to print only variable names\n
\n\n

Hope it helps.

\n", + "system": "" + }, + { + "instruction": "Getting the current learning rate from a tf.train.AdamOptimizer", + "input": "", + "output": "

All the optimizers have a private variable that holds the value of a learning rate.

\n\n

In adagrad and gradient descent it is called self._learning_rate. In adam it is self._lr.

\n\n
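Note that _lr stores only the base rate you passed in; the step size Adam actually applies also includes the bias-correction factor lr_t = lr * sqrt(1 - beta2^t) / (1 - beta1^t) from its update rule. A quick pure-Python sketch of that factor (the function name here is mine):

```python
def adam_effective_lr(lr, t, beta1=0.9, beta2=0.999):
    # lr_t = lr * sqrt(1 - beta2^t) / (1 - beta1^t), per the Adam update rule
    return lr * (1.0 - beta2 ** t) ** 0.5 / (1.0 - beta1 ** t)

print(adam_effective_lr(0.001, t=1))       # smaller than the base lr early on
print(adam_effective_lr(0.001, t=100000))  # converges toward the base lr
```
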

So you will just need to print sess.run(optimizer._lr) to get this value. sess.run() is needed because they are tensors.

\n", + "system": "" + }, + { + "instruction": "TensorFlow: getting variable by name", + "input": "", + "output": "

The get_variable() function creates a new variable or returns one created earlier by get_variable(). It won't return a variable created using tf.Variable(). Here's a quick example:

\n\n
>>> with tf.variable_scope(\"foo\"):\n...   bar1 = tf.get_variable(\"bar\", (2,3)) # create\n... \n>>> with tf.variable_scope(\"foo\", reuse=True):\n...   bar2 = tf.get_variable(\"bar\")  # reuse\n... \n\n>>> with tf.variable_scope(\"\", reuse=True): # root variable scope\n...   bar3 = tf.get_variable(\"foo/bar\") # reuse (equivalent to the above)\n... \n>>> (bar1 is bar2) and (bar2 is bar3)\nTrue\n
\n\n

If you did not create the variable using tf.get_variable(), you have a couple options. First, you can use tf.global_variables() (as @mrry suggests):

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> bar2 = [var for var in tf.global_variables() if var.op.name==\"bar\"][0]\n>>> bar1 is bar2\nTrue\n
\n\n

Or you can use tf.get_collection() like so:

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> bar2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"bar\")[0]\n>>> bar1 is bar2\nTrue\n
\n\n

Edit

\n\n

You can also use get_tensor_by_name():

\n\n
>>> bar1 = tf.Variable(0.0, name=\"bar\")\n>>> graph = tf.get_default_graph()\n>>> bar2 = graph.get_tensor_by_name(\"bar:0\")\n>>> bar1 is bar2\nFalse  # bar2 is a Tensor created through convert_to_tensor on bar1,\n       # but bar1 and bar2 are equal in value.\n
\n\n

Recall that a tensor is the output of an operation. It has the same name as the operation, plus :0. If the operation has multiple outputs, they have the same name as the operation plus :0, :1, :2, and so on.

\n", + "system": "" + }, + { + "instruction": "import input_data MNIST tensorflow not working", + "input": "", + "output": "

So let's assume that you are in the directory: /somePath/tensorflow/tutorial (and this is your working directory).

\n

All you need to do is download the input_data.py file and place it in your working directory. Let's assume that the file in which you invoke:

\n
import input_data\nmnist = input_data.read_data_sets("MNIST_data/", one_hot=True)\n...\n
\n

is main.py and it is also in the same directory.

\n

Once this is done, you can just start running main.py which will start downloading the files and will put them in the MNIST_data folder (once they are there the script will not be downloading them next time).

\n", + "system": "" + }, + { + "instruction": "Error in python after 'import tensorflow': TypeError: __init__() got an unexpected keyword argument 'syntax'", + "input": "", + "output": "

Several users have reported issues that arise when an older version of protobuf is installed.

\n\n

TensorFlow requires (and uses a copy of) protobuf-3.0.0a4. However it seems to be conflicting with your installed version (perhaps due to how the PYTHONPATH environment variable is configured on your system?). Can you try uninstalling the python-protobuf package if it is installed, and then seeing if TensorFlow starts correctly?

\n\n

Some users have had success when using a virtualenv. For instructions on installing TensorFlow into a virtualenv, see the getting started guide.

\n", + "system": "" + }, + { + "instruction": "ValueError: Layer sequential_20 expects 1 inputs, but it received 2 input tensors", + "input": "", + "output": "

It helped me when I changed:
\nvalidation_data=[X_val, y_val] into validation_data=(X_val, y_val)
\nI still wonder why, though; presumably Keras interprets a list as multiple input tensors, whereas a tuple is treated as a single (x, y) pair.

\n", + "system": "" + }, + { + "instruction": "Confusion about keras Model: __call__ vs. call vs. predict methods", + "input": "", + "output": "

Adding to @Dmitry Kabanov: they are similar, yet they aren't exactly the same thing. If you care about performance, you need to look into the critical differences between them.

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
model.predictmodel(x)
loops over the data in batches, which means that predict() calls can scale to very large arrays.happens in-memory and doesn't scale
not differentiabledifferentiable
use this if you just need the output valueuse this when you need to retrieve the gradients
Output is NumPy valueOutput is a Tensor
use this if you have batches of data to be predicteduse this for small dataset
relatively slower for small datarelatively faster for small data
\n
\n

Please check more detailed explanation in Keras FAQs

\n", + "system": "" + }, + { + "instruction": "Optimizing subgraph of large graph - slower than optimizing subgraph by itself", + "input": "", + "output": "

I am guessing that this is a generative adversarial network, given the relation between the losses and the parameters. It seems that the first group of parameters is the generative model and the second group makes up the detector model.

\n

If my guesses are correct, then that would mean that the second model is using the output of the first model as its input. Admittedly, I am much more informed about PyTorch than TF. There is a comment which I believe is saying that the first model could be included in the second graph. I also think this is true. I would implement something similar to the following. The most important part is just creating a copy of the generated_tensor with no graph:

\n
# An arbitrary label\nlabel = torch.tensor(1.0)\n\n# Treat GenerativeModel as the model with the first list of Variables/parameters\ngenerated_tensor = GenerativeModel(random_input_tensor)\n# Treat DetectorModel as the model with the second list of Variables/parameters\ndetector_prediction = DetectorModel(generated_tensor)\n\n# detach() creates a copy of generated_tensor with no graph attached\ngenerated_tensor_copy = generated_tensor.detach()\ndetector_prediction_copy = DetectorModel(generated_tensor_copy)\n\n# This is for optimizing the first model, but it has the second model in its graph,\n# which is necessary.\nloss1 = loss_func1(detector_prediction, label)\n# This is for optimizing the second model. It will not have the first model in its graph\nloss2 = loss_func2(detector_prediction_copy, label)\n
\n

I hope this is helpful. If anyone knows how to do this in TF, that would probably be very invaluable.

\n", + "system": "" + }, + { + "instruction": "What is difference frozen_inference_graph.pb and saved_model.pb?", + "input": "", + "output": "

frozen_inference_graph.pb is a frozen graph that cannot be trained anymore; it defines the graphdef and is actually a serialized graph that can be loaded with this code:

\n
def load_graph(frozen_graph_filename):\n    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n        return graph_def\ntf.import_graph_def(load_graph("frozen_inference_graph.pb"))\n
\n

the saved model is a model generated by tf.saved_model.builder and has to be imported into a session; this file contains the full graph with all training weights (just like the frozen graph), but here it can be trained upon. This one is not serialized and needs to be loaded by this snippet. The [] are tag constants, which can be read by the saved_model_cli. This model is also often served for prediction, for example on Google ML Engine:

\n
with tf.Session() as sess:\n    tf.saved_model.loader.load(sess, [], "foldername to saved_model.pb, only folder")\n
\n

model.ckpt files are checkpoints generated during training; they are used to resume training or as a backup when something goes wrong after a long training run. If you have a saved model and a frozen graph, then you can ignore these.

\n

.pbtxt files are basically the same as the previously discussed models, but human-readable instead of binary. These can be ignored as well.

\n

To answer your conversion question:\nSaved models can be transformed into a frozen graph and vice versa, although a saved_model extracted from a frozen graph is also not trainable; it is simply stored in the saved model format. Checkpoints can be read in and loaded into a session, and from there you can build a saved model.

\n

Hope I helped, any questions, ask away!

\n

ADDITION:

\n

How to freeze a graph, starting from a saved model folder structure.\nThis post is old, so the method I used before might not work anymore; it will most likely still work with Tensorflow 1.x.

\n

Start off by downloading this file from the tensorflow library, and then this code snippet should do the trick:

\n
    import freeze_graph # the file you just downloaded\n    from tensorflow.python.saved_model import tag_constants # might be unnecessary\n\n    freeze_graph.freeze_graph(\n        input_graph=None,\n        input_saver=None,\n        input_binary=None,\n        input_checkpoint=None,\n        output_node_names="dense_output/BiasAdd",\n        restore_op_name=None,\n        filename_tensor_name=None,\n        output_graph=os.path.join(path, "frozen_graph.pb"),\n        clear_devices=None,\n        initializer_nodes=None,\n        input_saved_model_dir=path,\n        saved_model_tags=tag_constants.SERVING\n    )\n
\n

output_node_names = node name of the final operation; if you end on a dense layer, it will be layer_name/BiasAdd

\n

output_graph = output graph name

\n

input_saved_model_dir = root folder of the saved model

\n

saved_model_tags = saved model tags, in your case this can be None, I did however use a tag.

\n

ANOTHER ADDITION:

\n

The code to load models is already provided above. To actually predict you need a session; for a saved model this session is already created, but for a frozen model it's not.

\n

saved model:

\n
with tf.Session() as sess:\n    tf.saved_model.loader.load(sess, [], "foldername to saved_model.pb, only folder")\n    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})\n
\n

Frozen model:

\n
tf.import_graph_def(load_graph("frozen_inference_graph.pb"))\nwith tf.Session() as sess:\n    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})\n
\n

To further understand what your input and output layers are, you need to check them out with tensorboard, simply add the following line of code into your session:

\n
tf.summary.FileWriter("path/to/folder/to/save/logs", sess.graph)\n
\n

This line will create a log file that you can open with the cli/powershell; to see how to run tensorboard, check out this previously posted question

\n", + "system": "" + }, + { + "instruction": "Multivariate LSTM with missing values", + "input": "", + "output": "

As suggested by Fran\u00e7ois Chollet (creator of Keras) in his book, one way to handle missing values is to replace them with zero:

\n
\n

In general, with neural networks, it\u2019s safe to input missing values as\n0, with the condition that 0 isn\u2019t already a meaningful value. The\nnetwork will learn from exposure to the data that the value 0 means\nmissing data and will start ignoring the value. Note that if you\u2019re\nexpecting missing values in the test data, but the network was trained\non data without any missing values, the network won\u2019t have learned to\nignore missing values! In this situation, you should artificially\ngenerate training samples with missing entries: copy some training\nsamples several times, and drop some of the features that you expect\nare likely to be missing in the test data.

\n
\n

So you can assign zero to NaN elements, considering that zero is not used in your data (you can normalize the data to a range, say [1,2], and then assign zero to NaN elements; or alternatively, you can normalize all the values to be in range [0,1] and then use -1 instead of zero to replace NaN elements.)

\n
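A small numpy sketch of that replacement (the feature values here are made up): NaN entries are swapped for the chosen mask value, assuming 0 never occurs as a real value:

```python
import numpy as np

# Toy feature matrix with missing entries
x = np.array([[1.2, np.nan, 1.9],
              [np.nan, 1.4, 1.1]])

# Replace NaN with 0, assuming 0 does not occur as a real value
x_filled = np.where(np.isnan(x), 0.0, x)
print(x_filled)
```
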

Another alternative way is to use a Masking layer in Keras. You give it a mask value, say 0, and it would drop any timestep (i.e. row) where all its features are equal to the mask value. However, all the following layers should support masking and you also need to pre-process your data and assign the mask value to all the features of a timestep which includes one or more NaN features. Example from Keras doc:

\n
\n

Consider a Numpy data array x of shape (samples, timesteps,features),\nto be fed to an LSTM layer. You want to mask timestep #3\nand #5 because you lack data for these timesteps. You can:

\n\n\n
model = Sequential()\nmodel.add(Masking(mask_value=0., input_shape=(timesteps, features)))\nmodel.add(LSTM(32))\n
\n
\n
\n

Update (May 2021): According to an updated suggestion from Fran\u00e7ois Chollet, it might be better to use a more meaningful or informative value (instead of using zero) for masking missing values. This value could be computed (e.g. mean, median, etc.) or predicted from the data itself.

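One way to realize that updated suggestion, sketched in numpy (my own example, not from the book): impute each feature's column mean over the observed entries instead of a constant zero:

```python
import numpy as np

x = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [5.0, np.nan]])

# Per-feature mean computed over the observed (non-NaN) entries
col_mean = np.nanmean(x, axis=0)
x_imputed = np.where(np.isnan(x), col_mean, x)
print(x_imputed)
```
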
\n", + "system": "" + }, + { + "instruction": "Illegal instruction (core dumped) after running import tensorflow", + "input": "", + "output": "

I would use an older version. It looks like your CPU does not support AVX instructions.

\n\n

Quoting from their Release Page

\n\n
Breaking Changes\nPrebuilt binaries are now built against CUDA 9.0 and cuDNN 7.\nPrebuilt binaries will use AVX instructions. This may break TF on older CPUs.\n
\n\n

You have at least two options:

\n\n
    \n
  1. Use tensorflow 1.5 or older

  2. Build from source

\n
\n\n

Regarding your concern about differences: you will miss out on new features, but most basic features and the documentation are not that different.

\n", + "system": "" + }, + { + "instruction": "Plot multiple graphs in one plot using Tensorboard", + "input": "", + "output": "

If you are using the SummaryWriter from tensorboardX or pytorch 1.2, you have a method called add_scalars:

\n\n

Call it like this:

\n\n
my_summary_writer.add_scalars(f'loss/check_info', {\n    'score': score[iteration],\n    'score_nf': score_nf[iteration],\n}, iteration)\n
\n\n

And it will show up like this:

\n\n

\"tensorboard

\n\n
\n\n

Be careful that add_scalars will mess with the organisation of your runs: it will add multiple entries to this list (and thus create confusion):

\n\n

\"tensorboard

\n\n

I would recommend that instead you just do:

\n\n
my_summary_writer.add_scalar(f'check_info/score',    score[iter],    iter)\nmy_summary_writer.add_scalar(f'check_info/score_nf', score_nf[iter], iter)\n
\n", + "system": "" + }, + { + "instruction": "keras tensorboard: plot train and validation scalars in a same figure", + "input": "", + "output": "

To handle the validation logs with a separate writer, you can write a custom callback that wraps around the original TensorBoard methods.

\n\n\n\n
import os\nimport tensorflow as tf\nfrom keras.callbacks import TensorBoard\n\nclass TrainValTensorBoard(TensorBoard):\n    def __init__(self, log_dir='./logs', **kwargs):\n        # Make the original `TensorBoard` log to a subdirectory 'training'\n        training_log_dir = os.path.join(log_dir, 'training')\n        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)\n\n        # Log the validation metrics to a separate subdirectory\n        self.val_log_dir = os.path.join(log_dir, 'validation')\n\n    def set_model(self, model):\n        # Setup writer for validation metrics\n        self.val_writer = tf.summary.FileWriter(self.val_log_dir)\n        super(TrainValTensorBoard, self).set_model(model)\n\n    def on_epoch_end(self, epoch, logs=None):\n        # Pop the validation logs and handle them separately with\n        # `self.val_writer`. Also rename the keys so that they can\n        # be plotted on the same figure with the training metrics\n        logs = logs or {}\n        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}\n        for name, value in val_logs.items():\n            summary = tf.Summary()\n            summary_value = summary.value.add()\n            summary_value.simple_value = value.item()\n            summary_value.tag = name\n            self.val_writer.add_summary(summary, epoch)\n        self.val_writer.flush()\n\n        # Pass the remaining logs to `TensorBoard.on_epoch_end`\n        logs = {k: v for k, v in logs.items() if not k.startswith('val_')}\n        super(TrainValTensorBoard, self).on_epoch_end(epoch, logs)\n\n    def on_train_end(self, logs=None):\n        super(TrainValTensorBoard, self).on_train_end(logs)\n        self.val_writer.close()\n
\n\n\n\n

Using the MNIST dataset as an example:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.datasets import mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\n\nmodel = Sequential()\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\nmodel.add(Dense(10, activation='softmax'))\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=10,\n          validation_data=(x_test, y_test),\n          callbacks=[TrainValTensorBoard(write_graph=False)])\n
\n\n

You can then visualize the two curves on a same figure in TensorBoard.

\n\n

\"Screenshot\"

\n\n
\n\n

EDIT: I've modified the class a bit so that it can be used with eager execution.

\n\n

The biggest change is that I use tf.keras in the following code. It seems that the TensorBoard callback in standalone Keras does not support eager mode yet.

\n\n
import os\nimport tensorflow as tf\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.python.eager import context\n\nclass TrainValTensorBoard(TensorBoard):\n    def __init__(self, log_dir='./logs', **kwargs):\n        self.val_log_dir = os.path.join(log_dir, 'validation')\n        training_log_dir = os.path.join(log_dir, 'training')\n        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)\n\n    def set_model(self, model):\n        if context.executing_eagerly():\n            self.val_writer = tf.contrib.summary.create_file_writer(self.val_log_dir)\n        else:\n            self.val_writer = tf.summary.FileWriter(self.val_log_dir)\n        super(TrainValTensorBoard, self).set_model(model)\n\n    def _write_custom_summaries(self, step, logs=None):\n        logs = logs or {}\n        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if 'val_' in k}\n        if context.executing_eagerly():\n            with self.val_writer.as_default(), tf.contrib.summary.always_record_summaries():\n                for name, value in val_logs.items():\n                    tf.contrib.summary.scalar(name, value.item(), step=step)\n        else:\n            for name, value in val_logs.items():\n                summary = tf.Summary()\n                summary_value = summary.value.add()\n                summary_value.simple_value = value.item()\n                summary_value.tag = name\n                self.val_writer.add_summary(summary, step)\n        self.val_writer.flush()\n\n        logs = {k: v for k, v in logs.items() if not 'val_' in k}\n        super(TrainValTensorBoard, self)._write_custom_summaries(step, logs)\n\n    def on_train_end(self, logs=None):\n        super(TrainValTensorBoard, self).on_train_end(logs)\n        self.val_writer.close()\n
\n\n

The idea is the same: use a separate writer for the validation metrics so that both curves land on the same figure.

\n\n\n\n

Again, you can use the MNIST data to test it,

\n\n
from tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.train import AdamOptimizer\n\ntf.enable_eager_execution()\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\ny_train = y_train.astype(int)\ny_test = y_test.astype(int)\n\nmodel = Sequential()\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\nmodel.add(Dense(10, activation='softmax'))\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=AdamOptimizer(), metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=10,\n          validation_data=(x_test, y_test),\n          callbacks=[TrainValTensorBoard(write_graph=False)])\n
\n", + "system": "" + }, + { + "instruction": "How to set weights in Keras with a numpy array?", + "input": "", + "output": "

What is keras_layer in your code?

\n\n

You can set weights these ways:

\n\n
model.layers[i].set_weights(listOfNumpyArrays)    \nmodel.get_layer(layerName).set_weights(...)\nmodel.set_weights(listOfNumpyArrays)\n
\n\n

Where model is an instance of an existing model. \nYou can see the expected length of the list and its array shapes using the method get_weights() from the same instances above.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file", + "input": "", + "output": "

You can use this simple script to do that. But you must specify the names of the output nodes.

\n\n
import tensorflow as tf\n\nmeta_path = 'model.ckpt-22480.meta' # Your .meta file\noutput_node_names = ['output:0']    # Output nodes\n\nwith tf.Session() as sess:\n    # Restore the graph\n    saver = tf.train.import_meta_graph(meta_path)\n\n    # Load weights\n    saver.restore(sess,tf.train.latest_checkpoint('path/of/your/.meta/file'))\n\n    # Freeze the graph\n    frozen_graph_def = tf.graph_util.convert_variables_to_constants(\n        sess,\n        sess.graph_def,\n        output_node_names)\n\n    # Save the frozen graph\n    with open('output_graph.pb', 'wb') as f:\n      f.write(frozen_graph_def.SerializeToString())\n
\n\n

If you don't know the name of the output node or nodes, there are two ways

\n\n
    \n
  1. You can explore the graph and find the name with Netron or with the console summarize_graph utility.

  2. You can use all the nodes as output ones, as shown below.

\n
\n\n
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]\n
\n\n

(Note that you have to put this line just before the convert_variables_to_constants call.)

\n\n

But I think that's an unusual situation, because if you don't know the output node, you cannot actually use the graph.

\n", + "system": "" + }, + { + "instruction": "Change images slider step in TensorBoard", + "input": "", + "output": "

I answered this question over there \"TensorBoard doesn't show all data points\", but this seems to be more popular so I will quote it here.

\n\n

You don't have to change the source code for this, there is a flag called --samples_per_plugin.

\n\n

Quoting from the help command

\n\n
\n

--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly\n specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard\n randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long\n running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all\n samples of that type. For instance, \"scalars=500,images=0\" keeps 500 scalars and all images. Most\n users should not need to set this flag.\n (default: '')

\n
\n\n

So if you want to have a slider of 100 images, use:

\n\n

tensorboard --samples_per_plugin images=100

\n", + "system": "" + }, + { + "instruction": "TensorBoard doesn't show all data points", + "input": "", + "output": "

You don't have to change the source code for this, there is a flag called --samples_per_plugin.

\n\n

Quoting from the help command

\n\n
\n

--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly\n specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard\n randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long\n running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all\n samples of that type. For instance, \"scalars=500,images=0\" keeps 500 scalars and all images. Most\n users should not need to set this flag.\n (default: '')

\n
\n\n

So if you want to have a slider of 100 images, use:

\n\n

tensorboard --samples_per_plugin images=100

\n", + "system": "" + }, + { + "instruction": "What does tf.gather_nd intuitively do?", + "input": "", + "output": "

Ok, so think about it like this:

\n\n

You are providing a list of index values with which to index the provided tensor and get those slices. Each entry along the first dimension of the indices you provide is one lookup you will perform. Let's pretend that tensor is just a list of lists.

\n\n

[[0]] means you want to get one specific slice(list) at index 0 in the provided tensor. Just like this:

\n\n
[tensor[0]]\n
\n\n

[[0], [1]] means you want to get two specific slices at indices 0 and 1 like this:

\n\n
[tensor[0], tensor[1]]\n
\n\n

Now what if tensor has more than one dimension? We do the same thing:

\n\n

[[0, 0]] means you want to get one slice at index [0,0] of the 0-th list. Like this:

\n\n
[tensor[0][0]]\n
\n\n

[[0, 1], [2, 3]] means you want to return two slices, at the indices provided. Like this:

\n\n
[tensor[0][1], tensor[2][3]]\n
\n\n

I hope that makes sense. I tried using Python indexing to help explain how it would look in Python to do this to a list of lists.
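As a rough sketch (plain NumPy rather than TensorFlow, names made up), the full-index case above behaves like this:

```python
import numpy as np

# Minimal NumPy model of tf.gather_nd for full index rows: each row of
# `indices` selects one element (or slice) out of `params`.
def gather_nd(params, indices):
    return np.array([params[tuple(idx)] for idx in indices])

tensor = np.array([[ 0,  1,  2],
                   [10, 11, 12],
                   [20, 21, 22]])

print(gather_nd(tensor, [[0]]).tolist())             # -> [[0, 1, 2]]  i.e. [tensor[0]]
print(gather_nd(tensor, [[0, 1], [2, 2]]).tolist())  # -> [1, 22]      i.e. [tensor[0][1], tensor[2][2]]
```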

\n", + "system": "" + }, + { + "instruction": "TensorFlow: questions regarding tf.argmax() and tf.equal()", + "input": "", + "output": "
tf.argmax(input, axis=None, name=None, dimension=None)\n
\n\n

Returns the index with the largest value across axis of a tensor.

\n\n

input is a Tensor and axis describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.

\n\n

For your specific case let's use two arrays and demonstrate this

\n\n
pred = np.array([[31, 23,  4, 24, 27, 34],\n                [18,  3, 25,  0,  6, 35],\n                [28, 14, 33, 22, 20,  8],\n                [13, 30, 21, 19,  7,  9],\n                [16,  1, 26, 32,  2, 29],\n                [17, 12,  5, 11, 10, 15]])\n\ny = np.array([[31, 23,  4, 24, 27, 34],\n                [18,  3, 25,  0,  6, 35],\n                [28, 14, 33, 22, 20,  8],\n                [13, 30, 21, 19,  7,  9],\n                [16,  1, 26, 32,  2, 29],\n                [17, 12,  5, 11, 10, 15]])\n
\n\n

Evaluating tf.argmax(pred, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])

\n\n

Evaluating tf.argmax(y, 1) gives a tensor whose evaluation will give array([5, 5, 2, 1, 3, 0])

\n\n
tf.equal(x, y, name=None) takes two tensors(x and y) as inputs and returns the truth value of (x == y) element-wise. \n
\n\n

Following our example, tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) returns a tensor whose evaluation will give array([ True, True, True, True, True, True]).

\n\n

correct_prediction is a tensor whose evaluation will give a 1-D array of boolean values (which can be cast to 0's and 1's)

\n\n

y_test_prediction can be obtained by executing pred = tf.argmax(logits, 1)
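Putting the pieces together, the accuracy computation the question asks about can be sketched in plain NumPy (values here are made up):

```python
import numpy as np

# NumPy model of:
# tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)), tf.float32))
pred = np.array([[0.1, 0.8, 0.1],
                 [0.6, 0.3, 0.1],
                 [0.2, 0.2, 0.6]])
y = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])

correct_prediction = np.equal(np.argmax(pred, 1), np.argmax(y, 1))  # [True, False, True]
accuracy = np.mean(correct_prediction.astype(np.float32))           # ~0.667 (2 of 3 correct)
```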

\n\n

The documentation for tf.argmax and tf.equal can be accessed by following the links below.

\n\n

tf.argmax() https://www.tensorflow.org/api_docs/python/math_ops/sequence_comparison_and_indexing#argmax

\n\n

tf.equal() https://www.tensorflow.org/versions/master/api_docs/python/control_flow_ops/comparison_operators#equal

\n", + "system": "" + }, + { + "instruction": "Is it possible to make a trainable variable not trainable?", + "input": "", + "output": "

After looking at the documentation and the code, I was not able to find a way to remove a Variable from the TRAINABLE_VARIABLES.

\n\n

Here is what happens:

\n\n\n\n

First solution

\n\n

When calling the minimize method of the optimizer (see doc.), you can pass a var_list=[...] as argument with the variables you want to optimizer.

\n\n

For instance, if you want to freeze all the layers of VGG except the last two, you can pass the weights of the last two layers in var_list.
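The effect of var_list can be sketched without TensorFlow at all (a toy illustration; every name below is made up): only the parameters handed to the optimizer move, the rest stay frozen.

```python
# Toy gradient descent that only updates parameters named in var_list,
# mimicking optimizer.minimize(loss, var_list=[...]).
params = {"frozen_w": 1.0, "last_w": 1.0}

def loss(p):
    # arbitrary toy loss: (3*frozen_w + last_w - 2)^2
    return (3.0 * p["frozen_w"] + p["last_w"] - 2.0) ** 2

var_list = ["last_w"]  # analogous to passing var_list to minimize()
for _ in range(200):
    for name in var_list:
        eps = 1e-6  # numerical gradient w.r.t. this one parameter only
        p_hi = dict(params)
        p_hi[name] += eps
        grad = (loss(p_hi) - loss(params)) / eps
        params[name] -= 0.1 * grad

# frozen_w was never touched; last_w converged to minimize the loss
```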

\n\n

Second solution

\n\n

You can use a tf.train.Saver() to save variables and restore them later (see this tutorial).

\n\n\n\n

Optionally, you can decide to save only some of the variables in your checkpoint file. See the doc for more info.

\n", + "system": "" + }, + { + "instruction": "Does tensorflow use automatic or symbolic gradients?", + "input": "", + "output": "

TF uses automatic differentiation and more specifically reverse-mode auto differentiation.

\n\n
\n\n

There are 3 popular methods to calculate the derivative:

\n\n
    \n
  1. Numerical differentiation
  2. \n
  3. Symbolic differentiation
  4. \n
  5. Automatic differentiation
  6. \n
\n\n

Numerical differentiation relies on the definition of the derivative: f'(x) ≈ (f(x + h) − f(x)) / h, where you put a very small h and evaluate the function in two places. This is the most basic formula; in practice people use other formulas which give smaller estimation error. This way of calculating a derivative is suitable mostly if you do not know your function and can only sample it. Also it requires a lot of computation for a high-dimensional function.
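A minimal sketch of that forward-difference formula:

```python
# Numerical differentiation: f'(x) ≈ (f(x + h) - f(x)) / h for a small h.
def f(x):
    return x ** 3

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

approx = numeric_derivative(f, 2.0)  # ≈ 12.0, the exact value of 3*x**2 at x=2
```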

\n\n

Symbolic differentiation manipulates mathematical expressions. If you have ever used Matlab or Mathematica, then you have seen something like this: you type in an expression and get back a closed-form expression for its derivative.

\n\n

Here for every math expression they know the derivative and use various rules (product rule, chain rule) to calculate the resulting derivative. Then they simplify the end expression to obtain the resulting expression.

\n\n

Automatic differentiation manipulates blocks of computer programs. A differentiator has the rules for taking the derivative of each element of a program (when you define any op in core TF, you need to register a gradient for this op). It also uses chain rule to break complex expressions into simpler ones. Here is a good example how it works in real TF programs with some explanation.
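To make the "registered gradient per op plus chain rule" idea concrete, here is a toy reverse-mode differentiator (this illustrates only the principle, not how TF is implemented):

```python
# Toy reverse-mode autodiff: each op records its inputs together with the
# "local" gradient, and backward() applies the chain rule.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (input Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    # NOTE: real implementations visit nodes in reverse topological order;
    # this naive traversal is enough for the small example below.
    out.grad = 1.0
    stack = [out]
    while stack:
        v = stack.pop()
        for parent, local_grad in v.parents:
            parent.grad += local_grad * v.grad
            stack.append(parent)

x = Var(3.0)
y = x * x + x   # f(x) = x^2 + x, so f'(3) = 2*3 + 1 = 7
backward(y)
print(x.grad)   # -> 7.0
```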

\n\n
\n\n

You might think that automatic differentiation is the same as symbolic differentiation (in one place they operate on math expressions, in the other on computer programs). And yes, they are sometimes very similar. But for control flow statements (if, while, loops) the results can be very different:

\n\n
\n

symbolic differentiation leads to inefficient code (unless carefully\n done) and faces the difficulty of converting a computer program into a\n single expression

\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow operator overloading", + "input": "", + "output": "

If at least one of x or y is a tf.Tensor object, the expressions tf.add(x, y) and x + y are equivalent. The main reason you might use tf.add() is to specify an explicit name keyword argument for the created op, which is not possible with the overloaded operator version.

\n\n

Note that if neither x nor y is a tf.Tensor—for example if they are NumPy arrays—then x + y will not create a TensorFlow op. tf.add() always creates a TensorFlow op and converts its arguments to tf.Tensor objects. Therefore, if you are writing a library function that might accept both tensors and NumPy arrays, you might prefer to use tf.add().

\n\n

The following operators are overloaded in the TensorFlow Python API:

\n\n\n\n

Please note, __eq__ ( binary == ) is not overloaded. x == y will simply return a Python boolean indicating whether x and y refer to the same tensor. You need to use tf.equal() explicitly to check for element-wise equality. The same goes for not-equal, __ne__ ( binary != ).
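A small demonstration of that pitfall (NumPy shown for contrast, since it does overload ==; Tensorish is a made-up stand-in for a TF1 Tensor):

```python
import numpy as np

# NumPy overloads == element-wise; a TF1 Tensor does not, so x == y falls
# back to Python's default object-identity comparison.
a = np.array([1, 2, 3])
b = np.array([1, 0, 3])
print((a == b).tolist())  # -> [True, False, True] (element-wise)

class Tensorish:
    pass  # no __eq__ defined, so == compares object identity

t1, t2 = Tensorish(), Tensorish()
print(t1 == t2)  # -> False (different objects, regardless of contents)
print(t1 == t1)  # -> True  (same object)
```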

\n", + "system": "" + }, + { + "instruction": "TensorFlow: Max of a tensor along an axis", + "input": "", + "output": "

The tf.reduce_max() operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of reduction_indices, which has the same meaning as axis in NumPy. To complete your example:

\n\n
x = tf.constant([[1, 220, 55], [4, 3, -1]])\nx_max = tf.reduce_max(x, reduction_indices=[1])\nprint sess.run(x_max)  # ==> \"array([220,   4], dtype=int32)\"\n
\n\n

If you compute the argmax using tf.argmax(), you could obtain the values from a different tensor y by flattening y using tf.reshape(), converting the argmax indices into vector indices as follows, and using tf.gather() to extract the appropriate values:

\n\n
ind_max = tf.argmax(x, dimension=1)\ny = tf.constant([[1, 2, 3], [6, 5, 4]])\n\nflat_y = tf.reshape(y, [-1])  # Reshape to a vector.\n\n# N.B. Handles 2-D case only.\nflat_ind_max = ind_max + tf.cast(tf.range(tf.shape(y)[0]) * tf.shape(y)[1], tf.int64)\n\ny_ = tf.gather(flat_y, flat_ind_max)\n\nprint sess.run(y_) # ==> \"array([2, 6], dtype=int32)\"\n
\n", + "system": "" + }, + { + "instruction": "Visualizing output of convolutional layer in tensorflow", + "input": "", + "output": "

I don't know of a helper function but if you want to see all the filters you can pack them into one image with some fancy uses of tf.transpose.

\n\n

So if you have a tensor that's images x ix x iy x channels

\n\n
>>> V = tf.Variable()\n>>> print V.get_shape()\n\nTensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)])\n
\n\n

So in this example ix = 256, iy=256, channels=32

\n\n

first slice off 1 image, and remove the image dimension

\n\n
V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...]\nV = tf.reshape(V,(iy,ix,channels))\n
\n\n

Next add a couple of pixels of zero padding around the image

\n\n
ix += 4\niy += 4\nV = tf.image.resize_image_with_crop_or_pad(V, iy, ix)\n
\n\n

Then reshape so that instead of 32 channels you have 4x8 channels, lets call them cy=4 and cx=8.

\n\n
V = tf.reshape(V,(iy,ix,cy,cx)) \n
\n\n

Now the tricky part. tf seems to return results in C-order, numpy's default.

\n\n

The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy), before listing the channels of the second pixel (incrementing ix). Going across the rows of pixels (ix) before incrementing to the next row (iy).

\n\n

We want the order that would lay out the images in a grid.\nSo you go across a row of an image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So:

\n\n
V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix\n
\n\n

Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.

\n\n
newtensor = np.einsum('yxYX->YyXx',oldtensor)\n
\n\n

anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor:

\n\n
# image_summary needs 4d input\nV = tf.reshape(V,(1,cy*iy,cx*ix,1))\n
\n\n

try tf.image_summary on that, you should get a grid of little images.
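The transpose/reshape bookkeeping can be sanity-checked in plain NumPy with toy sizes:

```python
import numpy as np

# Tile cy*cx channels of an iy x ix image into one (cy*iy) x (cx*ix) grid.
iy, ix, cy, cx = 4, 5, 2, 3
V = np.arange(iy * ix * cy * cx, dtype=np.float32).reshape(iy, ix, cy * cx)

V = V.reshape(iy, ix, cy, cx)
grid = V.transpose(2, 0, 3, 1).reshape(cy * iy, cx * ix)  # cy,iy,cx,ix

# the einsum spelling from above gives the same layout
grid2 = np.einsum('yxYX->YyXx', V).reshape(cy * iy, cx * ix)
print(grid.shape)  # -> (8, 15)
```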

\n\n

Below is an image of what one gets after following all the steps here.

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to replace a node in a calculation graph?", + "input": "", + "output": "

TL;DR: If you can define the two computations as Python functions, you should do that. If you can't, there's more advanced functionality in TensorFlow to serialize and import graphs, which allows you to compose graphs from different sources.

\n\n

One way to do this in TensorFlow is to build the disjoint computations as separate tf.Graph objects, then convert them to serialized protocol buffers using Graph.as_graph_def():

\n\n
with tf.Graph().as_default() as g_1:\n  input = tf.placeholder(tf.float32, name=\"input\")\n  y = f(input)\n  # NOTE: using identity to get a known name for the output tensor.\n  output = tf.identity(y, name=\"output\")\n\ngdef_1 = g_1.as_graph_def()\n\nwith tf.Graph().as_default() as g_2:  # NOTE: g_2 not g_1\n  input = tf.placeholder(tf.float32, name=\"input\")\n  z = g(input)\n  output = tf.identity(z, name=\"output\")\n\ngdef_2 = g_2.as_graph_def()\n
\n\n

Then you could compose gdef_1 and gdef_2 into a third graph, using tf.import_graph_def():

\n\n
with tf.Graph().as_default() as g_combined:\n  x = tf.placeholder(tf.float32, name=\"\")\n\n  # Import gdef_1, which performs f(x).\n  # \"input:0\" and \"output:0\" are the names of tensors in gdef_1.\n  y, = tf.import_graph_def(gdef_1, input_map={\"input:0\": x},\n                           return_elements=[\"output:0\"])\n\n  # Import gdef_2, which performs g(y)\n  z, = tf.import_graph_def(gdef_2, input_map={\"input:0\": y},\n                           return_elements=[\"output:0\"])\n
\n", + "system": "" + }, + { + "instruction": "pip installation error "No such file or directory: setup.py"", + "input": "", + "output": "

from https://github.com/tensorflow/tensorflow/issues/56

\n\n
\n

The command to type is \"pip install --upgrade pip\", and this should be\n added to the instructions right after where they tell the user to\n \"source bin/activate\"

\n
\n", + "system": "" + }, + { + "instruction": "Keras difference between generator and sequence", + "input": "", + "output": "

Those methods are roughly the same. It is correct to subclass\nSequence when your dataset doesn't fit in memory. But you shouldn't\nrun any preprocessing in any of the class' methods because that will\nbe reexecuted once per epoch wasting lots of computing resources.

\n\n

It is probably also easier to shuffle the samples rather than their\nindices. Like this:

\n\n

from random import shuffle

\n\n
class DataGen(Sequence):\n    def __init__(self, batch_size, preproc, type, x_set, y_set):\n        self.samples = list(zip(x_set, y_set))\n        self.batch_size = batch_size\n        shuffle(self.samples)\n        self.type = type\n        self.preproc = preproc\n\n    def __len__(self):\n        return int(np.ceil(len(self.samples) / self.batch_size))\n\n    def __getitem__(self, i):\n        batch = self.samples[i * self.batch_size:(i + 1) * self.batch_size]\n        return self.preproc.process(*zip(*batch))\n\n    def on_epoch_end(self):\n        shuffle(self.samples)\n
\n\n

I think it is impossible to say why you run out of memory without\nknowing more about your data. My guess would be that your preproc\nfunction is doing something wrong. You can debug it by running:

\n\n
for e in DataGen(batch_size, preproc, *train):\n    print(e)\nfor e in DataGen(batch_size, preproc, *dev):\n    print(e)\n
\n\n

You will most likely run out of memory.

\n", + "system": "" + }, + { + "instruction": "Unknown initializer: GlorotUniform when loading Keras model", + "input": "", + "output": "

I ran into the same issue. After changing:

\n\n

from tensorflow import keras

\n\n

to:

\n\n

import keras

\n\n

life is once again worth living.

\n", + "system": "" + }, + { + "instruction": "'Tensor' object has no attribute 'lower'", + "input": "", + "output": "

The tensor must be passed to the layer when you are calling it, not as a constructor argument. Therefore it must be like this:

\n\n
x = Flatten()(x)  # first the layer is constructed and then it is called on x\n
\n\n

To make it more clear, it is equivalent to this:

\n\n
flatten_layer = Flatten()  # instantiate the layer\nx = flatten_layer(x)       # call it on the given tensor\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Keras Copy Weights From One Model to Another", + "input": "", + "output": "

Actually what you've done is much more than simply copying weights. You made these two models identical all the time. Every time you update one model - the second one is also updated - as both models have the same weights variables.

\n\n

If you want to just copy weights - the simplest way is by this command:

\n\n
target_model.set_weights(model.get_weights()) \n
\n", + "system": "" + }, + { + "instruction": "How to disable dropout while prediction in keras?", + "input": "", + "output": "

Keras does this by default. In Keras dropout is disabled in test mode. You can look at the code here and see that they use the dropped input in training and the actual input while testing.
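The mechanism is easy to sketch (inverted dropout, as a toy stand-alone illustration):

```python
import random

# Inverted dropout: kept units are scaled by 1/keep_prob during training,
# so that test/predict time is a plain identity and needs no special handling.
def dropout(x, rate, training):
    if not training:
        return list(x)  # test mode: no-op
    keep = 1.0 - rate
    return [v / keep if random.random() < keep else 0.0 for v in x]

print(dropout([1.0, 2.0, 3.0], rate=0.5, training=False))  # -> [1.0, 2.0, 3.0]
```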

\n\n

As far as I know, you have to build your own training function from the layers and specify the training flag to predict with dropout (e.g. it's not possible to specify a training flag for the predict functions). This is a problem if you want to do GANs, which use the intermediate output for training and also train the network as a whole, due to a divergence between generated training images and generated test images.

\n", + "system": "" + }, + { + "instruction": "Tensorflow : What is the relationship between .ckpt file and .ckpt.meta and .ckpt.index , and .pb file", + "input": "", + "output": "\n\n

There are a lot of questions here about how to save and restore a graph. See the answer here for instance, but be careful that the two cited tutorials, though really helpful, are far from perfect, and a lot of people still seem to struggle to import a model in c++.

\n\n

EDIT:\nit looks like you can also use the .ckpt files in c++ now, so I guess you don't necessarily need the .pb file any more.

\n", + "system": "" + }, + { + "instruction": "What is the difference between Luong attention and Bahdanau attention?", + "input": "", + "output": "

I went through this Effective Approaches to Attention-based Neural Machine Translation. In the section 3.1 They have mentioned the difference between two attentions as follows,

\n
    \n
  1. Luong attention uses the top hidden layer states in both the encoder and decoder. But Bahdanau attention takes the concatenation of the forward and backward source hidden states (top hidden layer).

    \n
  2. \n
  3. In Luong attention they get the decoder hidden state at time t. Then calculate attention scores and from that get the context vector which will be concatenated with hidden state of the decoder and then predict.

    \n

    But in Bahdanau, at time t we consider the t-1 hidden state of the decoder. Then we calculate the alignment and context vectors as above. But then we concatenate this context with the hidden state of the decoder at t-1. So before the softmax, this concatenated vector goes inside a GRU.

    \n
  4. \n
  5. Luong has different types of alignments. Bahdanau has only the concat score alignment model.

    \n
  6. \n
\n

\"Alignment

\n", + "system": "" + }, + { + "instruction": "What's the difference between Tensor and Variable in Tensorflow", + "input": "", + "output": "\n

Variable is basically a wrapper on Tensor that maintains state across multiple calls to run, and I think makes some things easier with saving and restoring graphs. A Variable needs to be initialized before you can run it. You provide an initial value when you define the Variable, but you have to call its initializer function in order to actually assign this value in your session and then use the Variable. A common way to do this is with tf.global_variables_initializer().

\n

For example:

\n
import tensorflow as tf\ntest_var = tf.Variable([111, 11, 1])\nsess = tf.Session()\nsess.run(test_var)\n\n# Error!\n\nsess.run(tf.global_variables_initializer())  # initialize variables\nsess.run(test_var)\n# array([111, 11, 1], dtype=int32)\n
\n

As for why you use Variables instead of Tensors, basically a Variable is a Tensor with additional capability and utility. You can specify a Variable as trainable (the default, actually), meaning that your optimizer will adjust it in an effort to minimize your cost function; you can specify where the Variable resides on a distributed system; you can easily save and restore Variables and graphs. Some more information on how to use Variables can be found here.

\n", + "system": "" + }, + { + "instruction": "What is the difference between Keras and tf.keras in TensorFlow 1.1+?", + "input": "", + "output": "

tf.keras (formerly tf.contrib.keras) is an implementation of Keras 2 implemented exclusively with/for TensorFlow. It is hosted on the tensorflow repo and has a distinct code base from the official repo (the last commit there in the tf-keras branch dates back to May 2017).

\n\n

As a rule of thumb, if your code use any tensorflow-specific code, say anything in tf.data.* for providing inputs or tf.summary.* for visualization in tensorboard, it is simpler to just use tf.keras. (Some may even recommend not using the reference Keras implementation with TF because of occasional problems it has with this toolkit).

\n\n

On the other hand, if you plan to actively maintain a framework-agnostic code, using keras' own package is your only choice.

\n\n

If you don't care much about being framework-agnostic but don't use tensorflow-specific code, I would probably advise to go with tf.keras and start using tensorflow-specific code, esp. tf.data which is a game-changer in my opinion.

\n\n

EDIT

\n\n

I attended a talk by Chollet on TF2 (couldn't find a recording online) in which he basically said that support for frameworks other than TF would eventually drop and future developments of Keras would happen exclusively in tf.keras.

\n\n

From what I can see, this is already happening, as Keras' commit stream is getting thin these days.

\n\n

It makes a lot of sense since, as of now, the only other popular DL framework is pytorch, which is not supported by Keras. Keeping Keras code \"agnostic\" to tensorflow -- the only major framework it is supporting -- makes less and less sense.

\n\n

So today, my answer would be to use tf.keras by default, and keep Keras for legacy projects that would be hard to migrate -- that is the future-proof choice for Keras.

\n", + "system": "" + }, + { + "instruction": "TensorFlow wasn't compiled to use SSE (etc.) instructions, but these are available", + "input": "", + "output": "

Those are warnings (as indicated by the W after the colon; errors have an E there).

\n

The warnings refer to the fact that your CPU supports SSE Instructions, which allow some fast in-hardware-parallel operations. Enabling these operations is a compile-time operation (i.e. to use SSE you need to build the library from the source enabling the specific SSE version you're targeting), in which case you might take a look at this question.

\n

Note, however, that SSE support influences only the computation speed. Tensorflow will work with or without SSE, but it might take longer for your code to run.\nNote, also, that this influences only the CPU. If you're using the GPU build of Tensorflow, all the operations run on the GPU will not benefit of SSE instructions.

\n", + "system": "" + }, + { + "instruction": "Why is my GPU slower than CPU when training LSTM/RNN models?", + "input": "", + "output": "

If you use Keras, use CuDNNLSTM in place of LSTM or CuDNNGRU in place of GRU. In my case (2 Tesla M60), I am seeing 10x boost of performance. By the way I am using batch size 128 as suggested by @Alexey Golyshev.

\n", + "system": "" + }, + { + "instruction": "ValueError: No gradients provided for any variable", + "input": "", + "output": "

This problem is caused by the following line: tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y)

\n

Based on documentation you should have

\n
\n

labels: Each row labels[i] must be a valid probability distribution.

\n

logits: Unscaled log probabilities.

\n
\n

So logits is supposed to be your hypothesis and thus equal to activation, and the valid probability distribution is Y. So just change it to tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation)

\n", + "system": "" + }, + { + "instruction": "How to use tf.while_loop() in tensorflow", + "input": "", + "output": "

What is stopping you from adding more functionality to the body? You can build whatever complex computational graph you like in the body and take whatever inputs you like from the enclosing graph. Also, outside of the loop, you can then do whatever you want with whatever outputs you return. As you can see from the amount of 'whatevers', TensorFlow's control flow primitives were built with much generality in mind. Below is another 'simple' example, in case it helps.

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef body(x):\n    a = tf.random_uniform(shape=[2, 2], dtype=tf.int32, maxval=100)\n    b = tf.constant(np.array([[1, 2], [3, 4]]), dtype=tf.int32)\n    c = a + b\n    return tf.nn.relu(x + c)\n\ndef condition(x):\n    return tf.reduce_sum(x) < 100\n\nx = tf.Variable(tf.constant(0, shape=[2, 2]))\n\nwith tf.Session():\n    tf.global_variables_initializer().run()\n    result = tf.while_loop(condition, body, [x])\n    print(result.eval())\n
\n", + "system": "" + }, + { + "instruction": "Installing tensorflow with anaconda in windows", + "input": "", + "output": "

Google has recently launched a newer version of TensorFlow, r0.12, which includes support for Windows; both the CPU and GPU versions can now be installed using Python >=3.5.2 (64-bit only).

\n

For the CPU-only version, open a command prompt and enter the following command:

\n
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl\n
\n

Follow this TensorFlow on Windows for step-by-step instructions.

\n

UPDATE

\n

To install current latest version please run following command:

\n
pip install tensorflow #CPU only\npip install tensorflow-gpu #For GPU support\n
\n

UPDATE 2020

\n

TensorFlow 2.0 now has a single package for both CPU and GPU versions; simply run

\n
pip install tensorflow\n
\n

If you're using Anaconda you can install TensorFlow GPU version and all of its dependencies (CUDA, cuDNN) by running:

\n
conda install -c anaconda tensorflow-gpu\n
\n", + "system": "" + }, + { + "instruction": "How to convert tf.int64 to tf.float32?", + "input": "", + "output": "

You can cast generally using:

\n\n
tf.cast(my_tensor, tf.float32)\n
\n\n

Replace tf.float32 with your desired type.

\n\n
\n\n

Edit: It seems, at the moment at least, that tf.cast won't cast to an unsigned dtype (e.g. tf.uint8). To work around this, you can cast to the signed equivalent and use tf.bitcast to get all the way. e.g.

\n\n
tf.bitcast(tf.cast(my_tensor, tf.int8), tf.uint8)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Dictionary lookup with String tensor", + "input": "", + "output": "

If you want to run this with new TF 2.x code (eager execution is enabled by default), below is a quick code snippet.

\n\n
import tensorflow as tf\n\n# build a lookup table\ntable = tf.lookup.StaticHashTable(\n    initializer=tf.lookup.KeyValueTensorInitializer(\n        keys=tf.constant([0, 1, 2, 3]),\n        values=tf.constant([10, 11, 12, 13]),\n    ),\n    default_value=tf.constant(-1),\n    name=\"class_weight\"\n)\n\n# now let us do a lookup\ninput_tensor = tf.constant([0, 0, 1, 1, 2, 2, 3, 3])\nout = table.lookup(input_tensor)\nprint(out)\n
\n\n

Output:

\n\n
tf.Tensor([10 10 11 11 12 12 13 13], shape=(8,), dtype=int32)\n
\n", + "system": "" + }, + { + "instruction": "InvalidArgumentError: cannot compute MatMul as input #0(zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]", + "input": "", + "output": "

Part 1: The problem is indeed the datatype of your input. By default your keras model expects float32 but you are passing a float64. You can either change the dtype of the model or change the input to float32.

\n\n

To change your model:

\n\n
def make_model():\n    net = tf.keras.Sequential()\n    net.add(tf.keras.layers.Dense(4, activation='relu', dtype='float32'))\n    net.add(tf.keras.layers.Dense(4, activation='relu'))\n    net.add(tf.keras.layers.Dense(1))\n    return net\n
\n\n

To change your input:\ny = y.astype('float32')

\n\n

Part 2: You need to call the function that computes your model (i.e. model(data)) under tf.GradientTape(). For example, you can replace your compute_loss method with the following:

\n\n
def compute_loss(model, x, y):\n    pred = model(x)\n    return tf.reduce_mean(tf.square(tf.subtract(pred, y)))\n
\n", + "system": "" + }, + { + "instruction": "How can I clear a model created with Keras and Tensorflow(as backend)?", + "input": "", + "output": "

keras.backend.clear_session() should clear the previous model. From https://keras.io/backend/:

\n\n
\n

Destroys the current TF graph and creates a new one.\n Useful to avoid clutter from old models / layers.

\n
\n", + "system": "" + }, + { + "instruction": "keras vs. tensorflow.python.keras - which one to use?", + "input": "", + "output": "

tensorflow.python.keras is just a bundle of keras with a single backend inside the tensorflow package. This allows you to start using keras by installing just pip install tensorflow.

\n\n

The keras package contains the full keras library with three supported backends: tensorflow, theano and CNTK. If you ever wish to switch between backends, you should choose the keras package. This approach is also more flexible because it allows installing keras updates independently from tensorflow (which may not be easy to update, for example, because the next version may require a different version of the CUDA driver) or vice versa. For this reason, I prefer to install keras as a separate package.

\n\n

In terms of API, there is no difference right now, but keras will probably be integrated more tightly into tensorflow in the future. So there is a chance there will be tensorflow-only features in keras, but even in this case it's not a blocker to use keras package.

\n\n

UPDATE

\n\n

As of Keras 2.3.0 release, Francois Chollet announced that users should switch towards tf.keras instead of plain Keras. Therefore, the change to tf.keras instead of keras should be made by all users.

\n", + "system": "" + }, + { + "instruction": "How to pip install old version of library(tensorflow)?", + "input": "", + "output": "

This works for me on Mac OS 10.13.1.

\n\n
pip install --user tensorflow==1.3.0\n
\n", + "system": "" + }, + { + "instruction": "Efficient element-wise multiplication of a matrix and a vector in TensorFlow", + "input": "", + "output": "

The simplest code to do this relies on the broadcasting behavior of tf.multiply()*, which is based on numpy's broadcasting behavior:

\n\n
x = tf.constant(5.0, shape=[5, 6])\nw = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])\nxw = tf.multiply(x, w)\nmax_in_rows = tf.reduce_max(xw, 1)\n\nsess = tf.Session()\nprint sess.run(xw)\n# ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],\n#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]]\n\nprint sess.run(max_in_rows)\n# ==> [25.0, 25.0, 25.0, 25.0, 25.0]\n
\n\n

* In older versions of TensorFlow, tf.multiply() was called tf.mul(). You can also use the * operator (i.e. xw = x * w) to perform the same operation.

\n", + "system": "" + }, + { + "instruction": "What is the best way to implement weight constraints in TensorFlow?", + "input": "", + "output": "

You can take the Lagrangian approach and simply add a penalty for features of the variable you don't want.

\n\n

e.g. To encourage theta to be non-negative, you could add the following to the optimizer's objective function.

\n\n
    added_loss = -tf.minimum( tf.reduce_min(theta),0)\n
\n\n

If any theta are negative, then added_loss will be positive, otherwise zero. Scaling that to a meaningful value is left as an exercise to the reader. Scaling too little will not exert enough pressure. Too much may make things unstable.
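Numerically, the penalty term behaves like this (a toy check):

```python
# -min(min(theta), 0) is zero when every entry is non-negative and grows
# linearly as the most negative entry sinks further below zero.
def nonneg_penalty(theta):
    return -min(min(theta), 0.0)

print(nonneg_penalty([0.5, 1.2, 0.0]))   # -> 0.0 (no pressure)
print(nonneg_penalty([0.5, -0.3, 1.0]))  # -> 0.3 (pushes theta upward)
```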

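 A small numpy sketch of how that penalty term behaves (nonneg_penalty is a hypothetical helper name mirroring the TensorFlow expression above): ```python import numpy as np def nonneg_penalty(theta): # Mirrors -tf.minimum(tf.reduce_min(theta), 0): positive only when # the smallest entry of theta is negative, zero otherwise. return -min(np.min(theta), 0.0) assert nonneg_penalty(np.array([-1.5, 2.0, 3.0])) == 1.5 # violation -> positive assert nonneg_penalty(np.array([0.0, 2.0, 3.0])) == 0.0 # no violation -> zero ```   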
\n", + "system": "" + }, + { + "instruction": "How can I clear a model created with Keras and Tensorflow(as backend)?", + "input": "", + "output": "

keras.backend.clear_session() should clear the previous model. From https://keras.io/backend/:

\n\n
\n

Destroys the current TF graph and creates a new one.\n Useful to avoid clutter from old models / layers.

\n
\n", + "system": "" + }, + { + "instruction": "Custom TensorFlow Keras optimizer", + "input": "", + "output": "

Update: TF2.2 forced me to clean up all implementations - so now they can be used as a reference for TF best practices. Also added a section below on _get_hyper vs. _set_hyper.

\n\n
\n\n

I've implemented Keras AdamW in all major TF & Keras versions - I invite you to examine optimizers_v2.py. Several points:

\n\n\n\n
\n\n\n\n

 The latter deals with sparse layers - e.g. Embedding - and the former with everything else; example. 

\n\n\n\n

When defining trainable tf.Variables; example: weights' first and second order moments (e.g. Adam). It uses add_slot().

\n\n
\n\n

_get_hyper vs. _set_hyper: they enable setting and getting Python literals (int, str, etc), callables, and tensors. They exist largely for convenience: anything set via _set_hyper can be retrieved via _get_hyper, avoiding repeating boilerplate code. I dedicated a Q&A to it here.

\n", + "system": "" + }, + { + "instruction": "How does the Flatten layer work in Keras?", + "input": "", + "output": "

 The Flatten() operator unrolls the values beginning at the last dimension (at least for Theano, which is \"channels first\", not \"channels last\" like TF; I can't run TensorFlow in my environment). This is equivalent to numpy.reshape with 'C' ordering: 

\n\n
\n

\u2018C\u2019 means to read / write the elements using C-like index order, with\n the last axis index changing fastest, back to the first axis index\n changing slowest.

\n
\n\n

 Here is a standalone example illustrating the Flatten operator with the Keras Functional API. You should be able to adapt it easily for your environment. 

\n\n
import numpy as np\nfrom keras.layers import Input, Flatten\nfrom keras.models import Model\ninputs = Input(shape=(3,2,4))\n\n# Define a model consisting only of the Flatten operation\nprediction = Flatten()(inputs)\nmodel = Model(inputs=inputs, outputs=prediction)\n\nX = np.arange(0,24).reshape(1,3,2,4)\nprint(X)\n#[[[[ 0  1  2  3]\n#   [ 4  5  6  7]]\n#\n#  [[ 8  9 10 11]\n#   [12 13 14 15]]\n#\n#  [[16 17 18 19]\n#   [20 21 22 23]]]]\nmodel.predict(X)\n#array([[  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,\n#         11.,  12.,  13.,  14.,  15.,  16.,  17.,  18.,  19.,  20.,  21.,\n#         22.,  23.]], dtype=float32)\n
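 The same unrolling can be reproduced with plain numpy, confirming the 'C' ordering described above (batch axis kept, remaining axes flattened with the last axis varying fastest): ```python import numpy as np X = np.arange(0, 24).reshape(1, 3, 2, 4) # Flatten keeps the batch axis and unrolls the rest in C order, # i.e. the last axis index changes fastest. flat = X.reshape(X.shape[0], -1, order='C') assert flat.shape == (1, 24) assert flat[0].tolist() == list(range(24)) ```   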
\n", + "system": "" + }, + { + "instruction": "Can I export a tensorflow summary to CSV?", + "input": "", + "output": "

 While the answer here is as requested, within TensorBoard it only allows you to download a CSV for a single run of a single tag.\nIf you have, for example, 10 tags and 20 runs (which is not much at all) you would need to do the above step 200 times (that alone will probably take you more than an hour).\nIf you then for some reason wanted to actually do something with the data for all runs of a single tag, you would need to write some weird CSV accumulation script or copy everything by hand (which will probably cost you more than a day). 

\n\n

Therefore I would like to add a solution that extracts a CSV file for every tag with all runs contained. Column headers are the run path names and row indices are the run step numbers.

\n\n
import os\nimport numpy as np\nimport pandas as pd\n\nfrom collections import defaultdict\nfrom tensorboard.backend.event_processing.event_accumulator import EventAccumulator\n\n\ndef tabulate_events(dpath):\n    summary_iterators = [EventAccumulator(os.path.join(dpath, dname)).Reload() for dname in os.listdir(dpath)]\n\n    tags = summary_iterators[0].Tags()['scalars']\n\n    for it in summary_iterators:\n        assert it.Tags()['scalars'] == tags\n\n    out = defaultdict(list)\n    steps = []\n\n    for tag in tags:\n        steps = [e.step for e in summary_iterators[0].Scalars(tag)]\n\n        for events in zip(*[acc.Scalars(tag) for acc in summary_iterators]):\n            assert len(set(e.step for e in events)) == 1\n\n            out[tag].append([e.value for e in events])\n\n    return out, steps\n\n\ndef to_csv(dpath):\n    dirs = os.listdir(dpath)\n\n    d, steps = tabulate_events(dpath)\n    tags, values = zip(*d.items())\n    np_values = np.array(values)\n\n    for index, tag in enumerate(tags):\n        df = pd.DataFrame(np_values[index], index=steps, columns=dirs)\n        df.to_csv(get_file_path(dpath, tag))\n\n\ndef get_file_path(dpath, tag):\n    file_name = tag.replace(\"/\", \"_\") + '.csv'\n    folder_path = os.path.join(dpath, 'csv')\n    if not os.path.exists(folder_path):\n        os.makedirs(folder_path)\n    return os.path.join(folder_path, file_name)\n\n\nif __name__ == '__main__':\n    path = \"path_to_your_summaries\"\n    to_csv(path)\n
\n\n

My solution builds upon: https://stackoverflow.com/a/48774926/2230045

\n\n
\n\n

EDIT:

\n\n

I created a more sophisticated version and released it on GitHub: https://github.com/Spenhouet/tensorboard-aggregator

\n\n

This version aggregates multiple tensorboard runs and is able to save the aggregates to a new tensorboard summary or as a .csv file.

\n", + "system": "" + }, + { + "instruction": "How to do slice assignment in Tensorflow", + "input": "", + "output": "

Currently, you can do slice assignment for variables in TensorFlow. There is no specific named function for it, but you can select a slice and call assign on it:

\n\n
my_var = my_var[4:8].assign(tf.zeros(4))\n
\n\n

First, note that (after having looked at the documentation) it seems that the return value of assign, even when applied to a slice, is always a reference to the whole variable after applying the update.

\n\n

EDIT: The information below is either deprecated, imprecise or was always wrong. The fact is that the returned value of assign is a tensor that can be readily used and already incorporates the dependency to the assignment, so simply evaluating that or using it in further operations will ensure it gets executed without need for an explicit tf.control_dependencies block.

\n\n
\n\n

Note, also, that this will only add the assignment op to the graph, but will not run it unless it is explicitly executed or set as a dependency of some other operation. A good practice is to use it in a tf.control_dependencies context:

\n\n
with tf.control_dependencies([my_var[4:8].assign(tf.zeros(4))]):\n    my_var = tf.identity(my_var)\n
\n\n

You can read more about it in TensorFlow issue #4638.

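 For comparison, this is the plain numpy behavior that the TensorFlow slice-assign emulates (in numpy the assignment is immediate, with no graph or dependency mechanics): ```python import numpy as np my_var = np.arange(10, dtype=np.float32) my_var[4:8] = np.zeros(4) # in-place analogue of my_var[4:8].assign(tf.zeros(4)) assert my_var.tolist() == [0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 8.0, 9.0] ```   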
\n", + "system": "" + }, + { + "instruction": "Tensorflow: When to use tf.expand_dims?", + "input": "", + "output": "

 expand_dims will not add or remove elements in a tensor; it just changes the shape by adding a dimension of size 1. For example, a vector with 10 elements could be treated as a 10x1 matrix. 

\n\n

 A situation where I needed expand_dims was when I tried to build a ConvNet to classify grayscale images. The grayscale images were loaded as matrices of size [320, 320]. However, tf.nn.conv2d requires the input to be [batch, in_height, in_width, in_channels], and the in_channels dimension was missing from my data, where in this case it should be 1. So I used expand_dims to add one more dimension. 

\n\n

In your case, I do not think you need expand_dims.

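 tf.expand_dims mirrors numpy.expand_dims, so the grayscale-image shape bookkeeping described above can be sketched without TensorFlow: ```python import numpy as np img = np.zeros((320, 320)) # grayscale image, no channel axis img = np.expand_dims(img, axis=-1) # add in_channels: (320, 320, 1) batch = np.expand_dims(img, axis=0) # add batch: (1, 320, 320, 1) # No elements were added or removed, only the shape changed. assert batch.shape == (1, 320, 320, 1) assert batch.size == 320 * 320 ```   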
\n", + "system": "" + }, + { + "instruction": "Trouble with TensorFlow in Jupyter Notebook", + "input": "", + "output": "

Update

\n\n

 The TensorFlow website supports five installation methods. 

\n\n

 To my understanding, using the pip installation directly would be fine for importing TensorFlow in Jupyter Notebook (as long as Jupyter Notebook was installed and there were no other issues) because it doesn't create any virtual environments. 

\n\n

 With the virtualenv and conda installs, you would need to install Jupyter into the newly created TensorFlow environment to allow TensorFlow to work in Jupyter Notebook (see the following Original Post section for more details). 

\n\n

 I believe the Docker install may require some port setup in VirtualBox to make TensorFlow work in Jupyter Notebook (see this post). 

\n\n

 For installing from sources, it also depends on which environment the source code is built and installed into. If it's installed into a freshly created virtual environment, or a virtual environment that didn't have Jupyter Notebook installed, you would also need to install Jupyter Notebook into that virtual environment to use TensorFlow in Jupyter Notebook. 

\n\n

Original Post

\n\n

 To use tensorflow in IPython and/or Jupyter (IPython) Notebook, you'll need to install IPython and Jupyter (after installing tensorflow) under the tensorflow-activated environment. 

\n\n

 Before installing IPython and Jupyter under the tensorflow environment, if you run the following commands in a terminal: 

\n\n
 username$ source activate tensorflow\n\n(tensorflow)username$ which ipython\n(tensorflow)username$ /Users/username/anaconda/bin/ipython\n\n(tensorflow)username$ which jupyter\n(tensorflow)username$ /Users/username/anaconda/bin/jupyter\n\n(tensorflow)username$ which python\n(tensorflow)username$ /Users/username/anaconda/envs/tensorflow/bin/python\n
    
\n\n

 This is telling you that when you open python from the terminal, it is using the one installed in the \"environment\" where tensorflow is installed, which is why you can import tensorflow successfully. However, if you try to run ipython and/or jupyter notebook, these are not installed in the environment equipped with tensorflow, so Python has to fall back to the regular environment, which has no tensorflow module, and you get an import error. 

\n\n

You can verify this by listing out the items under envs/tensorflow/bin directory:

\n\n
 (tensorflow) username$ ls /Users/username/anaconda/envs/tensorflow/bin/\n
    
\n\n

 You will see that there is no \"ipython\" or \"jupyter\" listed there. 

\n\n

 To use tensorflow with IPython and/or Jupyter Notebook, simply install them into the tensorflow environment: 

\n\n
(tensorflow) username$ conda install ipython\n(tensorflow) username$ pip install jupyter #(use pip3 for python3)\n
\n\n

 After installing them, a \"jupyter\" and an \"ipython\" should show up in the envs/tensorflow/bin/ directory. 

\n\n

 Notes:\nBefore trying to import the tensorflow module in Jupyter Notebook, close the notebook, run \"source deactivate tensorflow\", and then reactivate it (\"source activate tensorflow\") to make sure things are \"on the same page\". Then reopen the notebook and try importing tensorflow. It should import successfully (it worked on mine at least). 

\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\\Users\\polon\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py)", + "input": "", + "output": "

 In TensorFlow 2 there is no need to perform 

\n\n
from tensorflow import set_random_seed\n
\n\n

in order to run

\n\n
set_random_seed(x)\n
\n\n

 (as it was in older versions) 

\n\n

 You only have to run 

\n\n
import tensorflow\ntensorflow.random.set_seed(x)\n
\n\n

Thanks to @David Buck

\n", + "system": "" + }, + { + "instruction": "how to get string value out of tf.tensor which dtype is string", + "input": "", + "output": "

You can use tf.py_func to wrap load_audio_file().

\n\n
import tensorflow as tf\n\ntf.enable_eager_execution()\n\ndef load_audio_file(file_path):\n    # you should decode bytes type to string type\n    print(\"file_path: \",bytes.decode(file_path),type(bytes.decode(file_path)))\n    return file_path\n\ntrain_dataset = tf.data.Dataset.list_files('clean_4s_val/*.wav')\ntrain_dataset = train_dataset.map(lambda x: tf.py_func(load_audio_file, [x], [tf.string]))\n\nfor one_element in train_dataset:\n    print(one_element)\n\nfile_path:  clean_4s_val/1.wav <class 'str'>\n(<tf.Tensor: id=32, shape=(), dtype=string, numpy=b'clean_4s_val/1.wav'>,)\nfile_path:  clean_4s_val/3.wav <class 'str'>\n(<tf.Tensor: id=34, shape=(), dtype=string, numpy=b'clean_4s_val/3.wav'>,)\nfile_path:  clean_4s_val/2.wav <class 'str'>\n(<tf.Tensor: id=36, shape=(), dtype=string, numpy=b'clean_4s_val/2.wav'>,)\n
\n\n

UPDATE for TF 2

\n\n

The above solution will not work with TF 2 (tested with 2.2.0), even when replacing tf.py_func with tf.py_function, giving

\n\n
InvalidArgumentError: TypeError: descriptor 'decode' requires a 'bytes' object but received a 'tensorflow.python.framework.ops.EagerTensor'\n
\n\n

To make it work in TF 2, make the following changes:

\n\n\n", + "system": "" + }, + { + "instruction": "tensorflow Mac OS gpu support", + "input": "", + "output": "

I wrote a little tutorial on compiling TensorFlow 1.2 with GPU support on macOS. I think it's customary to copy relevant parts to SO, so here it goes:

\n\n
    \n
  1. If you haven\u2019t used a TensorFlow-GPU set-up before, I suggest first setting everything up with TensorFlow 1.0 or 1.1, where you can still do pip install tensorflow-gpu. Once you get that working, the CUDA set-up would also work if you\u2019re compiling TensorFlow. If you have an external GPU, YellowPillow's answer (or mine) might help you get things set up.
  2. \n
  3. Follow the official tutorial \u201cInstalling TensorFlow from Sources\u201d, but obviously substitute git checkout r1.0 with git checkout r1.2.\nWhen doing ./configure, pay attention to the Python library path: it sometimes suggests an incorrect one. I chose the default options in most cases, except for: Python library path, CUDA support and compute capacity. Don\u2019t use Clang as the CUDA compiler: this will lead you to an error \u201cInconsistent crosstool configuration; no toolchain corresponding to 'local_darwin' found for cpu 'darwin'.\u201d. Using /usr/bin/gcc as your compiler will actually use Clang that comes with macOS / XCode. Below is my full configuration.
  4. \n
  5. TensorFlow 1.2 expects a C library called OpenMP, which is not available in the current Apple Clang. It should speed up multithreaded TensorFlow on multi-CPU machines, but it will also compile without it. We could try to build TensorFlow with gcc 4 (which I didn\u2019t manage), or simply remove the line that includes OpenMP from the build file. In my case I commented out line 98 of tensorflow/third_party/gpus/cuda/BUILD.tpl, which contained linkopts = [\u201c-lgomp\u201d] (but the location of the line might obviously change). Some people had issues with zmuldefs, but I assume that was with earlier versions; thanks to udnaan for pointing out that it\u2019s OK to comment out these lines.
  6. \n
  7. I had some problems building with the latest bazel 0.5.3, so I reverted to using 0.4.5 that I already had installed. But some discussion in a github issue mentioned bazel 0.5.2 also didn\u2019t have the problem.
  8. \n
  9. Now build with bazel and finish the installation as instructed by the official install guide. On my 3.2 GHz iMac this took about 37 minutes.
  10. \n
\n\n
\n

Using python library path: /Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages

\n \n

Do you wish to build TensorFlow with MKL support? [y/N] N

\n \n

No MKL support will be enabled for TensorFlow

\n \n

Please specify optimization flags to use during compilation when bazel option \"--config=opt\" is specified [Default is -march=native]:

\n \n

Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]

\n \n

No Google Cloud Platform support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with Hadoop File System support? [y/N]

\n \n

No Hadoop File System support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]

\n \n

No XLA support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with VERBS support? [y/N]

\n \n

No VERBS support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with OpenCL support? [y/N]

\n \n

No OpenCL support will be enabled for TensorFlow

\n \n

Do you wish to build TensorFlow with CUDA support? [y/N] y

\n \n

CUDA support will be enabled for TensorFlow

\n \n

Do you want to use clang as CUDA compiler? [y/N]

\n \n

nvcc will be used as CUDA compiler

\n \n

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:

\n \n

Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

\n \n

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

\n \n

Please specify the cuDNN version you want to use. [Leave empty to use system default]:

\n \n

Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

\n \n

Please specify a list of comma-separated Cuda compute capabilities you want to build with.

\n \n

You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.

\n \n

Please note that each additional compute capability significantly increases your build time and binary size.

\n \n

[Default is: \"3.5,5.2\"]: 6.1

\n \n

INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.

\n \n

Configuration finished

\n
\n", + "system": "" + }, + { + "instruction": "How to load only specific weights on Keras", + "input": "", + "output": "

If your first 9 layers are consistently named between your original trained model and the new model, then you can use model.load_weights() with by_name=True. This will update weights only in the layers of your new model that have an identically named layer found in the original trained model.

\n\n

The name of the layer can be specified with the name keyword, for example:

\n\n
model.add(Dense(8, activation='relu',name='dens_1'))\n
\n", + "system": "" + }, + { + "instruction": "How do Monitored Training Sessions work?", + "input": "", + "output": "

 I can't give insights into how these classes were created, but here are a few things I think are relevant to how you could use them. 

\n\n

 tf.Session is a low-level object in the Python TensorFlow API while, as you said, tf.train.MonitoredTrainingSession comes with a lot of handy features, especially useful in most common cases. 

\n\n

Before describing some of the benefits of tf.train.MonitoredTrainingSession, let me answer the question about the graph used by the session. You can specify the tf.Graph used by the MonitoredTrainingSession by using a context manager with your_graph.as_default():

\n\n
from __future__ import print_function\nimport tensorflow as tf\n\ndef example():\n    g1 = tf.Graph()\n    with g1.as_default():\n        # Define operations and tensors in `g`.\n        c1 = tf.constant(42)\n        assert c1.graph is g1\n\n    g2 = tf.Graph()\n    with g2.as_default():\n        # Define operations and tensors in `g`.\n        c2 = tf.constant(3.14)\n        assert c2.graph is g2\n\n    # MonitoredTrainingSession example\n    with g1.as_default():\n        with tf.train.MonitoredTrainingSession() as sess:\n            print(c1.eval(session=sess))\n            # Next line raises\n            # ValueError: Cannot use the given session to evaluate tensor:\n            # the tensor's graph is different from the session's graph.\n            try:\n                print(c2.eval(session=sess))\n            except ValueError as e:\n                print(e)\n\n    # Session example\n    with tf.Session(graph=g2) as sess:\n        print(c2.eval(session=sess))\n        # Next line raises\n        # ValueError: Cannot use the given session to evaluate tensor:\n        # the tensor's graph is different from the session's graph.\n        try:\n            print(c1.eval(session=sess))\n        except ValueError as e:\n            print(e)\n\nif __name__ == '__main__':\n    example()\n
\n\n

 So, as you said, the benefit of using MonitoredTrainingSession is that this object takes care of 

\n\n\n\n

 but it also has the benefit of making your code easy to distribute, since it works differently depending on whether you specified the running process as a master or not. 

\n\n

For example you could run something like:

\n\n
def run_my_model(train_op, session_args):\n    with tf.train.MonitoredTrainingSession(**session_args) as sess:\n        sess.run(train_op)\n
\n\n

that you would call in a non-distributed way:

\n\n
 run_my_model(train_op, {})\n
    
\n\n

or in a distributed way (see the distributed doc for more information on the inputs):

\n\n
run_my_model(train_op, {\"master\": server.target,\n                        \"is_chief\": (FLAGS.task_index == 0)})\n
\n\n

 On the other hand, the benefit of using the raw tf.Session object is that you don't carry the extra machinery of tf.train.MonitoredTrainingSession, which can be useful if you don't plan to use those features or if you want more control (for example over how the queues are started). 

\n\n

 EDIT (as per comment):\nFor the op initialisation, you would have to do something like this (cf. the official doc): 

\n\n
 # Define your graph and your ops\ninit_op = tf.global_variables_initializer()\nwith tf.Session() as sess:\n sess.run(init_op)\n sess.run(your_graph_ops,...)\n
    
\n\n

For the QueueRunner, I would refer you to the official doc where you will find more complete examples.

\n\n

EDIT2:

\n\n

 The main concept to understand in order to get a sense of how tf.train.MonitoredTrainingSession works is the _WrappedSession class: 

\n\n
\n

This wrapper is used as a base class for various session wrappers\n that provide additional functionality such as monitoring, coordination,\n and recovery.

\n
\n\n

The tf.train.MonitoredTrainingSession works (as of version 1.1) this way:

\n\n\n\n

 In conclusion, the tf.train.MonitoredTrainingSession avoids a lot of boilerplate code while being easily extendable with the hooks mechanism. 

\n", + "system": "" + }, + { + "instruction": "Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef?", + "input": "", + "output": "

A checkpoint contains the value of (some of the) variables in a TensorFlow model. It is created by a Saver, which is either given specific Variables to save, or by default saves all (non-local) Variables.

\n\n

To use a checkpoint, you need to have a compatible TensorFlow Graph, whose Variables have the same names as the Variables in the checkpoint. (If you don't have a compatible Graph, you can still load the values stored in a checkpoint into selected Variables using the init_from_checkpoint utilities in contrib.)

\n\n

SavedModel is much more comprehensive: It contains a set of Graphs (MetaGraphs, in fact, saving collections and such), as well as a checkpoint which is supposed to be compatible with these Graphs, and any asset files that are needed to run the model (e.g. Vocabulary files). For each MetaGraph it contains, it also stores a set of signatures. Signatures define (named) input and output tensors.

\n\n

This means that given only a SavedModel, you can write tools (such as tensorflow/serving, or the new saved_model command line utility that will appear in tools/ shortly) that interpret or execute the graphs inside. All you have to provide is the data.

\n\n

If in doubt, I would always err on the side of writing a SavedModel, not just a checkpoint. Not only does this allow you to use tensorflow/serving (and other neat utilities that will grow in number), it makes sure that you have all the information necessary to run the model. Nothing is more frustrating than a checkpoint you cannot use any more because you modified your model and now it is incompatible with checkpoint files and all you want to do is run some predictions through it for comparison.

\n", + "system": "" + }, + { + "instruction": "Convert TensorFlow string to python string", + "input": "", + "output": "

In tensorflow 2.0.0, it can be done in the following way:

\n\n
import tensorflow as tf\n\nmy_str = tf.constant('Hello World')\nmy_str_npy = my_str.numpy()\n\nprint(my_str_npy)\ntype(my_str_npy)\n
\n\n

 This converts a string tensor into a Python bytes object. 

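 Since .numpy() on a string tensor yields bytes rather than str, decoding is one extra step, sketched here in plain Python (the raw value below stands in for what .numpy() returns): ```python # my_str.numpy() returns a bytes object like this one. raw = b'Hello World' text = raw.decode('utf-8') # bytes -> str assert isinstance(text, str) assert text == 'Hello World' ```   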
\n", + "system": "" + }, + { + "instruction": "'module' object has no attribute 'SummaryWriter'", + "input": "", + "output": "

 tf.train.SummaryWriter is deprecated; instead, use tf.summary.FileWriter. 

\n\n

\u21b3 Adding Summaries to Event Files

\n\n
\n

It will be removed after 2016-11-30.\n Instructions for updating: Please switch to tf.summary.FileWriter. \n The interface and behavior is the same; this is just a rename.

\n
\n\n

<TF Official Migration Page> \u2733\ufe0e includes all current deprecated/renamed functions \u2733\ufe0e

\n", + "system": "" + }, + { + "instruction": "Understanding tf.extract_image_patches for extracting patches from an image", + "input": "", + "output": "

Here is how the method works:

\n\n

Here is some sample code with output to help demonstrate how it works:

\n
import tensorflow as tf\n\nn = 10\n# images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 in order\nimages = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]\n\n# We generate four outputs as follows:\n# 1. 3x3 patches with stride length 5\n# 2. Same as above, but the rate is increased to 2\n# 3. 4x4 patches with stride length 7; only one patch should be generated\n# 4. Same as above, but with padding set to 'SAME'\nwith tf.Session() as sess:\n  print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 2, 2, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n'\n  print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='SAME').eval()\n
\n

Output:

\n
[[[[ 1  2  3 11 12 13 21 22 23]\n   [ 6  7  8 16 17 18 26 27 28]]\n\n  [[51 52 53 61 62 63 71 72 73]\n   [56 57 58 66 67 68 76 77 78]]]]\n\n\n[[[[  1   3   5  21  23  25  41  43  45]\n   [  6   8  10  26  28  30  46  48  50]]\n\n  [[ 51  53  55  71  73  75  91  93  95]\n   [ 56  58  60  76  78  80  96  98 100]]]]\n\n\n[[[[ 1  2  3  4 11 12 13 14 21 22 23 24 31 32 33 34]]]]\n\n\n[[[[  1   2   3   4  11  12  13  14  21  22  23  24  31  32  33  34]\n   [  8   9  10   0  18  19  20   0  28  29  30   0  38  39  40   0]]\n\n  [[ 71  72  73  74  81  82  83  84  91  92  93  94   0   0   0   0]\n   [ 78  79  80   0  88  89  90   0  98  99 100   0   0   0   0   0]]]]\n
\n

So, for example, our first result looks like the following:

\n
 *  *  *  4  5  *  *  *  9 10 \n *  *  * 14 15  *  *  * 19 20 \n *  *  * 24 25  *  *  * 29 30 \n31 32 33 34 35 36 37 38 39 40 \n41 42 43 44 45 46 47 48 49 50 \n *  *  * 54 55  *  *  * 59 60 \n *  *  * 64 65  *  *  * 69 70 \n *  *  * 74 75  *  *  * 79 80 \n81 82 83 84 85 86 87 88 89 90 \n91 92 93 94 95 96 97 98 99 100 \n
\n

As you can see, we have 2 rows and 2 columns worth of patches, which are what out_rows and out_cols are.

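 The first result above can be double-checked with plain numpy slicing (a sketch of the patch arithmetic, not of the op itself): ```python import numpy as np n = 10 img = np.arange(1, n * n + 1).reshape(n, n) # the numbers 1..100 # 3x3 patches with stride 5: the top-left patch starts at (0, 0), # and the next patch along the row starts at column 5. patch_00 = img[0:3, 0:3].ravel() patch_01 = img[0:3, 5:8].ravel() assert patch_00.tolist() == [1, 2, 3, 11, 12, 13, 21, 22, 23] assert patch_01.tolist() == [6, 7, 8, 16, 17, 18, 26, 27, 28] ```   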
\n", + "system": "" + }, + { + "instruction": "How do you get the name of the tensorflow output nodes in a Keras Model?", + "input": "", + "output": "

You can use Keras model.summary() to get the name of the last layer.

\n\n

If model.outputs is not empty you can get the node names via:

\n\n
[node.op.name for node in model.outputs]\n
\n\n

you get the session via

\n\n
session = keras.backend.get_session()\n
\n\n

and you convert all training variables to consts via

\n\n
min_graph = convert_variables_to_constants(session, session.graph_def, [node.op.name for node in model.outputs])\n
\n\n

after that you can write a protobuf-file via

\n\n
tensorflow.train.write_graph(min_graph, \"/logdir/\", \"file.pb\", as_text=True)\n
\n", + "system": "" + }, + { + "instruction": "In TensorFlow, how can I get nonzero values and their indices from a tensor with python?", + "input": "", + "output": "

 You can achieve the same result in TensorFlow using the not_equal and where methods. 

\n\n
zero = tf.constant(0, dtype=tf.float32)\nwhere = tf.not_equal(A, zero)\n
\n\n

where is a tensor of the same shape as A holding True or False, in the following case

\n\n
[[True, False],\n [False, True]]\n
\n\n

 This would be sufficient to select zero or non-zero elements from A. If you want to obtain the indices, you can use the where method as follows: 

\n\n
indices = tf.where(where)\n
\n\n

 The where tensor has two True values, so the indices tensor will have two entries. The where tensor has rank two, so each entry will have two indices: 

\n\n
[[0, 0],\n [1, 1]]\n
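 The numpy analogue of this pattern, for comparison (np.argwhere plays the role of tf.where, and boolean indexing extracts the nonzero values themselves): ```python import numpy as np A = np.array([[3.0, 0.0], [0.0, 4.0]]) where = A != 0 # boolean mask, like tf.not_equal(A, zero) indices = np.argwhere(where) # indices of True entries, like tf.where(where) values = A[where] # the nonzero values themselves assert where.tolist() == [[True, False], [False, True]] assert indices.tolist() == [[0, 0], [1, 1]] assert values.tolist() == [3.0, 4.0] ```   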
\n", + "system": "" + }, + { + "instruction": "TensorFlow - Read all examples from a TFRecords at once?", + "input": "", + "output": "

Just for clarity, I have a few thousand images in a single .tfrecords file, they're 720 by 720 rgb png files. The labels are one of 0,1,2,3.

\n\n

 I also tried using parse_example and couldn't make it work, but this solution works with parse_single_example. 

\n\n

 The downside is that right now I have to know how many items are in each .tfrecords file, which is kind of a bummer. If I find a better way, I'll update the answer. Also, be careful about going out of bounds of the number of records in the .tfrecords file; it will start over at the first record if you loop past the last one. 

\n\n

The trick was to have the queue runner use a coordinator.

\n\n

I left some code in here to save the images as they're being read in so that you can verify the image is correct.

\n\n
from PIL import Image\nimport numpy as np\nimport tensorflow as tf\n\ndef read_and_decode(filename_queue):\n reader = tf.TFRecordReader()\n _, serialized_example = reader.read(filename_queue)\n features = tf.parse_single_example(\n  serialized_example,\n  # Defaults are not specified since both keys are required.\n  features={\n      'image_raw': tf.FixedLenFeature([], tf.string),\n      'label': tf.FixedLenFeature([], tf.int64),\n      'height': tf.FixedLenFeature([], tf.int64),\n      'width': tf.FixedLenFeature([], tf.int64),\n      'depth': tf.FixedLenFeature([], tf.int64)\n  })\n image = tf.decode_raw(features['image_raw'], tf.uint8)\n label = tf.cast(features['label'], tf.int32)\n height = tf.cast(features['height'], tf.int32)\n width = tf.cast(features['width'], tf.int32)\n depth = tf.cast(features['depth'], tf.int32)\n return image, label, height, width, depth\n\n\ndef get_all_records(FILE):\n with tf.Session() as sess:\n   filename_queue = tf.train.string_input_producer([ FILE ])\n   image, label, height, width, depth = read_and_decode(filename_queue)\n   image = tf.reshape(image, tf.pack([height, width, 3]))\n   image.set_shape([720,720,3])\n   init_op = tf.initialize_all_variables()\n   sess.run(init_op)\n   coord = tf.train.Coordinator()\n   threads = tf.train.start_queue_runners(coord=coord)\n   for i in range(2053):\n     example, l = sess.run([image, label])\n     img = Image.fromarray(example, 'RGB')\n     img.save( \"output/\" + str(i) + '-train.png')\n\n     print (example,l)\n   coord.request_stop()\n   coord.join(threads)\n\nget_all_records('/path/to/train-0.tfrecords')\n
\n", + "system": "" + }, + { + "instruction": "Working with multiple graphs in TensorFlow", + "input": "", + "output": "

Your product is a global variable, and you've set it to point to \"g2/MatMul\".

\n\n

In particular

\n\n

Try

\n\n
print product\n
\n\n

and you'll see

\n\n
Tensor(\"g2/MatMul:0\", shape=(1, 1), dtype=float32)\n
\n\n

 So the system takes \"g2/MatMul:0\" since that's the Tensor's name, and tries to find it in the graph g1 since that's the graph you set for the session. Incidentally, you can see all nodes in the graph with print [n.name for n in g1.as_graph_def().node]. 

\n\n

Generally, using more than one graph is rarely useful. You can't merge them and can't pass tensors between them. I'd recommend just doing

\n\n
tf.reset_default_graph()\na = tf.constant(2)\nsess = tf.InteractiveSession()\n....\n
\n\n

This way you'll have one default graph and one default session, and you can omit specifying the graph or session in most cases. If you ever need to refer to them explicitly, you can get them from tf.get_default_graph() or tf.get_default_session().

\n", + "system": "" + }, + { + "instruction": "Clarification on tf.Tensor.set_shape()", + "input": "", + "output": "

As far as I know (and I wrote that code), there isn't a bug in Tensor.set_shape(). I think the misunderstanding stems from the confusing name of that method.

\n\n

To elaborate on the FAQ entry you quoted, Tensor.set_shape() is a pure-Python function that improves the shape information for a given tf.Tensor object. By \"improves\", I mean \"makes more specific\".

\n\n

Therefore, when you have a Tensor object t with shape (?,), that is a one-dimensional tensor of unknown length. You can call t.set_shape((1028178,)), and then t will have shape (1028178,) when you call t.get_shape(). This doesn't affect the underlying storage, or indeed anything on the backend: it merely means that subsequent shape inference using t can rely on the assertion that it is a vector of length 1028178.

\n\n

If t has shape (?,), a call to t.set_shape((478, 717, 3)) will fail, because TensorFlow already knows that t is a vector, so it cannot have shape (478, 717, 3). If you want to make a new Tensor with that shape from the contents of t, you can use reshaped_t = tf.reshape(t, (478, 717, 3)). This creates a new tf.Tensor object in Python; the actual implementation of tf.reshape() does this using a shallow copy of the tensor buffer, so it is inexpensive in practice.

\n\n
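The "make more specific" merging rule can be sketched in plain Python (a hypothetical helper for illustration, not TensorFlow's actual implementation), with None standing in for an unknown dimension:

```python
def merge_shapes(old, new):
    """Refine `old` with `new`; None means an unknown dimension.

    Mirrors the idea behind Tensor.set_shape(): the new shape may only
    make the old one more specific, never contradict it.
    """
    if len(old) != len(new):
        raise ValueError("rank mismatch: %r vs %r" % (old, new))
    merged = []
    for o, n in zip(old, new):
        if o is None:
            merged.append(n)           # unknown -> refined
        elif n is None or o == n:
            merged.append(o)           # already known, kept
        else:
            raise ValueError("incompatible dims: %r vs %r" % (o, n))
    return tuple(merged)

print(merge_shapes((None,), (1028178,)))  # (1028178,)
```

whereas merge_shapes((None,), (478, 717, 3)) raises, just as t.set_shape((478, 717, 3)) fails on a tensor already known to be a vector.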

One analogy is that Tensor.set_shape() is like a run-time cast in an object-oriented language like Java. For example, if you have a pointer to an Object but know that, in fact, it is a String, you might do the cast (String) obj in order to pass obj to a method that expects a String argument. However, if you have a String s and try to cast it to a java.util.Vector, the compiler will give you an error, because these two types are unrelated.

\n", + "system": "" + }, + { + "instruction": "Multilabel Text Classification using TensorFlow", + "input": "", + "output": "

Change the relu of the output layer to sigmoid.\nModify the cross-entropy loss to the explicit mathematical formula of the sigmoid cross-entropy loss (the explicit loss was working in my case/version of tensorflow).

\n\n
import tensorflow as tf\n\n# hidden Layer\nclass HiddenLayer(object):\n    def __init__(self, input, n_in, n_out):\n        self.input = input\n\n        w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05))\n        b_h = tf.Variable(tf.zeros([n_out]))\n\n        self.w = w_h\n        self.b = b_h\n        self.params = [self.w, self.b]\n\n    def output(self):\n        linarg = tf.matmul(self.input, self.w) + self.b\n        self.output = tf.nn.relu(linarg)\n\n        return self.output\n\n# output Layer\nclass OutputLayer(object):\n    def __init__(self, input, n_in, n_out):\n        self.input = input\n\n        w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05))\n        b_o = tf.Variable(tf.zeros([n_out]))\n\n        self.w = w_o\n        self.b = b_o\n        self.params = [self.w, self.b]\n\n    def output(self):\n        linarg = tf.matmul(self.input, self.w) + self.b\n        #changed relu to sigmoid\n        self.output = tf.nn.sigmoid(linarg)\n\n        return self.output\n\n# model\ndef model():\n    h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000)\n    o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000)\n\n    # loss function\n    out = o_layer.output()\n    # modified cross entropy to explicit mathematical formula of sigmoid cross entropy loss\n    cross_entropy = -tf.reduce_sum( (  (y_*tf.log(out + 1e-9)) + ((1-y_) * tf.log(1 - out + 1e-9)) )  , name='xentropy' )    \n\n    # regularization\n    l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w))\n    lambda_2 = 0.01\n\n    # compute loss\n    loss = cross_entropy + lambda_2 * l2\n\n    # compute accuracy for single label classification task\n    correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_pred, \"float\"))\n\n    return loss, accuracy\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'to_categorical' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py)", + "input": "", + "output": "
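To see what the explicit sigmoid cross-entropy formula computes, here is a minimal pure-Python version of it (illustrative only, using the same 1e-9 stabilizer; it is not the training code above):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_cross_entropy(logits, targets, eps=1e-9):
    """-sum(y*log(p) + (1-y)*log(1-p)) over all labels, p = sigmoid(logit)."""
    total = 0.0
    for z, y in zip(logits, targets):
        p = sigmoid(z)
        total -= y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps)
    return total

# Confident, correct predictions give a loss near zero; wrong ones a large loss.
good = sigmoid_cross_entropy([10.0, -10.0], [1, 0])
bad = sigmoid_cross_entropy([-10.0, 10.0], [1, 0])
print(good < bad)  # True
```

Because each label gets its own independent sigmoid term, several labels can be "on" at once, which is exactly what multilabel classification needs.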

use this

\n
from tensorflow.keras.utils import to_categorical\n
\n

instead of

\n

from keras.utils import to_categorical

\n", + "system": "" + }, + { + "instruction": "Extract target from Tensorflow PrefetchDataset", + "input": "", + "output": "

You can convert it to a list with list(ds) and then recompile it as a normal Dataset with tf.data.Dataset.from_tensor_slices(list(ds)). From there your nightmare begins again but at least it's a nightmare that other people have had before.

\n

Note that for more complex datasets (e.g. nested dictionaries) you will need more preprocessing after calling list(ds), but this should work for the example you asked about.

\n

This is far from a satisfying answer but unfortunately the class is entirely undocumented and none of the standard Dataset tricks work.

\n", + "system": "" + }, + { + "instruction": "Should I use @tf.function for all functions?", + "input": "", + "output": "

TLDR: It depends on your function and whether you are in production or development. Don't use tf.function if you want to be able to debug your function easily, or if it falls under the limitations of AutoGraph or tf.v1 code compatibility.\nI would highly recommend watching the Inside TensorFlow talks about AutoGraph and Functions, not Sessions.

\n\n

In the following I'll break down the reasons, which are all taken from information made available online by Google.

\n\n

In general, the tf.function decorator causes a function to be compiled as a callable that executes a TensorFlow graph. This entails:

\n\n\n\n

There is detailed information available on the design ideas behind this.

\n\n

Benefits of decorating a function with tf.function

\n\n

General benefits

\n\n\n\n

For functions with Python code / Using AutoGraph via tf.function decoration

\n\n

If you want to use AutoGraph, using tf.function is highly recommended over calling AutoGraph directly.\nReasons for this include: Automatic control dependencies, it is required for some APIs, more caching, and exception helpers (Source).

\n\n

Drawbacks of decorating a function with tf.function

\n\n

General drawbacks

\n\n\n\n

For functions with Python code / Using AutoGraph via tf.function decoration

\n\n\n\n

Detailed information on AutoGraph limitations is available.

\n\n

For functions with tf.v1 code

\n\n\n\n

For functions with tf.v2 code

\n\n\n\n

Examples of limitations

\n\n

Creating variables more than once

\n\n

It is not allowed to create variables more than once, such as v in the following example:

\n\n
@tf.function\ndef f(x):\n    v = tf.Variable(1)\n    return tf.add(x, v)\n\nf(tf.constant(2))\n\n# => ValueError: tf.function-decorated function tried to create variables on non-first call.\n
\n\n

In the following code, this is mitigated by making sure that self.v is only created once:

\n\n
class C(object):\n    def __init__(self):\n        self.v = None\n    @tf.function\n    def f(self, x):\n        if self.v is None:\n            self.v = tf.Variable(1)\n        return tf.add(x, self.v)\n\nc = C()\nprint(c.f(tf.constant(2)))\n\n# => tf.Tensor(3, shape=(), dtype=int32)\n
\n\n

Hidden side effects not captured by AutoGraph

\n\n

Changes such as to self.a in this example can't be hidden, which leads to an error since cross-function analysis is not done (yet) (Source):

\n\n
class C(object):\n    def change_state(self):\n        self.a += 1\n\n    @tf.function\n    def f(self):\n        self.a = tf.constant(0)\n        if tf.constant(True):\n            self.change_state() # Mutation of self.a is hidden\n        tf.print(self.a)\n\nx = C()\nx.f()\n\n# => InaccessibleTensorError: The tensor 'Tensor(\"add:0\", shape=(), dtype=int32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=cond_true_5, id=5477800528); accessed from: FuncGraph(name=f, id=5476093776).\n
\n\n

Changes in plain sight are no problem:

\n\n
class C(object):\n    @tf.function\n    def f(self):\n        self.a = tf.constant(0)\n        if tf.constant(True):\n            self.a += 1 # Mutation of self.a is in plain sight\n        tf.print(self.a)\n\nx = C()\nx.f()\n\n# => 1\n
\n\n

Example of limitation due to TF control flow

\n\n

This if statement leads to an error because the value for else needs to be defined for TF control flow:

\n\n
@tf.function\ndef f(a, b):\n    if tf.greater(a, b):\n        return tf.constant(1)\n\n# If a <= b would return None\nx = f(tf.constant(3), tf.constant(2))   \n\n# => ValueError: A value must also be returned from the else branch. If a value is returned from one branch of a conditional a value must be returned from all branches.\n
\n", + "system": "" + }, + { + "instruction": "How to use Model.fit which supports generators (after fit_generator deprecation)", + "input": "", + "output": "

Model.fit_generator is deprecated starting from tensorflow 2.1.0, which currently is in rc1.\nYou can find the documentation for tf-2.1.0-rc1 here: https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/keras/Model#fit

\n\n

As you can see the first argument of the Model.fit can take a generator so just pass it your generator.

\n", + "system": "" + }, + { + "instruction": "How to train a model in nodejs (tensorflow.js)?", + "input": "", + "output": "
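In case a sketch helps: model.fit only needs something that yields (inputs, targets) batches. A framework-free batching generator (the name make_batches is made up for illustration) could look like this; in real code you would pass such a generator straight to model.fit:

```python
def make_batches(features, labels, batch_size):
    """Yield (features_batch, labels_batch) tuples; the last batch may be smaller."""
    for start in range(0, len(features), batch_size):
        yield (features[start:start + batch_size],
               labels[start:start + batch_size])

xs = list(range(10))
ys = [x % 2 for x in xs]
batches = list(make_batches(xs, ys, batch_size=4))
print([len(bx) for bx, _ in batches])  # [4, 4, 2]
```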

First of all, the images need to be converted to tensors. The first approach would be to create a tensor containing all the features (respectively a tensor containing all the labels). This should be the way to go only if the dataset contains few images.

\n
  const imageBuffer = await fs.readFile(feature_file);\n  tensorFeature = tfnode.node.decodeImage(imageBuffer) // create a tensor for the image\n\n  // create an array of all the features\n  // by iterating over all the images\n  tensorFeatures = tf.stack([tensorFeature, tensorFeature2, tensorFeature3])\n
\n

The labels would be an array indicating the type of each image

\n
 labelArray = [0, 1, 2] // maybe 0 for dog, 1 for cat and 2 for birds\n
\n

One now needs to create a one-hot encoding of the labels

\n
 tensorLabels = tf.oneHot(tf.tensor1d(labelArray, 'int32'), 3);\n
\n

Once the tensors are created, one needs to create the model for training. Here is a simple model.

\n
const model = tf.sequential();\nmodel.add(tf.layers.conv2d({\n  inputShape: [height, width, numberOfChannels], // numberOfChannels = 3 for colorful images and one otherwise\n  filters: 32,\n  kernelSize: 3,\n  activation: 'relu',\n}));\nmodel.add(tf.layers.flatten());\nmodel.add(tf.layers.dense({units: 3, activation: 'softmax'}));\n
\n

Then the model can be trained

\n
model.fit(tensorFeatures, tensorLabels)\n
\n

If the dataset contains a lot of images, one would need to create a tfDataset instead. This answer discusses why.

\n
const genFeatureTensor = async image => { // async, since fs.readFile is awaited\n      const imageBuffer = await fs.readFile(image);\n      return tfnode.node.decodeImage(imageBuffer)\n}\n\nconst labelArray = indice => Array.from({length: numberOfClasses}, (_, k) => k === indice ? 1 : 0)\n\nfunction* dataGenerator() {\n  const numElements = numberOfImages;\n  let index = 0;\n  while (index < numElements) {\n    const feature = genFeatureTensor(imagePath);\n    const label = tf.tensor1d(labelArray(classImageIndex))\n    index++;\n    yield {xs: feature, ys: label};\n  }\n}\n\nconst ds = tf.data.generator(dataGenerator).batch(1) // specify an appropriate batchsize;\n
\n

And use model.fitDataset(ds) to train the model

\n
\n

The above is for training in nodejs. To do such processing in the browser, genFeatureTensor can be written as follows:

\n
function loadImage(url){\n  return new Promise((resolve, reject) => {\n    const im = new Image()\n        im.crossOrigin = 'anonymous'\n        im.src = url\n        im.onload = () => {\n          resolve(im)\n        }\n   })\n}\n\ngenFeatureTensor = async image => {\n  const img = await loadImage(image);\n  return tf.browser.fromPixels(img);\n}\n
\n

One word of caution is that doing heavy processing might block the main thread in the browser. This is where web workers come into play.

\n", + "system": "" + }, + { + "instruction": "ImportError: Could not find 'cudart64_100.dll", + "input": "", + "output": "

The simplest fix is to install the latest \u2018NVIDIA GPU Computing Toolkit\u2019, because if it's not there, you'll be missing the 'cudart64_100.dll' library.\nThe only issue is that the latest copy of CUDA has this particular library upgraded to 'cudart64_101.dll', while the latest TensorFlow still requires the older 'cudart64_100.dll'.\nAnyways, one way to deal with this issue is to install both the latest CUDA and CUDA 10.0 from September 2018, and then copy the 'cudart64_100.dll' library from the old install into the new one.

\n\n

Or just visit my site where I linked the 'cudart64_100.dll' library downloaded from the CUDA Toolkit 10.0 (Sept 2018), to make it easier to copy it into the latest CUDA directory.

\n\n

Here are some screenshots to illustrate the process: https://www.joe0.com/2019/10/19/how-resolve-tensorflow-2-0-error-could-not-load-dynamic-library-cudart64_100-dll-dlerror-cudart64_100-dll-not-found/

\n", + "system": "" + }, + { + "instruction": "ValueError: Duplicate plugins for name projector", + "input": "", + "output": "

If you have two versions of tensorboard installed on your system, you need to uninstall one of them.

\n

I was stuck on this for hours, but I finally fixed it using:

\n

Worked like a charm:\nhttps://github.com/pytorch/pytorch/issues/22676

\n
pip uninstall tb-nightly tensorboardX tensorboard\npip install tensorboard\n
\n", + "system": "" + }, + { + "instruction": "How to quantize all nodes except a particular one?", + "input": "", + "output": "

EDIT: the previous answer referred to Tensorflow Lite code. I updated it to refer to Tensorflow.

\n

Looking at the implementation of Tensorflow's quantize_weights, these are the instances where weights don't get quantized:

\n
    \n
  1. tensor that is not type float
  2. \n
  3. tensor that has fewer than 1024 weights (or another number specified by the parameter minimum_size)
  4. \n
\n

If you are able to modify nodes in the graph so that they are excluded by one of the above rules, then quantize, then revert the nodes to the pre-quantized state, you might be able to do this.

\n", + "system": "" + }, + { + "instruction": "how to store numpy arrays as tfrecord?", + "input": "", + "output": "
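As a rough summary, the two exclusion rules above amount to a predicate like the following (the names here are made up for illustration; the real logic lives inside TensorFlow's quantize_weights implementation):

```python
def would_quantize(dtype, num_weights, minimum_size=1024):
    """Mirror the two rules above: only float tensors with enough weights
    get quantized; everything else is left untouched."""
    return dtype == "float" and num_weights >= minimum_size

print(would_quantize("float", 2048))  # True
print(would_quantize("int32", 2048))  # False: not a float tensor
print(would_quantize("float", 10))    # False: fewer than minimum_size weights
```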

The function _float_feature described in the Tensorflow-Guide expects a scalar (either float32 or float64) as input.

\n\n
def _float_feature(value):\n  \"\"\"Returns a float_list from a float / double.\"\"\"\n  return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))\n
\n\n

As you can see the inputted scalar is written into a list (value=[value]) which is subsequently given to tf.train.FloatList as input. tf.train.FloatList expects an iterator that outputs a float in each iteration (as the list does).

\n\n

If your feature is not a scalar but a vector, _float_feature can be rewritten to pass the iterator directly to tf.train.FloatList (instead of putting it into a list first).

\n\n
def _float_array_feature(value):\n  return tf.train.Feature(float_list=tf.train.FloatList(value=value))\n
\n\n

However, if your feature has two or more dimensions, this solution does not work anymore. As @mmry described in his answer, in this case flattening your feature or splitting it into several one-dimensional features would be a solution. The disadvantage of these two approaches is that the information about the actual shape of the feature is lost if no further effort is invested.

\n\n

Another possibility to write an example message for a higher dimensional array is to convert the array into a byte string and then use the _bytes_feature function described in the Tensorflow-Guide to write the example message for it. The example message is then serialized and written into a TFRecord file.

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef _bytes_feature(value):\n    \"\"\"Returns a bytes_list from a string / byte.\"\"\"\n    if isinstance(value, type(tf.constant(0))): # if value ist tensor\n        value = value.numpy() # get value of tensor\n    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n\ndef serialize_array(array):\n  array = tf.io.serialize_tensor(array)\n  return array\n\n\n#----------------------------------------------------------------------------------\n# Create example data\narray_blueprint = np.arange(4, dtype='float64').reshape(2,2)\narrays = [array_blueprint+1, array_blueprint+2, array_blueprint+3]\n\n#----------------------------------------------------------------------------------\n# Write TFrecord file\nfile_path = 'data.tfrecords'\nwith tf.io.TFRecordWriter(file_path) as writer:\n  for array in arrays:\n    serialized_array = serialize_array(array)\n    feature = {'b_feature': _bytes_feature(serialized_array)}\n    example_message = tf.train.Example(features=tf.train.Features(feature=feature))\n    writer.write(example_message.SerializeToString())\n
\n\n

The serialized example messages stored in the TFRecord file can be accessed via tf.data.TFRecordDataset. After the example messages have been parsed, the original array needs to be restored from the byte string it was converted to. This is possible via tf.io.parse_tensor.

\n\n
# Read TFRecord file\ndef _parse_tfr_element(element):\n  parse_dic = {\n    'b_feature': tf.io.FixedLenFeature([], tf.string), # Note that it is tf.string, not tf.float32\n    }\n  example_message = tf.io.parse_single_example(element, parse_dic)\n\n  b_feature = example_message['b_feature'] # get byte string\n  feature = tf.io.parse_tensor(b_feature, out_type=tf.float64) # restore 2D array from byte string\n  return feature\n\n\ntfr_dataset = tf.data.TFRecordDataset('data.tfrecords') \nfor serialized_instance in tfr_dataset:\n  print(serialized_instance) # print serialized example messages\n\ndataset = tfr_dataset.map(_parse_tfr_element)\nfor instance in dataset:\n  print()\n  print(instance) # print parsed example messages with restored arrays\n
\n", + "system": "" + }, + { + "instruction": "Create keras callback to save model predictions and targets for each batch during training", + "input": "", + "output": "\n\n

NOTE: this answer is outdated and only works with TF1. Check @bers's answer for a solution tested on TF2.

\n\n
\n\n

After model compilation, the placeholder tensor for y_true is in model.targets and y_pred is in model.outputs.

\n\n

To save the values of these placeholders at each batch, you can:

\n\n
    \n
  1. First copy the values of these tensors into variables.
  2. \n
  3. Evaluate these variables in on_batch_end, and store the resulting arrays.
  4. \n
\n\n

Now step 1 is a bit involved, because you'll have to add a tf.assign op to the training function model.train_function. Using the current Keras API, this can be done by providing a fetches argument to K.function() when the training function is constructed.

\n\n

In model._make_train_function(), there's a line:

\n\n
self.train_function = K.function(inputs,\n                                 [self.total_loss] + self.metrics_tensors,\n                                 updates=updates,\n                                 name='train_function',\n                                 **self._function_kwargs)\n
\n\n

The fetches argument containing the tf.assign ops can be provided via model._function_kwargs (only works after Keras 2.1.0).

\n\n

As an example:

\n\n
from keras.layers import Dense\nfrom keras.models import Sequential\nfrom keras.callbacks import Callback\nfrom keras import backend as K\nimport tensorflow as tf\nimport numpy as np\n\nclass CollectOutputAndTarget(Callback):\n    def __init__(self):\n        super(CollectOutputAndTarget, self).__init__()\n        self.targets = []  # collect y_true batches\n        self.outputs = []  # collect y_pred batches\n\n        # the shape of these 2 variables will change according to batch shape\n        # to handle the \"last batch\", specify `validate_shape=False`\n        self.var_y_true = tf.Variable(0., validate_shape=False)\n        self.var_y_pred = tf.Variable(0., validate_shape=False)\n\n    def on_batch_end(self, batch, logs=None):\n        # evaluate the variables and save them into lists\n        self.targets.append(K.eval(self.var_y_true))\n        self.outputs.append(K.eval(self.var_y_pred))\n\n# build a simple model\n# have to compile first for model.targets and model.outputs to be prepared\nmodel = Sequential([Dense(5, input_shape=(10,))])\nmodel.compile(loss='mse', optimizer='adam')\n\n# initialize the variables and the `tf.assign` ops\ncbk = CollectOutputAndTarget()\nfetches = [tf.assign(cbk.var_y_true, model.targets[0], validate_shape=False),\n           tf.assign(cbk.var_y_pred, model.outputs[0], validate_shape=False)]\nmodel._function_kwargs = {'fetches': fetches}  # use `model._function_kwargs` if using `Model` instead of `Sequential`\n\n# fit the model and check results\nX = np.random.rand(10, 10)\nY = np.random.rand(10, 5)\nmodel.fit(X, Y, batch_size=8, callbacks=[cbk])\n
\n\n

Unless the number of samples can be divided by the batch size, the final batch will have a different size than other batches. So K.variable() and K.update() can't be used in this case. You'll have to use tf.Variable(..., validate_shape=False) and tf.assign(..., validate_shape=False) instead.

\n\n
\n\n

To verify the correctness of the saved arrays, you can add one line in training.py to print out the shuffled index array:

\n\n
if shuffle == 'batch':\n    index_array = _batch_shuffle(index_array, batch_size)\nelif shuffle:\n    np.random.shuffle(index_array)\n\nprint('Index array:', repr(index_array))  # Add this line\n\nbatches = _make_batches(num_train_samples, batch_size)\n
\n\n

The shuffled index array should be printed out during fitting:

\n\n
\nEpoch 1/1\nIndex array: array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2])\n10/10 [==============================] - 0s 23ms/step - loss: 0.5670\n
\n\n

And you can check if cbk.targets is the same as Y[index_array]:

\n\n
index_array = np.array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2])\nprint(Y[index_array])\n[[ 0.75325592  0.64857277  0.1926653   0.7642865   0.38901153]\n [ 0.77567689  0.13573623  0.4902501   0.42897559  0.55825652]\n [ 0.33760938  0.68195038  0.12303088  0.83509441  0.20991668]\n [ 0.98367778  0.61325065  0.28973401  0.28734073  0.93399794]\n [ 0.26097574  0.88219054  0.87951941  0.64887846  0.41996446]\n [ 0.97794604  0.91307569  0.93816428  0.2125808   0.94381495]\n [ 0.74813435  0.08036688  0.38094272  0.83178364  0.16713736]\n [ 0.52609421  0.39218962  0.21022047  0.58569125  0.08012982]\n [ 0.61276627  0.20679494  0.24124858  0.01262245  0.0994412 ]\n [ 0.6026137   0.25620512  0.7398164   0.52558182  0.09955769]]\n\nprint(cbk.targets)\n[array([[ 0.7532559 ,  0.64857274,  0.19266529,  0.76428652,  0.38901153],\n        [ 0.77567691,  0.13573623,  0.49025011,  0.42897558,  0.55825651],\n        [ 0.33760938,  0.68195039,  0.12303089,  0.83509439,  0.20991668],\n        [ 0.9836778 ,  0.61325067,  0.28973401,  0.28734073,  0.93399793],\n        [ 0.26097575,  0.88219053,  0.8795194 ,  0.64887846,  0.41996446],\n        [ 0.97794604,  0.91307569,  0.93816429,  0.2125808 ,  0.94381493],\n        [ 0.74813437,  0.08036689,  0.38094273,  0.83178365,  0.16713737],\n        [ 0.5260942 ,  0.39218962,  0.21022047,  0.58569127,  0.08012982]], dtype=float32),\n array([[ 0.61276627,  0.20679495,  0.24124858,  0.01262245,  0.0994412 ],\n        [ 0.60261369,  0.25620511,  0.73981643,  0.52558184,  0.09955769]], dtype=float32)]\n
\n\n

As you can see, there are two batches in cbk.targets (one \"full batch\" of size 8 and the final batch of size 2), and the row order is the same as Y[index_array].

\n", + "system": "" + }, + { + "instruction": "What is the difference of static Computational Graphs in tensorflow and dynamic Computational Graphs in Pytorch?", + "input": "", + "output": "

Both frameworks operate on tensors and view any model as a directed acyclic graph (DAG), but they differ drastically on how you can define them.

\n\n

TensorFlow follows the \u2018data as code and code is data\u2019 idiom. In TensorFlow you define the graph statically before a model can run. All communication with the outer world is performed via the tf.Session object and tf.placeholder, tensors that will be substituted by external data at runtime.

\n\n

In PyTorch things are way more imperative and dynamic: you can define, change and execute nodes as you go, with no special session interfaces or placeholders. Overall, the framework is more tightly integrated with the Python language and feels more native most of the time. When you write in TensorFlow, sometimes you feel that your model is behind a brick wall with several tiny holes to communicate through. Anyways, this still sounds like a matter of taste more or less.

\n\n

However, those approaches differ not only from a software engineering perspective: there are several dynamic neural network architectures that can benefit from the dynamic approach. Recall RNNs: with static graphs, the input sequence length will stay constant. This means that if you develop a sentiment analysis model for English sentences you must fix the sentence length to some maximum value and pad all smaller sequences with zeros. Not too convenient, huh. And you will get more problems in the domain of recursive RNNs and tree-RNNs. Currently Tensorflow has limited support for dynamic inputs via Tensorflow Fold. PyTorch has it by default.

\n\n
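The zero-padding workaround mentioned above can be sketched framework-free:

```python
def pad_sequences(seqs, max_len, pad_value=0):
    """Right-pad every sequence to a fixed length, as a static graph requires."""
    return [seq + [pad_value] * (max_len - len(seq)) for seq in seqs]

# Token-id sequences of different lengths, padded to the fixed maximum.
sentences = [[3, 7], [5, 1, 9, 2], [8]]
print(pad_sequences(sentences, max_len=4))
# [[3, 7, 0, 0], [5, 1, 9, 2], [8, 0, 0, 0]]
```

A dynamic graph simply consumes each sequence at its natural length, so no such padding step is needed.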

Reference:

\n\n

https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b

\n\n

https://www.reddit.com/r/MachineLearning/comments/5w3q74/d_so_pytorch_vs_tensorflow_whats_the_verdict_on/

\n", + "system": "" + }, + { + "instruction": "Difference between `apply_gradients` and `minimize` of optimizer in tensorflow", + "input": "", + "output": "

You can easily see from the link https://www.tensorflow.org/get_started/get_started\n(tf.train API part) that they actually do the same job.\nThe difference is that if you use the separated functions (tf.gradients, tf.apply_gradients), you can apply other mechanisms between them, such as gradient clipping.

\n", + "system": "" + }, + { + "instruction": "How to turn off dropout for testing in Tensorflow?", + "input": "", + "output": "
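As a framework-free sketch of why the split is useful, take f(w) = w^2 (so the gradient is 2w) and clip the gradient between computing and applying it (all names here are illustrative, not TensorFlow API):

```python
def compute_gradient(w):
    return 2.0 * w            # derivative of w^2

def clip(g, limit):
    return max(-limit, min(limit, g))

def apply_gradient(w, g, lr=0.1):
    return w - lr * g

w = 100.0
raw = compute_gradient(w)               # 200.0: a huge step if applied directly
w = apply_gradient(w, clip(raw, 1.0))   # clipped step; minimize() offers no hook for this
print(w)  # 99.9
```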

The easiest way is to change the keep_prob parameter using a placeholder_with_default:

\n\n
prob = tf.placeholder_with_default(1.0, shape=())\nlayer = tf.nn.dropout(layer, prob)\n
\n\n

in this way when you train you can set the parameter like this:

\n\n
sess.run(train_step, feed_dict={prob: 0.5})\n
\n\n

and when you evaluate, the default value of 1.0 is used.

\n", + "system": "" + }, + { + "instruction": "Are tf.layers.dense() and tf.contrib.layers.fully_connected() interchangeable?", + "input": "", + "output": "
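The train/test behaviour of keep_prob can be sketched without TensorFlow (inverted dropout, i.e. kept units are scaled by 1/keep_prob as tf.nn.dropout does; illustrative only):

```python
import random

def dropout(xs, keep_prob, rng=random):
    """Keep each element with probability keep_prob, scaling kept ones by 1/keep_prob."""
    if keep_prob == 1.0:
        return list(xs)  # evaluation: identity, matching the default of 1.0 above
    return [x / keep_prob if rng.random() < keep_prob else 0.0 for x in xs]

xs = [1.0, 2.0, 3.0, 4.0]
print(dropout(xs, 1.0))       # [1.0, 2.0, 3.0, 4.0] -- test time, unchanged
train_out = dropout(xs, 0.5)  # each element is either 0.0 or doubled
```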

They are essentially the same, the latter calling the former.

\n\n

However tf.contrib.layers.fully_connected adds a few functionalities on top of dense, in particular the possibility to pass a normalization and an activation in the parameters, \u00e0 la Keras. As noted by @wordforthewise, mind that the latter defaults to tf.nn.relu.

\n\n

More generally, the TF API proposes (and mixes somewhat confusingly) low- and high-level APIs; more on that here.

\n", + "system": "" + }, + { + "instruction": "What do the options in ConfigProto like allow_soft_placement and log_device_placement mean?", + "input": "", + "output": "

If you look at the API of ConfigProto, on line 278, you will see this:

\n\n
  // Whether soft placement is allowed. If allow_soft_placement is true,\n  // an op will be placed on CPU if\n  //   1. there's no GPU implementation for the OP\n  // or\n  //   2. no GPU devices are known or registered\n  // or\n  //   3. need to co-locate with reftype input(s) which are from CPU.\n  bool allow_soft_placement = 7;\n
\n\n

What this really means is that if you do something like this without allow_soft_placement=True, TensorFlow will throw an error.

\n\n
with tf.device('/gpu:0'):\n    # some op that doesn't have a GPU implementation\n
\n\n

Right below it, you will see on line 281:

\n\n
  // Whether device placements should be logged.\n  bool log_device_placement = 8;\n
\n\n

When log_device_placement=True, you will get a verbose output of something like this:

\n\n
2017-07-03 01:13:59.466748: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder_1: (Placeholder)/job:localhost/replica:0/task:0/cpu:0\nPlaceholder: (Placeholder): /job:localhost/replica:0/task:0/cpu:0\n2017-07-03 01:13:59.466765: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder: (Placeholder)/job:localhost/replica:0/task:0/cpu:0\nVariable/initial_value: (Const): /job:localhost/replica:0/task:0/cpu:0\n2017-07-03 01:13:59.466783: I tensorflow/core/common_runtime/simple_placer.cc:841] Variable/initial_value: (Const)/job:localhost/replica:0/task:0/cpu:0\n
\n\n

You can see where each operation is mapped to. For this case, they are all mapped to /cpu:0, but if you're in a distributed setting, there would be many more devices.

\n", + "system": "" + }, + { + "instruction": "Error: Failed to load the native TensorFlow runtime", + "input": "", + "output": "

My code worked perfectly after executing this line:

\n
pip install tensorflow --upgrade --force-reinstall\n
\n", + "system": "" + }, + { + "instruction": "Output from TensorFlow `py_func` has unknown rank/shape", + "input": "", + "output": "

Since py_func can execute arbitrary Python code and output anything, TensorFlow can't figure out the shape (it would require analyzing the Python code of the function body). You can instead set the shape manually:

\n\n
y.set_shape(inp.get_shape())\n
\n", + "system": "" + }, + { + "instruction": "What is an epoch in TensorFlow?", + "input": "", + "output": "

An epoch, in machine learning, is one complete pass of the learning algorithm over the entire training set.

\n\n

The MNIST training set is composed of 55000 samples.\nOnce the algorithm has processed all those 55000 samples, an epoch has passed.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to convert scalar tensor to scalar variable in python?", + "input": "", + "output": "
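With a batch size of, say, 100 (an assumed value for illustration), the relationship between batches (steps) and epochs is simple arithmetic:

```python
import math

train_size = 55000   # MNIST training samples
batch_size = 100     # assumed for illustration

steps_per_epoch = math.ceil(train_size / batch_size)
print(steps_per_epoch)        # 550 batches = 1 epoch
print(steps_per_epoch * 10)   # 5500 training steps = 10 epochs
```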

In Tensorflow 2.0+, it's as simple as:

\n\n
my_tensor.numpy()\n
\n", + "system": "" + }, + { + "instruction": "How to fix MatMul Op has type float64 that does not match type float32 TypeError?", + "input": "", + "output": "

The tf.matmul() op does not perform automatic type conversions, so both of its inputs must have the same element type. The error message you are seeing indicates that you have a call to tf.matmul() where the first argument has type tf.float32, and the second argument has type tf.float64. You must convert one of the inputs to match the other, for example using tf.cast(x, tf.float32).

\n\n

Looking at your code, I don't see anywhere that a tf.float64 tensor is explicitly created (the default dtype for floating-point values in the TensorFlow Python API—e.g. for tf.constant(37.0)—is tf.float32). I would guess that the errors are caused by the np.loadtxt(filename) calls, which might be loading an np.float64 array. You can explicitly change them to load np.float32 arrays (which are converted to tf.float32 tensors) as follows:

\n\n
initial = np.loadtxt(filename).astype(np.float32)\n
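A quick sketch of why the cast matters, using an in-memory buffer in place of a real file (the file contents here are made up): np.loadtxt returns float64 by default, which TensorFlow converts to a tf.float64 tensor.

```python
import io
import numpy as np

fake_file = io.StringIO("1.0 2.0\n3.0 4.0")
arr = np.loadtxt(fake_file)
print(arr.dtype)  # float64 -- the NumPy default, mismatching tf.float32

fake_file.seek(0)
arr32 = np.loadtxt(fake_file).astype(np.float32)
print(arr32.dtype)  # float32 -- matches TensorFlow's default float dtype
```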
\n", + "system": "" + }, + { + "instruction": "Tensorflow Different ways to Export and Run graph in C++", + "input": "", + "output": "

Here's my solution utilizing the V2 checkpoints introduced in TF 0.12.

\n\n

There's no need to convert all variables to constants or freeze the graph.

\n\n

Just for clarity, a V2 checkpoint looks like this in my models directory:

\n\n
checkpoint  # some information on the name of the files in the checkpoint\nmy-model.data-00000-of-00001  # the saved weights\nmy-model.index  # probably definition of data layout in the previous file\nmy-model.meta  # protobuf of the graph (nodes and topology info)\n
\n\n

Python part (saving)

\n\n
with tf.Session() as sess:\n    tf.train.Saver(tf.trainable_variables()).save(sess, 'models/my-model')\n
\n\n

If you create the Saver with tf.trainable_variables(), you can save yourself some headache and storage space. But maybe some more complicated models need all data to be saved; in that case remove this argument to Saver, and just make sure you're creating the Saver after your graph is created. It is also very wise to give all variables/layers unique names, otherwise you can run into different problems.

\n\n

Python part (inference)

\n\n
with tf.Session() as sess:\n    saver = tf.train.import_meta_graph('models/my-model.meta')\n    saver.restore(sess, tf.train.latest_checkpoint('models/'))\n    outputTensors = sess.run(outputOps, feed_dict=feedDict)\n
\n\n

C++ part (inference)

\n\n

Note that checkpointPath isn't a path to any of the existing files, just their common prefix. If you mistakenly put the path to the .index file there, TF won't tell you that was wrong, but it will die during inference due to uninitialized variables.

\n\n
#include <tensorflow/core/public/session.h>\n#include <tensorflow/core/protobuf/meta_graph.pb.h>\n\nusing namespace std;\nusing namespace tensorflow;\n\n...\n// set up your input paths\nconst string pathToGraph = \"models/my-model.meta\";\nconst string checkpointPath = \"models/my-model\";\n...\n\nauto session = NewSession(SessionOptions());\nif (session == nullptr) {\n    throw runtime_error(\"Could not create Tensorflow session.\");\n}\n\nStatus status;\n\n// Read in the protobuf graph we exported\nMetaGraphDef graph_def;\nstatus = ReadBinaryProto(Env::Default(), pathToGraph, &graph_def);\nif (!status.ok()) {\n    throw runtime_error(\"Error reading graph definition from \" + pathToGraph + \": \" + status.ToString());\n}\n\n// Add the graph to the session\nstatus = session->Create(graph_def.graph_def());\nif (!status.ok()) {\n    throw runtime_error(\"Error creating graph: \" + status.ToString());\n}\n\n// Read weights from the saved checkpoint\nTensor checkpointPathTensor(DT_STRING, TensorShape());\ncheckpointPathTensor.scalar<std::string>()() = checkpointPath;\nstatus = session->Run(\n        {{ graph_def.saver_def().filename_tensor_name(), checkpointPathTensor },},\n        {},\n        {graph_def.saver_def().restore_op_name()},\n        nullptr);\nif (!status.ok()) {\n    throw runtime_error(\"Error loading checkpoint from \" + checkpointPath + \": \" + status.ToString());\n}\n\n// and run the inference to your liking\nauto feedDict = ...\nauto outputOps = ...\nstd::vector<tensorflow::Tensor> outputTensors;\nstatus = session->Run(feedDict, outputOps, {}, &outputTensors);\n
\n", + "system": "" + }, + { + "instruction": "Tensor with unspecified dimension in tensorflow", + "input": "", + "output": "

As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using Tensor.get_shape(); and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor, or passing it to an operator like tf.shape. In many cases, the static and dynamic shapes are the same, but they can be different: the static shape can be partially defined, in order to allow the dynamic shape to vary from one step to the next.

\n\n

In your code, normal_dist has a partially-defined static shape, because w_shape is a computed value. (TensorFlow sometimes attempts to evaluate\nthese computed values at graph construction time, but it gets stuck at tf.pack.) It infers the shape TensorShape([Dimension(None), Dimension(None)]), which means \"a matrix with an unknown number of rows and columns,\" because it knows that w_shape is a vector of length 2, so the resulting normal_dist must be 2-dimensional.

\n\n

You have two options to deal with this. You can set the static shape as Ishamael suggests, but this requires you to know the shape at graph construction time. For example, the following may work:

\n\n
normal_dist.set_shape([input_data.get_shape()[1], labels.get_shape()[1]])\n
\n\n

Alternatively, you can pass validate_shape=False to the tf.Variable constructor. This allows you to create a variable with a partially-defined shape, but it limits the amount of static shape information that can be inferred later on in the graph.

\n", + "system": "" + }, + { + "instruction": "Printing the loss during TensorFlow training", + "input": "", + "output": "

You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows:

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define the op once, outside the loop\nfor i in range(100):\n    batch_xs, batch_ys = mnist.train.next_batch(100)\n    _, loss_val = sess.run([train_step, cross_entropy],\n                           feed_dict={x: batch_xs, y_: batch_ys})\n    print 'loss = %s' % loss_val\n
\n\n

The same approach can be used to print the current value of a variable. Let's say, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W, you could do the following:

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define the op once, outside the loop\nfor i in range(100):\n    batch_xs, batch_ys = mnist.train.next_batch(100)\n    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],\n                                  feed_dict={x: batch_xs, y_: batch_ys})\n    print 'loss = %s' % loss_val\n    print 'W = %s' % W_val\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow libdevice not found. Why is it not found in the searched path?", + "input": "", + "output": "

The following worked for me. With error message:

\n
error: Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice\n
\n

Firstly I searched for nvvm directory and then verified that libdevice directory existed:

\n
$ find / -type d -name nvvm 2>/dev/null\n/usr/lib/cuda/nvvm\n$ cd /usr/lib/cuda/nvvm\n/usr/lib/cuda/nvvm$ ls\nlibdevice\n/usr/lib/cuda/nvvm$ cd libdevice\n/usr/lib/cuda/nvvm/libdevice$ ls\nlibdevice.10.bc\n
\n

Then I exported the environment variable:

\n
export XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/lib/cuda\n
\n

as shown by @Insectatorious above. This solved the error and I was able to run the code.

\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'tensorflow_core.estimator' for tensorflow 2.1.0", + "input": "", + "output": "

TL;DR: Just solved this issue by making sure that both tensorflow and tensorflow-estimator were in the same version. (in my case, I needed to downgrade tensorflow-estimator, so conda install tensorflow-estimator=2.1.0 solved it for me)

\n

As you may have noticed, some tensorflow versions do not play well with certain GPUs, so I would first check some of the available builds with conda search tensorflow; then I would make sure that the chosen tensorflow build can actually recognize my GPU (with tf.config.list_physical_devices('GPU')); finally, I would search for a matching tensorflow-estimator build with conda search tensorflow-estimator and only then install it with conda install tensorflow-estimator=<chosen version> -n <my_venv>.

\n

It should be noted, however, that all these steps are mostly useful if you are interested in using your GPU. If that is not the case, then upgrading both packages (or downgrading/upgrading them so their versions match) may be the way to go.

\n", + "system": "" + }, + { + "instruction": "tensorflow warning - Found untraced functions such as lstm_cell_6_layer_call_and_return_conditional_losses", + "input": "", + "output": "

I think this warning can be safely ignored as you can find the same warning even in a tutorial given by tensorflow. I often see this warning when saving custom models such as graph NNs. You should be good to go as long as you don't want to access those non-callable functions.

\n

However, if you're annoyed by this big chunk of text, you can suppress this warning by adding the following at the top of the code.

\n
import absl.logging\nabsl.logging.set_verbosity(absl.logging.ERROR)\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Shapes (None, 1) and (None, 3) are incompatible", + "input": "", + "output": "

The first problem is with the LSTM input_shape. input_shape = (20,85,1).

\n\n

From the doc: https://keras.io/layers/recurrent/

\n\n

LSTM layer expects 3D tensor with shape (batch_size, timesteps, input_dim).

\n\n

model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing a multi-class classification.

\n\n

So, your y_train and y_test need to be one-hot encoded. That means they must have dimension (number_of_samples, 3), where 3 is the number of classes.

\n\n

You need to apply tensorflow.keras.utils.to_categorical to them.

\n\n
from tensorflow.keras.utils import to_categorical\n\ny_train = to_categorical(y_train, 3)\ny_test = to_categorical(y_test, 3)\n
\n\n

ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical

\n\n

tf.keras.callbacks.History() - this callback is automatically applied to every Keras model. The History object gets returned by the fit method of models.

\n\n

ref: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History

\n", + "system": "" + }, + { + "instruction": "how to programmatically determine available GPU memory with tensorflow?", + "input": "", + "output": "

This code will return the free GPU memory in megabytes for each GPU:

\n
import subprocess as sp\nimport os\n\ndef get_gpu_memory():\n    command = "nvidia-smi --query-gpu=memory.free --format=csv"\n    memory_free_info = sp.check_output(command.split()).decode('ascii').split('\\n')[:-1][1:]\n    memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)]\n    return memory_free_values\n\nget_gpu_memory()\n
\n

This answer relies on nvidia-smi being installed (which is pretty much always the case for NVIDIA GPUs) and is therefore limited to NVIDIA GPUs.

\n", + "system": "" + }, + { + "instruction": "Non-deterministic behavior of TensorFlow while_loop()", + "input": "", + "output": "

Most likely, your problem stems from seeding issues: make sure that you set a seed for both random.seed() and numpy.random.seed(). You need to seed both, as NumPy's random seed is independent of the state of Python's built-in random module.

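A minimal sketch of seeding both generators; re-seeding with the same values reproduces the same draws:

```python
import random
import numpy as np

random.seed(42)      # seeds Python's random module
np.random.seed(42)   # seeds NumPy's independent generator

first_py = random.random()
first_np = np.random.rand()

# Re-seeding both restores both independent states.
random.seed(42)
np.random.seed(42)
assert first_py == random.random()
assert first_np == np.random.rand()
```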
\n", + "system": "" + }, + { + "instruction": "Tensorflow: Multiple loss functions vs Multiple training ops", + "input": "", + "output": "

I want to make a subtle point that I don't think was made in previous answers.

\n\n

If you were using something like GradientDescentOptimizer, these would be very similar operations. That's because taking gradients is a linear operation, and the gradient of a sum is the same as the sum of the gradients.

\n\n

But ADAM does something special: regardless of the scale of your loss, it scales the gradients so that they're always on the order of your learning rate. If you multiplied your loss by 1000, it wouldn't affect ADAM, because the change would be normalized away.

\n\n

So, if your two losses are roughly the same magnitude, then it shouldn't make a difference. If one is much larger than the other, then keep in mind that summing before the minimization will essentially ignore the small one, while making two ops will spend equal effort minimizing both.

\n\n

I personally like dividing them up, which gives you more control over how much to focus on one loss or the other. For example, if it were multi-task learning, and one task were more important to get right than the other, two ops with different learning rates roughly accomplish this.

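To see why ADAM behaves this way, here is a rough NumPy sketch of a single ADAM update taken from zero optimizer state (a simplification of the real optimizer): multiplying the gradient by 1000 barely changes the update size.

```python
import numpy as np

def adam_step(grad, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # First ADAM step from zero state: after bias correction,
    # m_hat == grad and v_hat == grad**2, so the update is roughly
    # lr * grad / |grad|, i.e. on the order of lr regardless of scale.
    m = (1 - beta1) * grad
    v = (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1)
    v_hat = v / (1 - beta2)
    return lr * m_hat / (np.sqrt(v_hat) + eps)

print(adam_step(0.5))     # ~0.001
print(adam_step(500.0))   # still ~0.001: the 1000x scale is normalized away
```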
\n", + "system": "" + }, + { + "instruction": "Keras replacing input layer", + "input": "", + "output": "

When you saved your model using:

\n\n
old_model.save('my_model.h5')\n
\n\n

it will save following:

\n\n
  1. The architecture of the model, allowing you to re-create the model.
  2. The weights of the model.
  3. The training configuration of the model (loss, optimizer).
  4. The state of the optimizer, allowing training to resume from where you left off.
\n\n

So then, when you load the model:

\n\n
res50_model = load_model('my_model.h5')\n
\n\n

you should get the same model back, you can verify the same using:

\n\n
res50_model.summary()\nres50_model.get_weights()\n
\n\n

Now you can pop the input layer and add your own using:

\n\n
res50_model.layers.pop(0)\nres50_model.summary()\n
\n\n

add new input layer:

\n\n
newInput = Input(shape=(299,299,3))    # let us say this is the new InputLayer\nnewOutputs = res50_model(newInput)\nnewModel = Model(newInput, newOutputs)\n\nnewModel.summary()\nres50_model.summary()\n
\n", + "system": "" + }, + { + "instruction": "How to fix ipykernel_launcher.py: error: unrecognized arguments in jupyter?", + "input": "", + "output": "

A more elegant solution would be:

\n\n
args, unknown = parser.parse_known_args()\n
\n\n

instead of

\n\n
args = parser.parse_args()\n
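A self-contained illustration; the -f kernel.json argument imitates what the Jupyter kernel launcher injects (the exact flag Jupyter passes may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.1)

# parse_known_args tolerates arguments the parser does not know about,
# returning them separately instead of raising an error.
args, unknown = parser.parse_known_args(["--lr", "0.01", "-f", "kernel.json"])
print(args.lr)    # 0.01
print(unknown)    # ['-f', 'kernel.json']
```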
\n", + "system": "" + }, + { + "instruction": "Difference between tf.data.Dataset.map() and tf.data.Dataset.apply()", + "input": "", + "output": "

The difference is that map will execute one function on every element of the Dataset separately, whereas apply will execute one function on the whole Dataset at once (such as group_by_window given as example in the documentation).

\n\n

The argument of apply is a function that takes a Dataset and returns a Dataset, whereas the argument of map is a function that takes one element and returns one transformed element.

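A toy sketch of the two call signatures, using plain Python lists in place of a tf.data.Dataset so the distinction is visible without TensorFlow:

```python
dataset = [1, 2, 3, 4]

# map-style: the function receives one element and returns one element.
mapped = [x * 2 for x in dataset]

# apply-style: the function receives the whole dataset and returns a
# new dataset (here it drops the odd elements, a whole-dataset decision).
def drop_odds(ds):
    return [x for x in ds if x % 2 == 0]

applied = drop_odds(dataset)

print(mapped)   # [2, 4, 6, 8]
print(applied)  # [2, 4]
```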
\n", + "system": "" + }, + { + "instruction": "Keras confusion about number of layers", + "input": "", + "output": "

Your first one consists of a 100-neuron input layer connected to a single output neuron.

\n\n

Your second one consists of a 100-neuron input layer, one hidden layer of 32 neurons, and one output layer with a single neuron.

\n\n

You have to think of your first layer as the input layer (with the same number of neurons as the input dimension, so 100 in your case), connected to another layer with as many neurons as you specify (1 in your first case, 32 in the second one).

\n\n

In Keras, a useful command is:

\n\n
model.summary()\n
\n", + "system": "" + }, + { + "instruction": "How to install Tensorflow on Python 2.7 on Windows?", + "input": "", + "output": "

If you only need TensorFlow because of Keras and you are on Python 2.7.x, you can avoid installing TensorFlow (Google) and replace it with CNTK (Microsoft). According to Jeong-Yoon Lee, CNTK is a lot (about 2 to 4 times) faster than TensorFlow for LSTM (Bidirectional LSTM on IMDb data and text generation via LSTM), while speeds for other types of neural networks are close to each other.\nYour Keras code does not need to be modified (I checked it with 2 examples of Keras using TensorFlow and successfully replaced TensorFlow with CNTK, without changing anything in the Keras code).

\n\n

So how do you install it?

\n\n

-CPU-only version of CNTK:

\n\n
\n

pip install\n https://cntk.ai/PythonWheel/CPU-Only/cntk-2.4-cp27-cp27m-win_amd64.whl

\n
\n\n

-GPU version of CNTK:

\n\n
\n

pip install\n https://cntk.ai/PythonWheel/GPU/cntk-2.4-cp27-cp27m-win_amd64.whl

\n
\n\n

-Test CNTK install:

\n\n
\n

python -c \"import cntk; print(cntk.__version__)\"

\n
\n\n

-Install Keras: The Python Deep Learning library

\n\n
\n

pip install keras

\n
\n\n

-Enable CNTK as the Keras back end instead of TensorFlow:

\n\n

modify the \"keras.json\" file under %USERPROFILE%/.keras

\n\n
{\n    \"epsilon\": 1e-07, \n    \"image_data_format\": \"channels_last\", \n    \"backend\": \"cntk\", \n    \"floatx\": \"float32\" \n}\n
\n", + "system": "" + }, + { + "instruction": "You must feed a value for placeholder tensor 'Placeholder' with dtype float", + "input": "", + "output": "

Some questions

\n\n

First:
\nWhy do you use sess = tf.InteractiveSession() and with tf.Session() as sess: at the same time? Just curious.

\n\n

Second:\nWhat is your placeholder's name, x or images?
\nIf the name is x, {images: x_data...} won't feed x_data to x; it overrides images instead.
\nI think the feed_dict should be {x: x_data...}

\n\n

If the name is images, do you have two images in your program (the placeholder and the shuffled data)? Try renaming one of the variables.

\n", + "system": "" + }, + { + "instruction": "Keras verbose training progress bar writing a new line on each batch issue", + "input": "", + "output": "

I've added built-in support for keras in tqdm so you could use it instead (pip install "tqdm>=4.41.0"):

\n
from tqdm.keras import TqdmCallback\n...\nmodel.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])\n
\n

This turns off keras' progress (verbose=0), and uses tqdm instead. For the callback, verbose=2 means separate progressbars for epochs and batches. 1 means clear batch bars when done. 0 means only show epochs (never show batch bars).

\n

If there are problems with it please open an issue at https://github.com/tqdm/tqdm/issues

\n", + "system": "" + }, + { + "instruction": "what does the question mark in tensorflow shape mean?", + "input": "", + "output": "

It means that the first dimension is not fixed in the graph; it can vary between calls to run.

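A toy NumPy check illustrating the semantics (check_shape is purely illustrative, not a TensorFlow API): a static shape of (?, 10) accepts any batch size, as long as the fixed dimensions match.

```python
import numpy as np

def check_shape(batch, static_shape=(None, 10)):
    # None plays the role of '?': that dimension may vary between runs.
    for actual, expected in zip(batch.shape, static_shape):
        if expected is not None and actual != expected:
            raise ValueError("shape mismatch")
    return True

assert check_shape(np.zeros((32, 10)))  # batch of 32 is fine
assert check_shape(np.zeros((1, 10)))   # so is a batch of 1
```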
\n", + "system": "" + }, + { + "instruction": "The print of string constant is always attached with 'b' inTensorFlow", + "input": "", + "output": "

Use sess.run(hello).decode() because the result is a bytestring; the decode method will return a regular string.

\n\n

Your print statement must look like

\n\n
print(sess.run(hello).decode())\n
\n", + "system": "" + }, + { + "instruction": "Obtaining total number of records from .tfrecords file in Tensorflow", + "input": "", + "output": "

To count the number of records, you should be able to use tf.python_io.tf_record_iterator.

\n\n
c = 0\nfor fn in tf_records_filenames:\n  for record in tf.python_io.tf_record_iterator(fn):\n     c += 1\n
\n\n

To just keep track of model training, TensorBoard comes in handy.

\n", + "system": "" + }, + { + "instruction": "Keras Maxpooling2d layer gives ValueError", + "input": "", + "output": "

Quoting an answer mentioned in github, you need to specify the dimension ordering:

\n\n

Keras is a wrapper over the Theano or TensorFlow libraries. Keras uses the setting variable image_dim_ordering to decide whether the input layer is in Theano or TensorFlow format. This setting can be specified in 2 ways -

\n\n
  1. specify 'tf' or 'th' in ~/.keras/keras.json like so - image_dim_ordering: 'th'. Note: this is a json file.
  2. or specify the image_dim_ordering in your model like so: model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering=\"th\"))
\n\n

Update: Apr 2020 Keras 2.2.5 link seems to have an updated API where dim_ordering is changed to data_format so:

\n\n

keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first') to get NCHW or use channels_last to get NHWC

\n\n

Appendix: with image_dim_ordering in 'th' mode, the channels dimension (the depth) is at index 1 (e.g. 3, 256, 256). In 'tf' mode it is at index 3 (e.g. 256, 256, 3). Quoting @naoko from comments.

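A small NumPy sketch of the two layouts (the image size is illustrative):

```python
import numpy as np

# 'tf' mode / channels_last: (batch, height, width, channels)
img_nhwc = np.zeros((1, 256, 256, 3))

# 'th' mode / channels_first: (batch, channels, height, width)
img_nchw = np.moveaxis(img_nhwc, -1, 1)

print(img_nhwc.shape)  # (1, 256, 256, 3)
print(img_nchw.shape)  # (1, 3, 256, 256)
```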
\n", + "system": "" + }, + { + "instruction": "Tensorflow Documentation", + "input": "", + "output": "

Don't Google for TensorFlow documentation; use the TensorFlow Python reference documentation and Ctrl+F.

\n", + "system": "" + }, + { + "instruction": "How to suppress verbose Tensorflow logging?", + "input": "", + "output": "

2.0 Update (10/8/19)\nSetting TF_CPP_MIN_LOG_LEVEL should still work (see below in v0.12+ update), but there is currently an issue open (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try doing the following to set the log level:

\n\n
import tensorflow as tf\ntf.get_logger().setLevel('INFO')\n
\n\n

In addition, please see the documentation on tf.autograph.set_verbosity which sets the verbosity of autograph log messages - for example:

\n\n
# Can also be set using the AUTOGRAPH_VERBOSITY environment variable\ntf.autograph.set_verbosity(1)\n
\n\n

v0.12+ Update (5/20/17), Working through TF 2.0+:

\n\n

In TensorFlow 0.12+, per this issue, you can now control logging via the environmental variable called TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.

\n\n
  Level | Level for Humans | Level Description                  \n -------|------------------|------------------------------------ \n  0     | DEBUG            | [Default] Print all messages       \n  1     | INFO             | Filter out INFO messages           \n  2     | WARNING          | Filter out INFO & WARNING messages \n  3     | ERROR            | Filter out all messages      \n
\n\n

See the following generic OS example using Python:

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}\nimport tensorflow as tf\n
\n\n

To be thorough, you can also set the level for the Python tf_logging module, which is used in e.g. summary ops, TensorBoard, various estimators, etc.

\n\n
# append to lines above\ntf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}\n
\n\n

For 1.14 you will receive warnings if you do not change to use the v1 API as follows:

\n\n
# append to lines above\ntf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}\n
\n\n


\nFor Prior Versions of TensorFlow or TF-Learn Logging (v0.11.x or lower):

\n\n

View the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to either DEBUG, INFO, WARN, ERROR, or FATAL. For example:

\n\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n\n

The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.

\n\n

This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.

\n\n
script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'\n
\n\n

For the other, please see this answer which involves modifying source and rebuilding TensorFlow.

\n", + "system": "" + }, + { + "instruction": "Confused by the behavior of `tf.cond`", + "input": "", + "output": "

TL;DR: If you want tf.cond() to perform a side effect (like an assignment) in one of the branches, you must create the op that performs the side effect inside the function that you pass to tf.cond().

\n

The behavior of tf.cond() is a little unintuitive. Because execution in a TensorFlow graph flows forward through the graph, all operations that you refer to in either branch must execute before the conditional is evaluated. This means that both the true and the false branches receive a control dependency on the tf.assign() op, and so y always gets set to 2, even if pred is False.

\n

The solution is to create the tf.assign() op inside the function that defines the true branch. For example, you could structure your code as follows:

\n
pred = tf.placeholder(tf.bool, shape=[])\nx = tf.Variable([1])\ndef update_x_2():\n  with tf.control_dependencies([tf.assign(x, [2])]):\n    return tf.identity(x)\ny = tf.cond(pred, update_x_2, lambda: tf.identity(x))\nwith tf.Session() as session:\n  session.run(tf.initialize_all_variables())\n  print(y.eval(feed_dict={pred: False}))  # ==> [1]\n  print(y.eval(feed_dict={pred: True}))   # ==> [2]\n
\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'libcudart.so.11.0';", + "input": "", + "output": "

First, find out where &quot;libcudart.so.11.0&quot; is.\nIf your error stack mentions a different library, substitute its name for &quot;libcudart.so.11.0&quot; in the command below:

\n
sudo find / -name 'libcudart.so.11.0'\n
\n

Output on my system. This result shows where &quot;libcudart.so.11.0&quot; is located:

\n
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0\n
\n

If the result shows nothing, please make sure you have installed CUDA and the other components that must be installed on your system.

\n

Second, add the path to environment file.

\n
# edit /etc/profile\nsudo vim /etc/profile\n# append path to "LD_LIBRARY_PATH" in profile file\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.1/targets/x86_64-linux/lib\n# make environment file work\nsource /etc/profile\n
\n

You may also refer to this link

\n

A third thing you may try is:

\n
conda install cudatoolkit\n
\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory;", + "input": "", + "output": "

On Ubuntu 20.04, you can simply install NVIDIA's CUDA toolkit:

\n
sudo apt-get update\nsudo apt install nvidia-cuda-toolkit\n
\n

There are also install advices for Windows.

\n

The package is around 1GB and it takes a while to install... Once it finishes, you need to export PATH variables so that the libraries can be found:

\n
  1. Find the shared object
\n
\n
sudo find / -name 'libcudart.so*'\n\n/usr/lib/x86_64-linux-gnu/libcudart.so.10.1\n/usr/lib/x86_64-linux-gnu/libcudart.so\n
\n
  2. Add the folder to the path, so that Python finds it
\n
\n
export PATH=/usr/lib/x86_64-linux-gnu${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
\n
  3. Fix permissions
\n
\n
sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcuda*\n
\n

This helped me.

\n", + "system": "" + }, + { + "instruction": "How does TensorFlow SparseCategoricalCrossentropy work?", + "input": "", + "output": "

SparseCategoricalCrossentropy and CategoricalCrossentropy both compute categorical cross-entropy. The only difference is in how the targets/labels should be encoded.

\n\n

When using SparseCategoricalCrossentropy the targets are represented by the index of the category (starting from 0). Your outputs have shape 4x2, which means you have two categories. Therefore, the targets should be a vector of length 4 whose entries are either 0 or 1. For example:

\n\n
scce = tf.keras.losses.SparseCategoricalCrossentropy();\nLoss = scce(\n  tf.constant([ 0,    0,    0,    1   ], tf.float32),\n  tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))\n
\n\n

This in contrast to CategoricalCrossentropy where the labels should be one-hot encoded:

\n\n
cce = tf.keras.losses.CategoricalCrossentropy();\nLoss = cce(\n  tf.constant([ [1,0],    [1,0],    [1, 0],   [0, 1]   ], tf.float32),\n  tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))\n
\n\n

SparseCategoricalCrossentropy is more efficient when you have a lot of categories.

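A NumPy sketch (computing softmax and cross-entropy by hand, on the same illustrative numbers as above) showing that the two label encodings lead to the same loss value:

```python
import numpy as np

logits = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

sparse_labels = np.array([0, 0, 0, 1])     # category indices
onehot_labels = np.eye(2)[sparse_labels]   # the same labels, one-hot

# "Sparse": index into the probabilities with the label.
sparse_loss = -np.log(probs[np.arange(4), sparse_labels]).mean()
# "Categorical": dot the one-hot labels with the log-probabilities.
onehot_loss = -(onehot_labels * np.log(probs)).sum(axis=1).mean()

print(np.isclose(sparse_loss, onehot_loss))  # True
```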
\n", + "system": "" + }, + { + "instruction": "Why does keras model predict slower after compile?", + "input": "", + "output": "

UPDATE - 1/15/2020: the current best practice for small batch sizes should be to feed inputs to the model directly - i.e. preds = model(x), and if layers behave differently at train / inference, model(x, training=False). Per latest commit, this is now documented.

\n\n

I haven't benchmarked these, but per the Git discussion, it's also worth trying predict_on_batch() - especially with improvements in TF 2.1.

\n\n
\n\n

ULTIMATE CULPRIT: self._experimental_run_tf_function = True. It's experimental. But it's not actually bad.

\n\n

To any TensorFlow devs reading: clean up your code. It's a mess. And it violates important coding practices, such as one function does one thing; _process_inputs does a lot more than \"process inputs\", same for _standardize_user_data. \"I'm not paid enough\" - but you do pay, in extra time spent understanding your own stuff, and in users filling your Issues page with bugs easier resolved with a clearer code.

\n\n
\n\n

SUMMARY: it's only a little slower with compile().

\n\n

compile() sets an internal flag which assigns a different prediction function to predict. This function constructs a new graph upon each call, slowing it down relative to uncompiled. However, the difference is only pronounced when train time is much shorter than data processing time. If we increase the model size to at least mid-sized, the two become equal. See code at the bottom.

\n\n

This slight increase in data processing time is more than compensated by amplified graph capability. Since it's more efficient to keep only one model graph around, the one pre-compile is discarded. Nonetheless: if your model is small relative to data, you are better off without compile() for model inference. See my other answer for a workaround.

\n\n
\n\n

WHAT SHOULD I DO?

\n\n

Compare model performance compiled vs uncompiled as I have in code at the bottom.

\n\n\n\n

Yes, both are possible, and it will depend on (1) data size; (2) model size; (3) hardware. Code at the bottom actually shows compiled model being faster, but 10 iterations is a small sample. See \"workarounds\" in my other answer for the \"how-to\".

\n\n
\n\n

DETAILS:

\n\n

This took a while to debug, but was fun. Below I describe the key culprits I discovered, cite some relevant documentation, and show profiler results that led to the ultimate bottleneck.

\n\n

(FLAG == self.experimental_run_tf_function, for brevity)

\n\n
  1. Model by default instantiates with FLAG=False. compile() sets it to True.
  2. predict() involves acquiring the prediction function, func = self._select_training_loop(x)
  3. Without any special kwargs passed to predict and compile, all other flags are such that:\n\n
  4. From source code docstring, (A) is heavily graph-reliant, uses more distribution strategy, and ops are prone to creating & destroying graph elements, which \"may\" (do) impact performance.
\n\n

True culprit: _process_inputs(), accounting for 81% of runtime. Its major component? _create_graph_function(), 72% of runtime. This method does not even exist for (B). Using a mid-sized model, however, _process_inputs comprises less than 1% of runtime. Code at bottom, and profiling results follow.

\n\n
\n\n

DATA PROCESSORS:

\n\n

(A): <class 'tensorflow.python.keras.engine.data_adapter.TensorLikeDataAdapter'>, used in _process_inputs() . Relevant source code

\n\n

(B): numpy.ndarray, returned by convert_eager_tensors_to_numpy. Relevant source code, and here

\n\n
\n\n

MODEL EXECUTION FUNCTION (e.g. predict)

\n\n

(A): distribution function, and here

\n\n

(B): distribution function (different), and here

\n\n
\n\n

PROFILER: results for code in my other answer, \"tiny model\", and in this answer, \"medium model\":

\n\n

Tiny model: 1000 iterations, compile()

\n\n

\n\n

Tiny model: 1000 iterations, no compile()

\n\n

\n\n

Medium model: 10 iterations

\n\n

\n\n
\n\n

DOCUMENTATION (indirectly) on effects of compile(): source

\n\n
\n

Unlike other TensorFlow operations, we don't convert python\n numerical inputs to tensors. Moreover, a new graph is generated for each\n distinct python numerical value, for example calling g(2) and g(3) will\n generate two new graphs

\n \n

function instantiates a separate graph for every unique set of input\n shapes and datatypes. For example, the following code snippet will result\n in three distinct graphs being traced, as each input has a different\n shape

\n \n

A single tf.function object might need to map to multiple computation graphs\n under the hood. This should be visible only as performance (tracing graphs has\n a nonzero computational and memory cost) but should not affect the correctness\n of the program

\n
\n\n
\n\n

COUNTEREXAMPLE:

\n\n
from tensorflow.keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D\nfrom tensorflow.keras.layers import Flatten, Dropout\nfrom tensorflow.keras.models import Model\nimport numpy as np\nfrom time import time\n\ndef timeit(func, arg, iterations):\n    t0 = time()\n    for _ in range(iterations):\n        func(arg)\n    print(\"%.4f sec\" % (time() - t0))\n\nbatch_size = 32\nbatch_shape = (batch_size, 400, 16)\nipt   = Input(batch_shape=batch_shape)\nx     = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt)\nx     = LSTM(512, activation='relu', return_sequences=True)(ipt)\nx     = Conv1D(128, 400, 1, padding='same')(x)\nx     = Flatten()(x)\nx     = Dense(256, activation='relu')(x)\nx     = Dropout(0.5)(x)\nx     = Dense(128, activation='relu')(x)\nx     = Dense(64,  activation='relu')(x)\nout   = Dense(1,  activation='sigmoid')(x)\nmodel = Model(ipt, out)\n\nX = np.random.randn(*batch_shape)\ntimeit(model.predict, X, 10)\nmodel.compile('adam', loss='binary_crossentropy')\ntimeit(model.predict, X, 10)\n
\n\n

Outputs:

\n\n
34.8542 sec\n34.7435 sec\n
\n", + "system": "" + }, + { + "instruction": "How to understand masked multi-head attention in transformer", + "input": "", + "output": "

I had the very same question after reading the Transformer paper. I found no complete and detailed answer to the question on the Internet, so I'll try to explain my understanding of Masked Multi-Head Attention.

\n

The short answer is - we need masking to make the training parallel. And the parallelization is good as it allows the model to train faster. I've also made a video explaining this mechanism.

\n

Here's an example explaining the idea. Let's say we train to translate "I love you" to German. The encoder works in parallel mode - it can produce a vector representation of the input sequence ("I love you") within a constant number of steps (i.e. the number of steps doesn't depend on the length of the input sequence).

\n

Let's say the encoder produces the numbers 11, 12, 13 as the vector representations of the input sequence. In reality these vectors will be much longer, but for simplicity we use short ones. Also for simplicity we ignore the service tokens, like the beginning-of-sequence token, the end-of-sequence token, and others.

\n

During the training we know that the translation should be "Ich liebe dich" (we always know the expected output during the training). Let's say the expected vector representations of the "Ich liebe dich" words are 21, 22, 23.

\n

If we train the decoder in sequential mode, it'll look like the training of a Recurrent Neural Network. The following sequential steps will be performed:

\n\n

This means we'll need to make 3 sequential operations (in the general case - one sequential operation per input). Also, the error accumulates with each iteration. And we don't use attention, as we only look at a single previous output.

\n

As we actually know the expected outputs, we can adjust the process and make it parallel. There's no need to wait for the previous step's output.

\n\n

This algorithm can be executed in parallel, and it doesn't accumulate the error. And this algorithm uses attention (i.e. looks at all previous inputs), so it has more information about the context to consider while making the prediction.

\n

And here is where we need the masking. The training algorithm knows the entire expected output (21, 22, 23). It hides (masks) a part of this known output sequence for each of the parallel operations.

\n\n

Masking itself is implemented as the following (from the original paper):

\n
\n

We implement this inside of scaled dot-product attention by masking\nout (setting to \u2212\u221e) all values in the input of the softmax which\ncorrespond to illegal connections

\n
\n
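To make the quoted mechanism concrete, here is a minimal NumPy sketch of such a look-ahead mask (the function names are mine, not from the paper): positions corresponding to future tokens receive a large negative value before the softmax, so their attention weights become effectively zero.

```python
import numpy as np

def causal_mask(size):
    # Future positions (above the diagonal) get a large negative value,
    # standing in for minus infinity
    return np.triu(np.ones((size, size)), k=1) * -1e9

def masked_attention_weights(scores):
    # Add the mask, then take a row-wise softmax:
    # masked positions end up with (near-)zero weight
    masked = scores + causal_mask(scores.shape[-1])
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

weights = masked_attention_weights(np.zeros((3, 3)))
# Row 0 attends only to position 0; row 2 attends to all three positions.
```

With uniform scores, the first row puts all its weight on the first position, while the last row spreads its weight evenly over all positions it is allowed to see.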

Note: during inference (not training) the decoder works in sequential (not parallel) mode, as it doesn't know the output sequence initially. But it's different from the RNN approach, as Transformer inference still uses self-attention and looks at all previous outputs (not only the most recent one).

\n

Note 2: I've seen in some materials that masking can be used differently for non-translation applications. For example, for language modeling the masking can be used to hide some words from the input sentence and the model will try to predict them during the training using other, non-masked words (i.e. learn to understand the context).

\n", + "system": "" + }, + { + "instruction": "How to clear GPU memory WITHOUT restarting runtime in Google Colaboratory (Tensorflow)", + "input": "", + "output": "

In TensorFlow you can use the numba library:

\n
!pip install numba\n\nfrom numba import cuda \ndevice = cuda.get_current_device()\ndevice.reset()\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Can't convert non-rectangular Python sequence to Tensor", + "input": "", + "output": "

I'm not sure whether they exist in TensorFlow 1 but TensorFlow 2.0 supports RaggedTensors, which the documentation describes as "... the TensorFlow equivalent of nested variable-length lists."

\n

I think it would be trivial to convert your data to RaggedTensors. It might even be as easy as:

\n
data_tensor = tf.ragged.constant(data)\n
\n

Example:

\n
>>> a = tf.ragged.constant([[1],[2,3]])\n>>> a\n<tf.RaggedTensor [[1], [2, 3]]>\n
\n", + "system": "" + }, + { + "instruction": "model.summary() can't print output shape while using subclass model", + "input": "", + "output": "

I have used this method to solve this problem, I don't know if there is an easier way.

\n\n
class subclass(Model):\n    def __init__(self):\n        ...\n    def call(self, x):\n        ...\n\n    def model(self):\n        x = Input(shape=(24, 24, 3))\n        return Model(inputs=[x], outputs=self.call(x))\n\n\n\nif __name__ == '__main__':\n    sub = subclass()\n    sub.model().summary()\n
\n", + "system": "" + }, + { + "instruction": "Is there a version of TensorFlow not compiled for AVX instructions?", + "input": "", + "output": "

A best-practice approach, suggested by peter-cordes, is to see what gcc makes of your CPU's capabilities by issuing the following:

\n\n
gcc -O3 -fverbose-asm -march=native -xc /dev/null -S -o- | less\n
\n\n

This command will provide information (all of it) about your cpu capabilities from the point of view of gcc, which is going to do the build, so gcc's view matters.

\n\n

When does this come up? When a program offers to tailor itself to your cpu. Dang. What do I know about my cpu? Well, the above line will tell you all you need to know.

\n\n

That said, generally, people/developers promoting cpu-based capabilities will state or suggest a list of things that go faster/better/stronger if your cpu has *. And the above will give you *. Read carefully what you see. If you don't have it, you don't want it, i.e.

\n\n
-mno-avx(whatever you don't want;in my case it was avx)\n
\n\n

A good overview of installing on older cpu(s) is provided by\nMikael Fernandez Simalango for Ubuntu 16.04 LTS. It assumes a python2.7 environment but easily translates to python3. The heart of the matter is extracting, via /proc/cpuinfo, which cpu instruction extensions are available on your particular cpu to be used in addition to -march=native (but note, it appears limited in which flags it accepts, so it may be better to actually read through the instruction above and reflect)

\n\n
grep flags -m1 /proc/cpuinfo | cut -d \":\" -f 2 | tr '[:upper:]' \n'[:lower:]' | { read FLAGS; OPT=\"-march=native\"; for flag in $FLAGS; \ndo case \"$flag\" in \"sse4_1\" | \"sse4_2\" | \"ssse3\" | \"fma\" | \"cx16\" | \n\"popcnt\" | \"avx\" | \"avx2\") OPT+=\" -m$flag\";; esac; done; \nMODOPT=${OPT//_/\\.}; echo \"$MODOPT\"; }\n
\n\n

Running this on my old box output:

\n\n
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt\n
\n\n

It gets part way there. What is not clear is how to say, 'not this' and 'not that', which for old CPUs would be, most likely, -mno-avx.

\n\n

For an old cpu, which -march value applies matters, and Nephanth very usefully addresses this:

\n\n
gcc -march=native -Q --help=target|grep march\n
\n\n

produces

\n\n
-march=                             westmere\n
\n\n

which means my response to the ./compile question should be, or might be, the following. Note the quotes around 'westmere', which also appear in the gcc docs, so they must be there for a reason:

\n\n
-march='westmere' -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx\n
\n\n

but this is probably much better (see discussion below):

\n\n
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx\n
\n\n

The -mno-avx is an option for gcc, and results, after many hours, in

\n\n
Python 3.5.2 (default, Nov 23 2017, 16:37:01) \n[GCC 5.4.0 20160609] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more \ninformation.\n>>> import tensorflow as tf\n>>> \n>>> tf.__version__\n'2.0.0-alpha0'\n
\n\n

which looks like success.

\n\n

Restated:\nIn either order, find out which instructions are (or are not) supported by your cpu, and state those explicitly.

\n", + "system": "" + }, + { + "instruction": "How to import keras.engine.topology in Tensorflow?", + "input": "", + "output": "

You can import Layer and InputSpec from TensorFlow as follows:

\n\n
from tensorflow.python.keras.layers import Layer, InputSpec\n
\n\n

UPDATE: 30/10/2019

\n\n
from tensorflow.keras.layers import Layer, InputSpec\n
\n", + "system": "" + }, + { + "instruction": "How can I test a .tflite model to prove that it behaves as the original model using the same Test Data?", + "input": "", + "output": "

You may use TensorFlow Lite Python interpreter to test your tflite model.

\n\n

It allows you to feed input data in python shell and read the output directly like you are just using a normal tensorflow model.

\n\n

I have answered this question here.

\n\n

And you can read this TensorFlow lite official guide for detailed information.

\n\n

You can also use Netron to visualize your model. It allows you to load your .tflite file directly and inspect your model architecture and model weights.

\n", + "system": "" + }, + { + "instruction": "shuffle in the model.fit of keras", + "input": "", + "output": "

It will shuffle your entire dataset (x, y and sample_weight together) first and then make batches according to the batch_size argument you passed to fit.

\n
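A minimal NumPy sketch of that order of operations (shuffle the whole set once, then slice it into batches); the function name is mine, not part of the Keras API:

```python
import numpy as np

def shuffled_batches(x, y, batch_size, seed=0):
    # Shuffle x and y together first, keeping pairs aligned,
    # then cut the shuffled arrays into consecutive batches
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    x, y = x[idx], y[idx]
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size], y[i:i + batch_size]
```

Each (x, y) pair stays together; only the order in which pairs appear (and hence the batch composition) changes between epochs.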

Edit

\n

As @yuk pointed out in the comment, the code has been changed significantly since 2018. The documentation for the shuffle parameter now seems more clear on its own. You can choose to shuffle the entire training data or just shuffle the batch:

\n
        shuffle: Boolean (whether to shuffle the training data\n            before each epoch) or str (for 'batch'). This argument is ignored\n            when `x` is a generator. 'batch' is a special option for dealing\n            with the limitations of HDF5 data; it shuffles in batch-sized\n            chunks. Has no effect when `steps_per_epoch` is not `None`.\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow object detection config files documentation", + "input": "", + "output": "

As mentioned in the configuration documentation, configuration files are just Protocol Buffers objects described in the .proto files under research/object_detection/protos. The top level object is a TrainEvalPipelineConfig defined in pipeline.proto, and different files describe each of the elements. For example, data_augmentation_options are PreprocessingStep objects, defined in preprocessor.proto (which in turn can include a range of other possible objects for different preprocessing tasks). The meaning of each object and field may or may not be obvious or well-documented, but you can always refer to the source code to see exactly how each value is being used (for example, check preprocessor.py to understand how data augmentation is done).

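For illustration, a pipeline config is written in protobuf text format along these lines (an abridged, hypothetical excerpt; see the sample configs shipped under research/object_detection/samples/configs for complete files):

```
train_config {
  batch_size: 24
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
```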
\n", + "system": "" + }, + { + "instruction": "Why do we use tf.name_scope()", + "input": "", + "output": "

They are not the same thing.

\n\n
import tensorflow as tf\nc1 = tf.constant(42)\nwith tf.name_scope('s1'):\n    c2 = tf.constant(42)\nprint(c1.name)\nprint(c2.name)\n
\n\n

prints

\n\n
Const:0\ns1/Const:0\n
\n\n

So as the name suggests, the scope functions create a scope for the names of the ops you create inside. This has an effect on how you refer to tensors, on reuse, on how the graph shows in TensorBoard and so on.

\n", + "system": "" + }, + { + "instruction": "Tensorflow VarLenFeature vs FixedLenFeature", + "input": "", + "output": "

You can load images probably because you saved them using the feature type tf.train.BytesList() and the whole image data is one big byte value inside a list.

\n

If I'm right you're using tf.decode_raw to get the data out of the image you load from TFRecord.

\n

Regarding example use cases:\nI use VarLenFeature for saving datasets for an object detection task:\nthere's a variable number of bounding boxes per image (equal to the number of objects in the image), therefore I need another feature objects_number to track the number of objects (and bboxes).\nEach bounding box itself is a list of 4 float coordinates

\n

I'm using following code to load it:

\n
features = tf.parse_single_example(\n    serialized_example,\n    features={\n        # We know the length of both fields. If not the\n        # tf.VarLenFeature could be used\n        'height': tf.FixedLenFeature([], tf.int64),\n        'width': tf.FixedLenFeature([], tf.int64),\n        'depth': tf.FixedLenFeature([], tf.int64),\n        # Label part\n        'objects_number': tf.FixedLenFeature([], tf.int64),\n        'bboxes': tf.VarLenFeature(tf.float32),\n        'labels': tf.VarLenFeature(tf.int64),\n        # Dense data\n        'image_raw': tf.FixedLenFeature([],tf.string)\n\n    })\n\n# Get metadata\nobjects_number = tf.cast(features['objects_number'], tf.int32)\nheight = tf.cast(features['height'], tf.int32)\nwidth = tf.cast(features['width'], tf.int32)\ndepth = tf.cast(features['depth'], tf.int32)\n\n# Actual data\nimage_shape = tf.parallel_stack([height, width, depth])\nbboxes_shape = tf.parallel_stack([objects_number, 4])\n\n# BBOX data is actually dense convert it to dense tensor\nbboxes = tf.sparse_tensor_to_dense(features['bboxes'], default_value=0)\n\n# Since information about shape is lost reshape it\nbboxes = tf.reshape(bboxes, bboxes_shape)\nimage = tf.decode_raw(features['image_raw'], tf.uint8)\nimage = tf.reshape(image, image_shape)\n
\n

Notice that "image_raw" is a fixed-length Feature (has one element) and holds values of type "bytes"; however, a value of "bytes" type can itself have variable size (it's a string of bytes, and can have many symbols within it).\nSo "image_raw" is a list with ONE element of type "bytes", which can be super big.

\n

To further elaborate on how it works:\nFeatures are lists of values, and those values have a specific "type".

\n

Datatypes for features are subset of data types for tensors, you have:

\n\n

You can check here tensors data types.

\n

So you can store variable-length data without VarLenFeature at all (which is in fact what you're doing), but first you need to convert it into a bytes/string feature, and then decode it.\nAnd this is the most common method.

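The "convert to bytes, then decode" idea can be sketched with plain NumPy, independently of TF (the helper names are mine):

```python
import numpy as np

def encode_floats(arr):
    # Serialize a variable-length float32 array into one raw byte string,
    # as you would store it in a single bytes feature
    return np.asarray(arr, dtype=np.float32).tobytes()

def decode_floats(raw):
    # Decode it back; shape information is lost in the byte string,
    # so it must be stored in separate features (like height/width above)
    return np.frombuffer(raw, dtype=np.float32)
```

This mirrors how "image_raw" works: one bytes value of arbitrary length, with the shape carried in separate fixed-length features.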
\n", + "system": "" + }, + { + "instruction": "Tensorflow Slim: TypeError: Expected int32, got list containing Tensors of type '_Message' instead", + "input": "", + "output": "

I got the same problem when using the 1.0 release, and I could make it work without having to roll back to a previous version.

\n\n

The problem is caused by change in the api. That discussion helped me to find the solution: Google group > \nRecent API Changes in TensorFlow

\n\n

You just have to update all the lines that use tf.concat

\n\n

for example

\n\n
net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])\n
\n\n

should be changed to

\n\n
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)\n
\n\n

Note:

\n\n

I was able to use the models without problems. But I still got errors afterward when trying to load the pretrained weights.\nIt seems the slim module has changed several times since they made the checkpoint file. The graph created by the code and the one present in the checkpoint file were different.

\n\n

Note2:

\n\n

I was able to use the pretrained weights for inception_resnet_v2 by adding biases_initializer=None to all conv2d layers

\n", + "system": "" + }, + { + "instruction": "how to implement tensorflow's next_batch for own data", + "input": "", + "output": "

The link you posted says: \"we get a \"batch\" of one hundred random data points from our training set\". In my example I use a global function (not a method like in your example) so there will be a difference in syntax.

\n\n

In my function you'll need to pass the number of samples wanted and the data array.

\n\n

Here is the correct code, which ensures samples have correct labels:

\n\n
import numpy as np\n\ndef next_batch(num, data, labels):\n    '''\n    Return a total of `num` random samples and labels. \n    '''\n    idx = np.arange(0 , len(data))\n    np.random.shuffle(idx)\n    idx = idx[:num]\n    data_shuffle = [data[ i] for i in idx]\n    labels_shuffle = [labels[ i] for i in idx]\n\n    return np.asarray(data_shuffle), np.asarray(labels_shuffle)\n\nXtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10)\nprint(Xtr)\nprint(Ytr)\n\nXtr, Ytr = next_batch(5, Xtr, Ytr)\nprint('\\n5 random samples')\nprint(Xtr)\nprint(Ytr)\n
\n\n

And a demo run:

\n\n
[0 1 2 3 4 5 6 7 8 9]\n[[ 0  1  2  3  4  5  6  7  8  9]\n [10 11 12 13 14 15 16 17 18 19]\n [20 21 22 23 24 25 26 27 28 29]\n [30 31 32 33 34 35 36 37 38 39]\n [40 41 42 43 44 45 46 47 48 49]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]\n [80 81 82 83 84 85 86 87 88 89]\n [90 91 92 93 94 95 96 97 98 99]]\n\n5 random samples\n[9 1 5 6 7]\n[[90 91 92 93 94 95 96 97 98 99]\n [10 11 12 13 14 15 16 17 18 19]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]]\n
\n", + "system": "" + }, + { + "instruction": "How to redirect TensorFlow logging to a file?", + "input": "", + "output": "
import logging\n\n# get TF logger\nlog = logging.getLogger('tensorflow')\nlog.setLevel(logging.DEBUG)\n\n# create formatter and add it to the handlers\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n# create file handler which logs even debug messages\nfh = logging.FileHandler('tensorflow.log')\nfh.setLevel(logging.DEBUG)\nfh.setFormatter(formatter)\nlog.addHandler(fh)\n
\n\n

My solution is inspired by this thread.

\n", + "system": "" + }, + { + "instruction": "What's the difference between tensorflow dynamic_rnn and rnn?", + "input": "", + "output": "

From RNNs in Tensorflow, a Practical Guide and Undocumented Features by Denny Britz, published on August 21, 2016.

\n\n
\n

tf.nn.rnn creates an unrolled graph for a fixed RNN length. That\n means, if you call tf.nn.rnn with inputs having 200 time steps you are\n creating a static graph with 200 RNN steps. First, graph creation is\n slow. Second, you\u2019re unable to pass in longer sequences (> 200) than\n you\u2019ve originally specified.

\n \n

tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically\n construct the graph when it is executed. That means graph creation is\n faster and you can feed batches of variable size.

\n
\n", + "system": "" + }, + { + "instruction": "tensorflow deep neural network for regression always predict same results in one batch", + "input": "", + "output": "

Short answer:

\n\n

Transpose your pred vector using tf.transpose(pred).

\n\n

Longer answer:

\n\n

The problem is that pred (the predictions) and y (the labels) are not of the same shape: one is a row vector and the other a column vector. Apparently when you apply an element-wise operation on them, you'll get a matrix, which is not what you want.

\n\n
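The shape mismatch is easy to reproduce with plain NumPy broadcasting (a sketch of the effect, not the original code):

```python
import numpy as np

pred = np.ones((1, 3))   # row vector, shape (1, 3)
y = np.ones((3, 1))      # column vector, shape (3, 1)

diff = pred - y          # broadcasting silently produces a (3, 3) matrix
fixed = pred.T - y       # transposing first gives the intended (3, 1) result
```

Any loss built from the (3, 3) `diff` averages over all prediction/label combinations instead of matched pairs, which is why training misbehaves.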

The solution is to transpose the prediction vector using tf.transpose() to get a proper vector and thus a proper loss function. Actually, if you set the batch size to 1 in your example you'll see that it works even without the fix, because transposing a 1x1 vector is a no-op.

\n\n

I applied this fix to your example code and observed the following behaviour. Before the fix:

\n\n
Epoch: 0245 cost= 84.743440580\n[*]----------------------------\nlabel value: 23 estimated value: [ 27.47437096]\nlabel value: 50 estimated value: [ 24.71126747]\nlabel value: 22 estimated value: [ 23.87785912]\n
\n\n

And after the fix at the same point in time:

\n\n
Epoch: 0245 cost= 4.181439120\n[*]----------------------------\nlabel value: 23 estimated value: [ 21.64333534]\nlabel value: 50 estimated value: [ 48.76105118]\nlabel value: 22 estimated value: [ 24.27996063]\n
\n\n

You'll see that the cost is much lower and that it actually learned the value 50 properly. You'll have to do some fine-tuning on the learning rate and such to improve your results of course.

\n", + "system": "" + }, + { + "instruction": "Does TensorFlow have cross validation implemented?", + "input": "", + "output": "

As already discussed, tensorflow doesn't provide its own way to cross-validate the model. The recommended way is to use KFold. It's a bit tedious, but doable. Here's a complete example of cross-validating MNIST model with tensorflow and KFold:

\n\n
from sklearn.model_selection import KFold\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n# Parameters\nlearning_rate = 0.01\nbatch_size = 500\n\n# TF graph\nx = tf.placeholder(tf.float32, [None, 784])\ny = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\npred = tf.nn.softmax(tf.matmul(x, W) + b)\ncost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\ncorrect_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\ninit = tf.global_variables_initializer()\n\nmnist = input_data.read_data_sets(\"data/mnist-tf\", one_hot=True)\ntrain_x_all = mnist.train.images\ntrain_y_all = mnist.train.labels\ntest_x = mnist.test.images\ntest_y = mnist.test.labels\n\ndef run_train(session, train_x, train_y):\n  print \"\\nStart training\"\n  session.run(init)\n  for epoch in range(10):\n    total_batch = int(train_x.shape[0] / batch_size)\n    for i in range(total_batch):\n      batch_x = train_x[i*batch_size:(i+1)*batch_size]\n      batch_y = train_y[i*batch_size:(i+1)*batch_size]\n      _, c = session.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})\n      if i % 50 == 0:\n        print \"Epoch #%d step=%d cost=%f\" % (epoch, i, c)\n\ndef cross_validate(session, split_size=5):\n  results = []\n  kf = KFold(n_splits=split_size)\n  for train_idx, val_idx in kf.split(train_x_all, train_y_all):\n    train_x = train_x_all[train_idx]\n    train_y = train_y_all[train_idx]\n    val_x = train_x_all[val_idx]\n    val_y = train_y_all[val_idx]\n    run_train(session, train_x, train_y)\n    results.append(session.run(accuracy, feed_dict={x: val_x, y: val_y}))\n  return results\n\nwith tf.Session() as session:\n  result = cross_validate(session)\n  print \"Cross-validation result: %s\" % result\n  print \"Test accuracy: 
%f\" % session.run(accuracy, feed_dict={x: test_x, y: test_y})\n
\n", + "system": "" + }, + { + "instruction": "What does opt.apply_gradients() do in TensorFlow?", + "input": "", + "output": "

The update rule that the apply_gradients method actually applies depends on the specific optimizer. Take a look at the implementation of apply_gradients in the tf.train.Optimizer class here. It relies on the derived classes implementing the update rule in the methods _apply_dense and _apply_spares. The update rule you are referring to is implemented by the GradientDescentOptimizer.

\n\n

Regarding your desired positive additive update: If what you are calling opt is an instantiation of GradientDescentOptimizer, then you could indeed achieve what you want to do by

\n\n
grads_and_vars = opt.compute_gradients(E, [v])\neta = opt._learning_rate\nmy_grads_and_vars = [(g-(1/eta)*p, v) for g, v in grads_and_vars]\nopt.apply_gradients(my_grads_and_vars)\n
\n\n

The more elegant way to do this is probably to write a new optimizer (inheriting from tf.train.Optimizer) that implements your desired update rule directly.

\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'", + "input": "", + "output": "

You should import BatchNormalization in following way:

\n
from tensorflow.keras.layers import BatchNormalization\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'", + "input": "", + "output": "

Change from keras import models to from tensorflow.keras import models.
\nThis solved the problem for me with tensorflow 2.5.0

\n", + "system": "" + }, + { + "instruction": "Unexpected keyword argument 'ragged' in Keras", + "input": "", + "output": "

So I tried the link above which you mentioned, Teachable Machine.
\nAs it turns out, the model you exported is from tensorflow.keras and not directly from the keras API. These two are different. So while loading, it might be using tf.ragged tensors that are not compatible with the keras API.
\n
Solution to your issue:

\nDon't import keras directly, as your model is saved with TensorFlow's Keras high-level API. Change all your imports to tensorflow.keras.\n

Change:

\n\n
from keras.preprocessing.image import img_to_array\nfrom keras.models import load_model\n
\n\n

to this:

\n\n
from tensorflow.keras.preprocessing.image import img_to_array\nfrom tensorflow.keras.models import load_model\n
\n\n

It will solve your issue.

\n\n

EDIT :
\nAll of your imports should be either from Keras or from tensorflow.keras. Although being the same API, a few things are different, which creates these kinds of issues. Also, for the tensorflow backend, tf.keras is preferred, because Keras 2.3.0 is the last major release which will support backends other than tensorflow.

\n\n
\n

This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. However note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.\n This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0.

\n
\n", + "system": "" + }, + { + "instruction": "How to input a list of lists with different sizes in tf.data.Dataset", + "input": "", + "output": "

You can use tf.data.Dataset.from_generator() to convert any iterable Python object (like a list of lists) into a Dataset:

\n\n
t = [[4, 2], [3, 4, 5]]\n\ndataset = tf.data.Dataset.from_generator(lambda: t, tf.int32, output_shapes=[None])\n\niterator = dataset.make_one_shot_iterator()\nnext_element = iterator.get_next()\n\nwith tf.Session() as sess:\n  print(sess.run(next_element))  # ==> '[4, 2]'\n  print(sess.run(next_element))  # ==> '[3, 4, 5]'\n
\n", + "system": "" + }, + { + "instruction": "ImportError: No module named 'keras'", + "input": "", + "output": "

Hi, I have a solution; try this if you are using Anaconda-Navigator.

\n\n

Go to the Anaconda Environment, search for the keras package, and then install it.

\n\n

\"install

\n\n

\"enter

\n\n

After installing, just type import keras in the shell; it works.

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Loading SavedModel is a lot slower than loading a tf.train.Saver checkpoint", + "input": "", + "output": "

I am by no means an expert in Tensorflow, but if I had to guess why this is happening, I would say that:

\n\n\n\n

Depending on the size of your graph, recreating everything that it contained might take some time.

\n\n

Concerning the second question, as @J H said, if there are no reasons for you to use one strategy over the other, and time is of the essence, then just go with the fastest one.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: Cannot interpret feed_dict key as Tensor", + "input": "", + "output": "

This worked for me

\n\n
from keras import backend as K\n
\n\n

and after predicting my data I inserted this part of the code,\nthen loaded the model again.

\n\n
K.clear_session()\n
\n\n

I faced this problem on a production server,\nbut on my PC it was running fine

\n\n
from keras import backend as K\n\n#Before prediction\nK.clear_session()\n\n#After prediction\nK.clear_session()\n
\n", + "system": "" + }, + { + "instruction": "What is the equivalent of np.std() in TensorFlow?", + "input": "", + "output": "

To get the mean and variance just use tf.nn.moments.

\n\n
mean, var = tf.nn.moments(x, axes=[1])\n
\n\n
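Since tf.nn.moments returns the variance, the standard deviation is just its element-wise square root, i.e. std = tf.sqrt(var). The same relationship, checked with NumPy:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0, 4.0]])
mean = x.mean(axis=1)   # analogous to the first value from tf.nn.moments
var = x.var(axis=1)     # analogous to the second value from tf.nn.moments
std = np.sqrt(var)      # standard deviation = square root of the variance
```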

For more on tf.nn.moments params see docs

\n", + "system": "" + }, + { + "instruction": "Cannot convert a partially converted tensor in TensorFlow", + "input": "", + "output": "

You just need to feed it in as a single example but in the batched shape. So that means adding an extra dimension to the shape e.g.

\n\n
batch_size = 32 # set this to the actual size of your batch\ntf.truncated_normal((batch_size, 784), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\n
\n\n

This way it will \"fit\" into the placeholder.

\n\n

If you expect batch_size to change you can also use:

\n\n
tf.truncated_normal(tf.shape(input_tensor), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\n
\n\n

Where input_tensor could be a placeholder or just whatever tensor is going to have this noise added to it.

\n", + "system": "" + }, + { + "instruction": "How to profile TensorFlow networks?", + "input": "", + "output": "

If you want to find out how much time was spent on each operation in TF, you can do this in tensorboard using runtime statistics. You will need to do something like this (check the full example in the above-mentioned link):

\n\n
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\nrun_metadata = tf.RunMetadata()\nsess.run(<values_you_want_to_execute>, options=run_options, run_metadata=run_metadata)\nyour_writer.add_run_metadata(run_metadata, 'step%d' % i)\n
\n\n
\n\n

Better than just printing it you can see it in tensorboard:

\n\n
\n

Additionally, clicking on a node will display the exact total memory,\n compute time, and tensor output sizes.

\n
\n\n

Also now tensorflow has a debugger. Here is a tutorial of how to use it.

\n\n

\"[Example

\n", + "system": "" + }, + { + "instruction": "Run Tensorflow unit tests", + "input": "", + "output": "

The easiest way to run the TensorFlow unit tests is using Bazel, assuming you have downloaded the source from Git:

\n\n
# All tests (for C++ changes).\n$ bazel test //tensorflow/...\n\n# All Python tests (for Python front-end changes).\n$ bazel test //tensorflow/python/...\n\n# All tests (with GPU support).\n$ bazel test -c opt --config=cuda //tensorflow/...\n$ bazel test -c opt --config=cuda //tensorflow/python/...\n
\n", + "system": "" + }, + { + "instruction": "ImportError: No module named 'keras'", + "input": "", + "output": "

Hi, I have a solution; try this if you are using Anaconda-Navigator.

\n\n

Go to the Anaconda Environment, search for the keras package, and then install it.

\n\n

\"install

\n\n

\"enter

\n\n

After installing, just type import keras in the shell; it works.

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Tensorflow GradientTape "Gradients does not exist for variables" intermittently", + "input": "", + "output": "

The solution given by Nguy\u1ec5n and gkennos will suppress the error because it replaces all None gradients by zeros.\nHowever, a gradient being None at any point in time is itself a big issue.\nThe problem described above is certainly caused by unconnected variables (by default PyTorch would throw a runtime error in this case).

\n

The most common case of unconnected layers can be exemplified as follows:

\n
def some_func(x):\n    x1 = x * w1\n    x2 = x1 + b1  # x2 is never used after this line\n    x3 = x1 / w2\n    return x3\n
\n

Now observe that x2 is unconnected to the output, so the gradient will not be propagated through it. Carefully debug your code for unconnected variables.

\n", + "system": "" + }, + { + "instruction": "Description of TF Lite's Toco converter args for quantization aware training", + "input": "", + "output": "

You should never need to manually set the quantization stats.

\n\n

Have you tried the post-training-quantization tutorials?

\n\n

https://www.tensorflow.org/lite/performance/post_training_integer_quant

\n\n

Basically they set the quantization options:

\n\n
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\nconverter.inference_input_type = tf.uint8\nconverter.inference_output_type = tf.uint8\n
\n\n

Then they pass a \"representative dataset\" to the converter, so that the converter can run the model for a few batches to gather the necessary statistics:

\n\n
def representative_data_gen():\n  for input_value in mnist_ds.take(100):\n    yield [input_value]\n\nconverter.representative_dataset = representative_data_gen\n
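If no real samples are at hand, a crude stand-in generator with random arrays shaped like the model input can illustrate the generator contract expected by the converter (the (1, 28, 28, 1) shape below is an assumption, not from the tutorial):

```python
import numpy as np

# Hypothetical stand-in: yields batches shaped like the model input so the
# converter can run the model and gather activation statistics.
def random_representative_data_gen(num_batches=100, shape=(1, 28, 28, 1)):
    for _ in range(num_batches):
        yield [np.random.rand(*shape).astype(np.float32)]

batches = list(random_representative_data_gen(num_batches=3))
```

Note that random data only exercises the plumbing; for usable quantization ranges you want samples drawn from the real input distribution.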
\n\n

While there are options for quantization-aware training, it's always easier to do post-training quantization.

\n", + "system": "" + }, + { + "instruction": "Graph optimizations on a tensorflow serveable created using tf.Estimator", + "input": "", + "output": "

The way to go from a SavedModel to a servable after running tensorflow graph transforms is to use the SavedModel Builder API.

\n

First, you need to create a SavedModel Builder object and then rebuild the graph you have just transformed, using the SavedModel Builder API.

\n

Next, you need to add the assets, signatures, and other meta-data back into the model. Finally, you need to call the SavedModel Builder API's save() method, which will save the model as a servable.

\n

This servable can then be used with tensorflow serving.

\n", + "system": "" + }, + { + "instruction": "Tensorboard Error: No dashboards are active for current data set", + "input": "", + "output": "

Your issue may be related to the drive you are attempting to start tensorboard from and the drive your logdir is on. Tensorboard uses a colon to separate the optional run name and the path in the logdir flag, so your path is being interpreted as \\path\\to\\output\\folder with name C.

\n\n

This can be worked around by either starting tensorboard from the same drive as your log directory or by providing an explicit run name, e.g. logdir=mylogs:C:\\path\\to\\output\\folder

\n\n
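The colon parsing that causes this can be sketched as follows (an illustration of the behavior, not TensorBoard's actual code):

```python
# A "name:path" logdir spec is split on the first colon, so on Windows a
# drive letter is misread as a run name: "C" plus path "\path\to\output\folder".
spec = r"C:\path\to\output\folder"
name, sep, path = spec.partition(":")
```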

See here for reference to the issue.

\n", + "system": "" + }, + { + "instruction": "Is there any way to get variable importance with Keras?", + "input": "", + "output": "

*Edited to include relevant code to implement permutation importance.

\n\n

I answered a similar question at Feature Importance Chart in neural network using Keras in Python. It implements what Teque5 mentioned above, namely shuffling the variable among your samples, i.e. permutation importance, using the ELI5 package.

\n\n
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor\nimport eli5\nfrom eli5.sklearn import PermutationImportance\n\ndef base_model():\n    model = Sequential()\n    ...\n    return model\n\nX = ...\ny = ...\n\nmy_model = KerasRegressor(build_fn=base_model, **sk_params)\nmy_model.fit(X, y)\n\nperm = PermutationImportance(my_model, random_state=1).fit(X, y)\neli5.show_weights(perm, feature_names=X.columns.tolist())\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Tensor must be from the same graph as Tensor with Bidirectinal RNN in Tensorflow", + "input": "", + "output": "

TensorFlow stores all operations on an operational graph. This graph defines what functions output to where, and it links it all together so that it can follow the steps you have set up in the graph to produce your final output. If you try to input a Tensor or operation on one graph into a Tensor or operation on another graph it will fail. Everything must be on the same execution graph.

\n\n

Try removing with tf.Graph().as_default():

\n\n

TensorFlow provides you a default graph which is referred to if you do not specify a graph. You are probably using the default graph in one spot and a different graph in your training block.

\n\n

There does not seem to be a reason you are specifying a graph as default here, and most likely you are using separate graphs by accident. If you really want to specify a graph, then you probably want to pass it as a variable, not set it like this.

\n", + "system": "" + }, + { + "instruction": "ImportError: No module named 'tensorflow.python'", + "input": "", + "output": "

Uninstall tensorflow:

\n\n
pip uninstall tensorflow\n
\n\n

Then reinstall it:

\n\n
pip install tensorflow\n
\n", + "system": "" + }, + { + "instruction": "Unable to open Tensorboard in browser", + "input": "", + "output": "

Had the same problem this morning. Solved it with

\n\n
tensorboard --logdir=data/ --host localhost --port 8088\n
\n\n

Then navigate the browser to http://localhost:8088.

\n", + "system": "" + }, + { + "instruction": "Flatten batch in tensorflow", + "input": "", + "output": "

You can do it easily with tf.reshape() without knowing the batch size.

\n\n
x = tf.placeholder(tf.float32, shape=[None, 9,2])\nshape = x.get_shape().as_list()        # a list: [None, 9, 2]\ndim = numpy.prod(shape[1:])            # dim = prod(9,2) = 18\nx2 = tf.reshape(x, [-1, dim])           # -1 means \"all\"\n
\n\n

The -1 in the last line means the whole remaining dimension, no matter what the batch size is at runtime. You can see it in tf.reshape().

\n\n
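The same flatten-everything-but-the-batch-dimension logic can be checked in plain NumPy:

```python
import numpy as np

# A batch of 2 samples of shape (9, 2); flatten each sample to a vector.
x = np.arange(36, dtype=np.float32).reshape(2, 9, 2)
dim = int(np.prod(x.shape[1:]))  # 9 * 2 = 18
x2 = x.reshape(-1, dim)          # -1 infers the batch size
```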
\n\n

Update: shape = [None, 3, None]

\n\n

Thanks @kbrose. For cases where more than one dimension is undefined, we can use tf.shape() with tf.reduce_prod() instead.

\n\n
x = tf.placeholder(tf.float32, shape=[None, 3, None])\ndim = tf.reduce_prod(tf.shape(x)[1:])\nx2 = tf.reshape(x, [-1, dim])\n
\n\n

tf.shape() returns a shape Tensor which can be evaluated at runtime. The difference between x.get_shape() and tf.shape() can be seen in the doc.

\n\n

I also tried tf.contrib.layers.flatten(). It is the simplest for the first case, but it can't handle the second.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - numpy-like tensor indexing", + "input": "", + "output": "

You can actually do that now with tf.gather_nd. Let's say you have a matrix m like the following:

\n\n
| 1 2 3 4 |\n| 5 6 7 8 |\n
\n\n

And you want to build a matrix r of size, let's say, 4x2, built from elements of m, like this:

\n\n
| 3 6 |\n| 2 7 |\n| 5 3 |\n| 1 1 |\n
\n\n

Each element of r corresponds to a row and column of m, and you can have matrices rows and cols with these indices (zero-based, since we are programming, not doing math!):

\n\n
       | 0 1 |         | 2 1 |\nrows = | 0 1 |  cols = | 1 2 |\n       | 1 0 |         | 0 2 |\n       | 0 0 |         | 0 0 |\n
\n\n

Which you can stack into a 3-dimensional tensor like this:

\n\n
| | 0 2 | | 1 1 | |\n| | 0 1 | | 1 2 | |\n| | 1 0 | | 2 0 | |\n| | 0 0 | | 0 0 | |\n
\n\n

This way, you can get from m to r through rows and cols as follows:

\n\n
import numpy as np\nimport tensorflow as tf\n\nm = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])\nrows = np.array([[0, 1], [0, 1], [1, 0], [0, 0]])\ncols = np.array([[2, 1], [1, 2], [0, 2], [0, 0]])\n\nx = tf.placeholder('float32', (None, None))\nidx1 = tf.placeholder('int32', (None, None))\nidx2 = tf.placeholder('int32', (None, None))\nresult = tf.gather_nd(x, tf.stack((idx1, idx2), -1))\n\nwith tf.Session() as sess:\n    r = sess.run(result, feed_dict={\n        x: m,\n        idx1: rows,\n        idx2: cols,\n    })\nprint(r)\n
\n\n

Output:

\n\n
[[ 3.  6.]\n [ 2.  7.]\n [ 5.  3.]\n [ 1.  1.]]\n
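For comparison, NumPy's fancy indexing m[rows, cols] produces the same result as this tf.gather_nd pattern, picking element (rows[i, j], cols[i, j]) at each position:

```python
import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
rows = np.array([[0, 1], [0, 1], [1, 0], [0, 0]])
cols = np.array([[2, 1], [1, 2], [0, 2], [0, 0]])
# Pairwise indexing: r[i, j] = m[rows[i, j], cols[i, j]]
r = m[rows, cols]
```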
\n", + "system": "" + }, + { + "instruction": "How to fix AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)", + "input": "", + "output": "

Your traceback doesn't show which version of the charset-normalizer package is installed.

\n

I got a similar error when training an xgboost model using Ray. I had charset-normalizer v3.0.1 installed. Upgrading it to v3.1.0 fixed the error.

\n

Try running

\n
pip install --force-reinstall charset-normalizer==3.1.0\n
\n

or simply

\n
pip install -U --force-reinstall charset-normalizer  \n
\n

Then rerun your code and see if that does the trick!

\n", + "system": "" + }, + { + "instruction": "Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found", + "input": "", + "output": "Step 1\n
 Move to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin\n
\nStep 2\n
Rename file cusolver64_11.dll  To  cusolver64_10.dll \n
\n

\"enter

\n
 cusolver64_10.dll \n
\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Why is Tensorflow not recognizing my GPU after conda install?", + "input": "", + "output": "

August 2021 Conda install may be working now, as according to @ComputerScientist in the comments below, conda install tensorflow-gpu==2.4.1 will give cudatoolkit-10.1.243 and cudnn-7.6.5

\n

The following was written in Jan 2021 and is out of date

\n

Currently conda install tensorflow-gpu installs tensorflow v2.3.0 and does NOT install the conda cudnn or cudatoolkit packages. Installing them manually (e.g. with conda install cudatoolkit=10.1) does not seem to fix the problem either.

\n

A solution is to install an earlier version of tensorflow, which does install cudnn and cudatoolkit, then upgrade with pip

\n
conda install tensorflow-gpu=2.1\npip install tensorflow-gpu==2.3.1\n
\n

(2.4.0 uses cuda 11.0 and cudnn 8.0, however cudnn 8.0 is not in anaconda as of 16/12/2020)

\n

Edit: please also see @GZ0's answer, which links to a github discussion with a one-line solution

\n", + "system": "" + }, + { + "instruction": "How to install libcusolver.so.11", + "input": "", + "output": "

If you want a concrete solution, just find libcusolver.so.10 on your machine and create a link to libcusolver.so.11:

\n

The following command solved the issue for me:

\n
sudo ln -s /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcusolver.so.10 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcusolver.so.11\n
\n

Credit to: https://github.com/tensorflow/tensorflow/issues/43947

\n", + "system": "" + }, + { + "instruction": "Replacing placeholder for tensorflow v2", + "input": "", + "output": "

Make your code work with TF 2.0

\n

Below is sample code which you can use with TF 2.0.\nIt relies on the compatibility API\nthat is accessible as tensorflow.compat.v1, and requires disabling v2 behaviors.\nI don't know if it behaves as you expected.\nIf not, provide more explanation of what you are trying to achieve.

\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\n@tf.function\ndef construct_graph(graph_dict, inputs, outputs):\n    queue = inputs[:]\n    make_dict = {}\n    for key, val in graph_dict.items():\n        if key in inputs:\n            make_dict[key] = tf.placeholder(tf.float32, name=key)\n        else:\n            make_dict[key] = None\n    # Breadth-First search of graph starting from inputs\n    while len(queue) != 0:\n        cur = graph_dict[queue[0]]\n        for outg in cur["outgoing"]:\n            if make_dict[outg[0]]: # If discovered node, do add/multiply operation\n                make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]]))\n            else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue\n                make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1])\n                for outgo in graph_dict[outg[0]]["outgoing"]:\n                    queue.append(outgo[0])\n        queue.pop(0)\n    # Returns one data graph for each output\n    return [make_dict[x] for x in outputs]\n\ndef main():\n    graph_def = {\n        "B": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "C": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "A": {\n            "incoming": [("B", 2.0), ("C", -1.0)],\n            "outgoing": [("D", 3.0)]\n        },\n        "D": {\n            "incoming": [("A", 2.0)],\n            "outgoing": []\n        }\n    }\n    outputs = construct_graph(graph_def, ["B", "C"], ["A"])\n    print(outputs)\n\nif __name__ == "__main__":\n    main()\n
\n
[<tf.Tensor 'PartitionedCall:0' shape=<unknown> dtype=float32>]\n
\n

Migrate your code to TF 2.0

\n

While the above snippet is valid, it is still tied to TF 1.0.\nTo migrate it to TF 2.0, you have to refactor your code a little bit.

\n

Instead of returning a list of tensors, which were callable with TF 1.0, I advise you to return a list of keras.Model objects.

\n

Below is a working example:

\n
import tensorflow as tf\n\ndef construct_graph(graph_dict, inputs, outputs):\n    queue = inputs[:]\n    make_dict = {}\n    for key, val in graph_dict.items():\n        if key in inputs:\n            # Use keras.Input instead of placeholders\n            make_dict[key] = tf.keras.Input(name=key, shape=(), dtype=tf.dtypes.float32)\n        else:\n            make_dict[key] = None\n    # Breadth-First search of graph starting from inputs\n    while len(queue) != 0:\n        cur = graph_dict[queue[0]]\n        for outg in cur["outgoing"]:\n            if make_dict[outg[0]] is not None: # If discovered node, do add/multiply operation\n                make_dict[outg[0]] = tf.keras.layers.add([\n                    make_dict[outg[0]],\n                    tf.keras.layers.multiply(\n                        [[outg[1]], make_dict[queue[0]]],\n                    )],\n                )\n            else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue\n                make_dict[outg[0]] = tf.keras.layers.multiply(\n                    [make_dict[queue[0]], [outg[1]]]\n                )\n                for outgo in graph_dict[outg[0]]["outgoing"]:\n                    queue.append(outgo[0])\n        queue.pop(0)\n    # Returns one data graph for each output\n    model_inputs = [make_dict[key] for key in inputs]\n    model_outputs = [make_dict[key] for key in outputs]\n    return [tf.keras.Model(inputs=model_inputs, outputs=o) for o in model_outputs]\n\ndef main():\n    graph_def = {\n        "B": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "C": {\n            "incoming": [],\n            "outgoing": [("A", 1.0)]\n        },\n        "A": {\n            "incoming": [("B", 2.0), ("C", -1.0)],\n            "outgoing": [("D", 3.0)]\n        },\n        "D": {\n            "incoming": [("A", 2.0)],\n            "outgoing": []\n        }\n    }\n    outputs = construct_graph(graph_def, ["B", "C"], 
["A"])\n    print("Builded models:", outputs)\n    for o in outputs:\n        o.summary(120)\n        print("Output:", o((1.0, 1.0)))\n\nif __name__ == "__main__":\n    main()\n
\n

What to notice here? Placeholders are replaced with keras.Input layers, the element-wise operations become Keras layers (add, multiply), and the function returns keras.Model objects that can be called directly.

\n\n

Here is the output of the code.

\n
Builded models: [<tensorflow.python.keras.engine.training.Model object at 0x7fa0b49f0f50>]\nModel: "model"\n________________________________________________________________________________________________________________________\nLayer (type)                           Output Shape               Param #       Connected to                            \n========================================================================================================================\nB (InputLayer)                         [(None,)]                  0                                                     \n________________________________________________________________________________________________________________________\nC (InputLayer)                         [(None,)]                  0                                                     \n________________________________________________________________________________________________________________________\ntf_op_layer_mul (TensorFlowOpLayer)    [(None,)]                  0             B[0][0]                                 \n________________________________________________________________________________________________________________________\ntf_op_layer_mul_1 (TensorFlowOpLayer)  [(None,)]                  0             C[0][0]                                 \n________________________________________________________________________________________________________________________\nadd (Add)                              (None,)                    0             tf_op_layer_mul[0][0]                   \n                                                                                tf_op_layer_mul_1[0][0]                 \n========================================================================================================================\nTotal params: 0\nTrainable params: 0\nNon-trainable params: 
0\n________________________________________________________________________________________________________________________\nOutput: tf.Tensor([2.], shape=(1,), dtype=float32)\n
\n", + "system": "" + }, + { + "instruction": "Force Anaconda to install tensorflow 1.14", + "input": "", + "output": "

You can force installation of a certain version of any package found on Anaconda simply by using an = operator with the desired version attached to the package name.

\n\n

So, if you want to install tensorflow 1.14, you can run the following command:

\n\n
conda install -c conda-forge tensorflow=1.14\n
\n\n

You can replace 1.14 with any other versions. To see the available versions of tensorflow on Anaconda, you can run:

\n\n
conda search tensorflow\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Shape mismatch: The shape of labels (received (15,)) should equal the shape of logits except for the last dimension (received (5, 3))", + "input": "", + "output": "

The difference between sparse_categorical_crossentropy and categorical_crossentropy is whether your targets are one-hot encoded.

\n\n

The shape of your label batch is (5, 3), which means the labels have been one-hot encoded. So you should use the categorical_crossentropy loss function.

\n\n
model.compile(optimizer='adam',\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\n
\n", + "system": "" + }, + { + "instruction": "Why do some object detection neural networks return all zeros in OpenCV 4.1.0?", + "input": "", + "output": "

Some models expect normalized values for channel intensity. Normally, an image is represented with uint8 pixels (values ranging from 0 to 255). You would need to convert it to float32 (values from -1 to 1); otherwise, for such a model, your image would be interpreted as a blank picture (mostly all white pixels).

\n

Here's a python function that could be used to normalize the image:

\n
def processFrame(image):\n    img = cv2.resize(image, (input_width, input_height))  # input size of the detector\n    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    # Normalize pixel values if using a floating-point model\n    img_rgb = (np.float32(img_rgb) - 127.5) / 127.5\n    return img_rgb\n
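The key mapping is just a linear rescaling; a quick check of the endpoints:

```python
import numpy as np

# uint8 intensities 0, 127.5, 255 map to float32 values -1, 0, 1.
pixels = np.array([0, 127.5, 255], dtype=np.float32)
normalized = (pixels - 127.5) / 127.5
```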
\n", + "system": "" + }, + { + "instruction": "ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer`", + "input": "", + "output": "

I have found a workaround to solve the problem.\nFor anyone who encounters the same issue, you can use a Lambda layer to wrap your TensorFlow operations. This is what I did:

\n\n
from tensorflow.python.keras.layers import Lambda;\n\ndef norm(fc2):\n\n    fc2_norm = K.l2_normalize(fc2, axis = 3);\n    illum_est = tf.reduce_sum(fc2_norm, axis = (1, 2));\n    illum_est = K.l2_normalize(illum_est);\n\n    return illum_est;\n\nillum_est = Lambda(norm)(fc2);\n
\n", + "system": "" + }, + { + "instruction": "How to add report_tensor_allocations_upon_oom to RunOptions in Keras", + "input": "", + "output": "

TF1 solution:

\n

It's not as hard as it seems. What you need to know is that, according to the documentation, the **kwargs parameter passed to model.compile will be passed on to session.run

\n

So you can do something like:

\n
import tensorflow as tf\nrun_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True)\n\nmodel.compile(loss = "...", optimizer = "...", metrics = "..", options = run_opts)\n
\n

And it should be passed directly each time session.run is called.

\n

TF2:

\n

The solution above works only for tf1. For tf2, unfortunately, it appears there is no easy solution yet.

\n", + "system": "" + }, + { + "instruction": "Darknet YOLO image size", + "input": "", + "output": "

You don't have to resize it, because Darknet will do it for you!

\n\n

This means you really don't need to do that, and you can use different image sizes during your training. What you posted above is just part of the network configuration; there should be a full network definition as well. The height and width tell you the network resolution, and Darknet also keeps the aspect ratio; check e.g. this.

\n", + "system": "" + }, + { + "instruction": "Keras early stopping callback error, val_loss metric not available", + "input": "", + "output": "

If the error only occurs when you use smaller datasets, you're very likely using datasets small enough to not have a single sample in the validation set.

\n\n

Thus it cannot calculate a validation loss.

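The arithmetic behind that failure mode can be sketched in plain Python (illustrative numbers, not the Keras API):

```python
# With a tiny dataset, the validation split can round down to zero samples,
# leaving nothing to compute val_loss on.
n_samples = 4
validation_split = 0.1
n_val = int(n_samples * validation_split)  # 0 validation samples
n_train = n_samples - n_val
```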
\n", + "system": "" + }, + { + "instruction": "Feeding .npy (numpy files) into tensorflow data pipeline", + "input": "", + "output": "

It is actually possible to read directly NPY files with TensorFlow instead of TFRecords. The key pieces are tf.data.FixedLengthRecordDataset and tf.io.decode_raw, along with a look at the documentation of the NPY format. For simplicity, let's suppose that a float32 NPY file containing an array with shape (N, K) is given, and you know the number of features K beforehand, as well as the fact that it is a float32 array. An NPY file is just a binary file with a small header and followed by the raw array data (object arrays are different, but we're considering numbers now). In short, you can find the size of this header with a function like this:

\n
def npy_header_offset(npy_path):\n    with open(str(npy_path), 'rb') as f:\n        if f.read(6) != b'\\x93NUMPY':\n            raise ValueError('Invalid NPY file.')\n        version_major, version_minor = f.read(2)\n        if version_major == 1:\n            header_len_size = 2\n        elif version_major == 2:\n            header_len_size = 4\n        else:\n            raise ValueError('Unknown NPY file version {}.{}.'.format(version_major, version_minor))\n        header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size)))\n        header = f.read(header_len)\n        if not header.endswith(b'\\n'):\n            raise ValueError('Invalid NPY file.')\n        return f.tell()\n
\n

With this you can create a dataset like this:

\n
import tensorflow as tf\n\nnpy_file = 'my_file.npy'\nnum_features = ...\ndtype = tf.float32\nheader_offset = npy_header_offset(npy_file)\ndataset = tf.data.FixedLengthRecordDataset([npy_file], num_features * dtype.size, header_bytes=header_offset)\n
\n

Each element of this dataset contains a long string of bytes representing a single example. You can now decode it to obtain an actual array:

\n
dataset = dataset.map(lambda s: tf.io.decode_raw(s, dtype))\n
\n

The elements will have indeterminate shape, though, because TensorFlow does not keep track of the length of the strings. You can just enforce the shape since you know the number of features:

\n
dataset = dataset.map(lambda s: tf.reshape(tf.io.decode_raw(s, dtype), (num_features,)))\n
\n

Similarly, you can choose to perform this step after batching, or combine it in whatever way you feel like.

\n

The limitation is that you have to know the number of features in advance. It is possible to extract it from the NumPy header, though it is a bit of a pain, and in any case hardly possible from within TensorFlow, so the file names would need to be known in advance. Another limitation is that, as it stands, the solution requires you to either use only one file per dataset or files that have the same header size, although if you know that all the arrays have the same size that should actually be the case.

\n

Admittedly, if one considers this kind of approach it may just be better to have a pure binary file without headers, and either hard code the number of features or read them from a different source...

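One way to sanity-check the header-offset idea using only NumPy and an in-memory buffer (an illustration, not the reader above): the bytes after the header are exactly the raw array data, which is what the fixed-length-record reader relies on.

```python
import io
import numpy as np

# Save a small float32 array and verify the file layout: magic + header,
# then the raw C-ordered array bytes.
arr = np.arange(12, dtype=np.float32).reshape(4, 3)
buf = io.BytesIO()
np.save(buf, arr)
data = buf.getvalue()
offset = len(data) - arr.nbytes  # header length = total size minus raw payload
```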
\n", + "system": "" + }, + { + "instruction": "Tensor is not an element of this graph", + "input": "", + "output": "

Try first:

\n\n
import tensorflow as tf\ngraph = tf.get_default_graph()\n
\n\n

Then, when you need to use predict:

\n\n
with graph.as_default():\n     y = model.predict(X)\n
\n", + "system": "" + }, + { + "instruction": "difference between Tensorflow's Graph and GraphDef", + "input": "", + "output": "

Graph, or Computational Graph, is the core concept of TensorFlow for representing computation. When you use TensorFlow, you first create your own computation graph and pass it to TensorFlow. How is that done? As you may know, TensorFlow supports many front-end programming languages, like Python, C++, Java, and Go, and the core language is C++; how do the other languages transfer the Graph to the C++ core? They use a tool called protobuf, which can generate language-specific stubs, and that's where GraphDef comes from. It's a serialized version of Graph.

\n\n
\n

which one should I have to run a graph loaded from protobuf file (.pb)

\n
\n\n

You should read your *.pb file into a GraphDef and bind the GraphDef to a (default) Graph, then use a session to run the Graph for computation, like the following code:

\n\n
import tensorflow as tf\nfrom tensorflow.python.platform import gfile\nwith tf.Session() as sess:\n    model_filename ='PATH_TO_PB.pb'\n    with gfile.FastGFile(model_filename, 'rb') as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n        g_in = tf.import_graph_def(graph_def)\nLOGDIR='/logs/tests/1/'\ntrain_writer = tf.summary.FileWriter(LOGDIR)\ntrain_writer.add_graph(sess.graph)\n
\n", + "system": "" + }, + { + "instruction": "how to implement early stopping in tensorflow", + "input": "", + "output": "

Here is my implementation of early stopping; you can adapt it:

\n\n

Early stopping can be applied at certain stages of the training process, such as at the end of each epoch. Specifically, in my case, I monitor the test (validation) loss at each epoch, and after the test loss has not improved for 20 epochs (self.require_improvement = 20), the training is interrupted.

\n\n

You can set the max epochs to 10000 or 20000 or whatever you want (self.max_epochs = 10000).

\n\n
  self.require_improvement= 20\n  self.max_epochs = 10000\n
\n\n

Here is my training function where I use the early stopping:

\n\n

def train(self):

\n\n
# training data\n    train_input = self.Normalize(self.x_train)\n    train_output = self.y_train.copy()            \n#===============\n    save_sess=self.sess # this used to compare the result of previous sess with actual one\n# ===============\n  #costs history :\n    costs = []\n    costs_inter=[]\n# =================\n  #for early stopping :\n    best_cost=1000000 \n    stop = False\n    last_improvement=0\n# ================\n    n_samples = train_input.shape[0] # size of the training set\n# ===============\n   #train the mini_batches model using the early stopping criteria\n    epoch = 0\n    while epoch < self.max_epochs and stop == False:\n        #train the model on the traning set by mini batches\n        #suffle then split the training set to mini-batches of size self.batch_size\n        seq =list(range(n_samples))\n        random.shuffle(seq)\n        mini_batches = [\n            seq[k:k+self.batch_size]\n            for k in range(0,n_samples, self.batch_size)\n        ]\n\n        avg_cost = 0. 
# The average cost of mini_batches\n        step= 0\n\n        for sample in mini_batches:\n\n            batch_x = x_train.iloc[sample, :]\n            batch_y =train_output.iloc[sample, :]\n            batch_y = np.array(batch_y).flatten()\n\n            feed_dict={self.X: batch_x,self.Y:batch_y, self.is_train:True}\n\n            _, cost,acc=self.sess.run([self.train_step, self.loss_, self.accuracy_],feed_dict=feed_dict)\n            avg_cost += cost *len(sample)/n_samples \n            print('epoch[{}] step [{}] train -- loss : {}, accuracy : {}'.format(epoch,step, avg_cost, acc))\n            step += 100\n\n        #cost history since the last best cost\n        costs_inter.append(avg_cost)\n\n        #early stopping based on the validation set/ max_steps_without_decrease of the loss value : require_improvement\n        if avg_cost < best_cost:\n            save_sess= self.sess # save session\n            best_cost = avg_cost\n            costs +=costs_inter # costs history of the validatio set\n            last_improvement = 0\n            costs_inter= []\n        else:\n            last_improvement +=1\n        if last_improvement > self.require_improvement:\n            print(\"No improvement found during the ( self.require_improvement) last iterations, stopping optimization.\")\n            # Break out from the loop.\n            stop = True\n            self.sess=save_sess # restore session with the best cost\n\n        ## Run validation after every epoch : \n        print('---------------------------------------------------------')\n        self.y_validation = np.array(self.y_validation).flatten()\n        loss_valid, acc_valid = self.sess.run([self.loss_,self.accuracy_], \n                                              feed_dict={self.X: self.x_validation, self.Y: self.y_validation,self.is_train: True})\n        print(\"Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}\".format(epoch + 1, loss_valid, acc_valid))\n        
print('---------------------------------------------------------')\n\n        epoch +=1\n
\n\n

We can resume the important code here :

\n\n
def train(self):\n    ...\n    # costs history:\n    costs = []\n    costs_inter = []\n    # for early stopping:\n    best_cost = 1000000\n    stop = False\n    last_improvement = 0\n    # train the mini-batches model using the early stopping criteria\n    epoch = 0\n    while epoch < self.max_epochs and stop == False:\n        ...\n        for sample in mini_batches:\n            ...\n        # cost history since the last best cost\n        costs_inter.append(avg_cost)\n\n        # early stopping based on the validation set / max steps without decrease of the loss value: require_improvement\n        if avg_cost < best_cost:\n            save_sess = self.sess  # save session\n            best_cost = avg_cost\n            costs += costs_inter  # costs history of the validation set\n            last_improvement = 0\n            costs_inter = []\n        else:\n            last_improvement += 1\n        if last_improvement > self.require_improvement:\n            print(\"No improvement found during the last self.require_improvement iterations, stopping optimization.\")\n            # Break out from the loop.\n            stop = True\n            self.sess = save_sess  # restore session with the best cost\n        ...\n        epoch += 1\n
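The patience logic itself is framework-agnostic; here is a minimal sketch with hypothetical loss values:

```python
# Stop once the loss has failed to improve for more than `patience` epochs.
def early_stop(losses, patience=3):
    best = float('inf')
    since_improvement = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement > patience:
            return epoch  # epoch at which training would be interrupted
    return len(losses) - 1

stopped = early_stop([1.0, 0.8, 0.7, 0.9, 0.95, 0.99, 1.1], patience=3)
```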
\n\n

Hope it will help someone :).

\n", + "system": "" + }, + { + "instruction": "How to properly use tf.metrics.accuracy?", + "input": "", + "output": "

TL;DR

\n\n

The accuracy function tf.metrics.accuracy calculates how often predictions match labels, based on two local variables it creates: total and count, which are used to compute the frequency with which logits match labels.

\n\n
acc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1), \n                                  predictions=tf.argmax(logits,1))\n\nprint(sess.run([acc, acc_op]))\nprint(sess.run([acc]))\n# Output\n#[0.0, 0.66666669]\n#[0.66666669]\n
\n\n\n\n

To understand why the acc returns 0.0, go through the details below.

\n\n
\n\n

Details using a simple example:

\n\n
logits = tf.placeholder(tf.int64, [2,3])\nlabels = tf.Variable([[0, 1, 0], [1, 0, 1]])\n\nacc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1),   \n                                  predictions=tf.argmax(logits,1))\n
\n\n

Initialize the variables:

\n\n

Since metrics.accuracy creates two local variables total and count, we need to call local_variables_initializer() to initialize them.

\n\n
sess = tf.Session()\n\nsess.run(tf.local_variables_initializer())\nsess.run(tf.global_variables_initializer())\n\nstream_vars = [i for i in tf.local_variables()]\nprint(stream_vars)\n\n#[<tf.Variable 'accuracy/total:0' shape=() dtype=float32_ref>,\n# <tf.Variable 'accuracy/count:0' shape=() dtype=float32_ref>]\n
\n\n

Understanding update ops and accuracy calculation:

\n\n
print('acc:',sess.run(acc, {logits:[[0,1,0],[1,0,1]]}))\n#acc: 0.0\n\nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [0.0, 0.0]\n
\n\n

The above returns 0.0 for accuracy, as <code>total</code> and <code>count</code> are zeros, in spite of giving matching inputs.

\n\n
print('ops:', sess.run(acc_op, {logits:[[0,1,0],[1,0,1]]})) \n#ops: 1.0\n\nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [2.0, 2.0]\n
\n\n

With the new inputs, the accuracy is calculated when the update op is called. Note: since all the logits and labels match, we get an accuracy of 1.0, and the local variables <code>total</code> and <code>count</code> actually give the total correctly predicted and the total comparisons made.

\n\n

Now we call accuracy with the new inputs (not the update ops):

\n\n
print('acc:', sess.run(acc,{logits:[[1,0,0],[0,1,0]]}))\n#acc: 1.0\n
\n\n

The accuracy call doesn't update the metrics with the new inputs; it just returns the value using the two local variables. Note: the logits and labels don't match in this case. Now calling the update op again:

\n\n
print('op:',sess.run(acc_op,{logits:[[0,1,0],[0,1,0]]}))\n#op: 0.75 \nprint('[total, count]:',sess.run(stream_vars)) \n#[total, count]: [3.0, 4.0]\n
\n\n

The metrics are updated with the new inputs.

\n\n
\n\n

More information on how to use the metrics during training, and how to reset them during validation, can be found here.

\n", + "system": "" + }, + { + "instruction": "What is the advantage of using an InputLayer (or an Input) in a Keras model with Tensorflow tensors?", + "input": "", + "output": "

It would seem that InputLayer has some uses:

\n\n", + "system": "" + }, + { + "instruction": "tensorboard: command not found", + "input": "", + "output": "

You could call tensorboard as a python module like this:

\n\n
python3 -m tensorboard.main --logdir=~/my/training/dir\n
\n\n

or add this to your .profile

\n\n

alias tensorboard='python3 -m tensorboard.main'

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How does tf.get_variable work?", + "input": "", + "output": "

<code>tf.get_variable(name)</code> creates a new variable called <code>name</code> (or appends a <code>_</code> suffix if <code>name</code> already exists in the current scope) in the tensorflow graph.

\n

In your example, you're creating a python variable called var1.

\n

The name of that variable in the tensorflow graph is not <code>var1</code>, but <code>Variable:0</code>.

\n

Every node you define has its own name that you can specify or let tensorflow give a default (and always different) one. You can see the name value accessing the name property of the python variable. (ie print(var1.name)).

\n

On your second line, you're defining a Python variable var2 whose name in the tensorflow graph is var1.

\n

The script

\n
import tensorflow as tf\n\nvar1 = tf.Variable(3.,dtype=tf.float64)\nprint(var1.name)\nvar2 = tf.get_variable("var1",[],dtype=tf.float64)\nprint(var2.name)\n
\n

In fact prints:

\n
Variable:0\nvar1:0\n
\n

If you, instead, want to define a variable (node) called <code>var1</code> in the tensorflow graph and then get a reference to that node, you cannot simply use <code>tf.get_variable(&quot;var1&quot;)</code>, because it will create a new, different variable called <code>var1_1</code>.

\n

This script

\n
var1 = tf.Variable(3.,dtype=tf.float64, name="var1")\nprint(var1.name)\nvar2 = tf.get_variable("var1",[],dtype=tf.float64)\nprint(var2.name)\n
\n

prints:

\n
var1:0\nvar1_1:0\n
\n

If you want to create a reference to the node var1, you first:

\n
    \n
  1. Replace <code>tf.Variable</code> with <code>tf.get_variable</code>. Variables created with <code>tf.Variable</code> can't be shared, while the latter can.

    \n
  2. \n
  3. Know what the scope of <code>var1</code> is and allow the reuse of that scope when declaring the reference.

    \n
  4. \n
\n

Looking at the code is the best way to understand:

\n
import tensorflow as tf\n\n#var1 = tf.Variable(3.,dtype=tf.float64, name="var1")\nvar1 = tf.get_variable(initializer=tf.constant_initializer(3.), dtype=tf.float64, name="var1", shape=())\ncurrent_scope = tf.contrib.framework.get_name_scope()\nprint(var1.name)\nwith tf.variable_scope(current_scope, reuse=True):\n    var2 = tf.get_variable("var1",[],dtype=tf.float64)\n    print(var2.name)\n
\n

outputs:

\n
var1:0\nvar1:0\n
\n", + "system": "" + }, + { + "instruction": "How does one train multiple models in a single script in TensorFlow when there are GPUs present?", + "input": "", + "output": "

I think that running all models in one single script can be bad practice in the long term (see my suggestion below for a better alternative). However, if you would like to do it, here is a solution: you can encapsulate your TF session into a process with the multiprocessing module; this will make sure TF releases the session memory once the process is done. Here is a code snippet:

\n\n
from multiprocessing import Pool\nimport contextlib\ndef my_model((param1, param2, param3)): # Note the extra (), required by the pool syntax\n    < your code >\n\nnum_pool_workers=1 # can be bigger than 1, to enable parallel execution \nwith contextlib.closing(Pool(num_pool_workers)) as po: # This ensures that the processes get closed once they are done\n     pool_results = po.map_async(my_model,\n                                    ((param1, param2, param3)\n                                     for param1, param2, param3 in params_list))\n     results_list = pool_results.get()\n</code></pre>
\n\n

Note from OP: The random number generator seed does not reset automatically with the multi-processing library if you choose to use it. Details here: Using python multiprocessing with different random seed for each process

\n\n

About TF resource allocation: Usually TF allocates much more resources than it needs. Many times you can restrict each process to use a fraction of the total GPU memory, and discover through trial and error the fraction your script requires.

\n\n

You can do it with the following snippet

\n\n
gpu_memory_fraction = 0.3 # Choose this number through trial and error\ngpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction,)\nsession_config = tf.ConfigProto(gpu_options=gpu_options)\nsess = tf.Session(config=session_config, graph=graph)\n
\n\n

Note that sometimes TF increases the memory usage in order to accelerate the execution. Therefore, reducing the memory usage might make your model run slower.

\n\n

Answers to the new questions in your edit/comments:

\n\n
    \n
  1. Yes, Tensorflow will be re-allocated every time a new process is created, and cleared once a process ends.

  2. \n
  3. The for-loop in your edit should also do the job. I suggest using Pool instead, because it will enable you to run several models concurrently on a single GPU. See my notes about setting gpu_memory_fraction and \"choosing the maximal number of processes\". Also note that: (1) The Pool map runs the loop for you, so you don't need an outer for-loop once you use it. (2) In your example, you should have something like <code>mdl=get_model(args)</code> before calling train()

  4. \n
  5. Weird tuple parenthesis: Pool only accepts a single argument, therefore we use a tuple to pass multiple arguments. See multiprocessing.pool.map and function with two arguments for more details. As suggested in one answer, you can make it more readable with

    \n\n
    def train_mdl(params):\n    (x,y)=params\n    < your code >\n
  6. \n
  7. As @Seven suggested, you can use CUDA_VISIBLE_DEVICES environment variable to choose which GPU to use for your process. You can do it from within your python script using the following on the beginning of the process function (train_mdl).

    \n\n
    import os # the import can be on the top of the python script\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"{}\".format(gpu_id)\n
  8. \n
\n\n

A better practice for executing your experiments would be to isolate your training/evaluation code from the hyperparameter / model search code.\nE.g. have a script named train.py, which accepts a specific combination of hyperparameters and references to your data as arguments, and executes training for a single model.

\n\n

Then, to iterate through all the possible combinations of parameters, you can use a simple task (jobs) queue, and submit all the possible combinations of hyper-parameters as separate jobs. The task queue will feed your jobs one at a time to your machine. Usually, you can also set the queue to execute a number of processes concurrently (see details below).

\n\n

Specifically, I use task spooler, which is super easy to install and handy (it doesn't require admin privileges; details below).

\n\n

Basic usage is (see notes below about task spooler usage):

\n\n
ts <your-command>\n
\n\n

In practice, I have a separate python script that manages my experiments, set all the arguments per specific experiment and send the jobs to the ts queue.

\n\n

Here are some relevant snippets of python code from my experiments manager:

\n\n

run_bash executes a bash command

\n\n
def run_bash(cmd):\n    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')\n    out = p.stdout.read().strip()\n    return out  # This is the stdout from the shell command\n
\n\n

The next snippet sets the number of concurrent processes to be run (see note below about choosing the maximal number of processes):

\n\n
max_job_num_per_gpu = 2\nrun_bash('ts -S %d'%max_job_num_per_gpu)\n
\n\n

The next snippet iterates through a list of all combinations of hyper params / model params. Each element of the list is a dictionary, where the keys are the command line arguments for the train.py script

\n\n
for combination_dict in combinations_list:\n\n    job_cmd = 'python train.py ' + '  '.join(\n            ['--{}={}'.format(flag, value) for flag, value in combination_dict.iteritems()])\n\n    submit_cmd = \"ts bash -c '%s'\" % job_cmd\n    run_bash(submit_cmd)\n
\n\n

A note about choosing the maximal number of processes:

\n\n

If you are short on GPUs, you can use the <code>gpu_memory_fraction</code> you found to set the number of processes as <code>max_job_num_per_gpu=int(1/gpu_memory_fraction)</code>

\n\n

Notes about task spooler (ts):

\n\n
    \n
  1. You could set the number of concurrent processes to run (\"slots\") with:

    \n\n

    ts -S <number-of-slots>

  2. \n
  3. Installing <code>ts</code> doesn't require admin privileges. You can download and compile it from source with a simple <code>make</code>, add it to your path, and you're done.

  4. \n
  5. You can set up multiple queues (I use it for multiple GPUs), with

    \n\n

    TS_SOCKET=<path_to_queue_name> ts <your-command>

    \n\n

    e.g.

    \n\n

    TS_SOCKET=/tmp/socket-ts.gpu_queue_1 ts <your-command>

    \n\n

    TS_SOCKET=/tmp/socket-ts.gpu_queue_2 ts <your-command>

  6. \n
  7. See here for further usage example

  8. \n
\n\n

A note about automatically setting the path names and file names:\nOnce you separate your main code from the experiment manager, you will need an efficient way to generate file names and directory names, given the hyper-params. I usually keep my important hyper params in a dictionary and use the following function to generate a single chained string from the dictionary key-value pairs.\nHere are the functions I use for doing it:

\n\n
def build_string_from_dict(d, sep='%'):\n    \"\"\"\n     Builds a string from a dictionary.\n     Mainly used for formatting hyper-params to file names.\n     Key-value pairs are sorted by the key name.\n\n    Args:\n        d: dictionary\n\n    Returns: string\n    :param d: input dictionary\n    :param sep: key-value separator\n\n    \"\"\"\n\n    return sep.join(['{}={}'.format(k, _value2str(d[k])) for k in sorted(d.keys())])\n\n\ndef _value2str(val):\n    if isinstance(val, float): \n        # %g means: \"Floating point format.\n        # Uses lowercase exponential format if exponent is less than -4 or not less than precision,\n        # decimal format otherwise.\"\n        val = '%g' % val\n    else:\n        val = '{}'.format(val)\n    val = re.sub('\\.', '_', val)\n    return val\n
\n", + "system": "" + }, + { + "instruction": "Converting from Pandas dataframe to TensorFlow tensor object", + "input": "", + "output": "

Here is one solution I found that works on Google Colab:

\n
import pandas as pd\nimport tensorflow as tf\n#Read the file to a pandas object\ndata=pd.read_csv('filedir')\n#convert the pandas object to a tensor\ndata=tf.convert_to_tensor(data)\ntype(data)\n
\n

This will print something like:

\n
tensorflow.python.framework.ops.Tensor\n
\n", + "system": "" + }, + { + "instruction": "Hyperparameter optimization for Deep Learning Structures using Bayesian Optimization", + "input": "", + "output": "
\n

Although I am still not fully understanding the optimization\n algorithm, I feel like it will help me greatly.

\n
\n\n

First up, let me briefly explain this part.\nBayesian Optimization methods aim to deal with the exploration-exploitation trade-off in the multi-armed bandit problem. In this problem, there is an unknown function, which we can evaluate at any point, but each evaluation costs (a direct penalty or an opportunity cost), and the goal is to find its maximum using as few trials as possible. Basically, the trade-off is this: you know the function at a finite set of points (of which some are good and some are bad), so you can try an area around the current local maximum, hoping to improve it (exploitation), or you can try a completely new area of the space, which can potentially be much better or much worse (exploration), or somewhere in between.

\n\n

Bayesian Optimization methods (e.g. PI, EI, UCB), build a model of the target function using a Gaussian Process (GP) and at each step choose the most \"promising\" point based on their GP model (note that \"promising\" can be defined differently by different particular methods).

\n\n

Here's an example:

\n\n

\"sin(x)*x\"

\n\n

The true function is f(x) = x * sin(x) (black curve) on [-10, 10] interval. Red dots represent each trial, red curve is the GP mean, blue curve is the mean plus or minus one standard deviation. \nAs you can see, the GP model doesn't match the true function everywhere, but the optimizer fairly quickly identified the \"hot\" area around -8 and started to exploit it.

\n\n
\n

How do I set up the Bayesian Optimization with regards to a deep\n network?

\n
\n\n

In this case, the space is defined by (possibly transformed) hyperparameters, usually a multidimensional unit hypercube.

\n\n

For example, suppose you have three hyperparameters: a learning rate \u03b1 in [0.001, 0.01], the regularizer \u03bb in [0.1, 1] (both continuous) and the hidden layer size N in [50..100] (integer). The space for optimization is a 3-dimensional cube [0, 1]*[0, 1]*[0, 1]. Each point (p0, p1, p2) in this cube corresponds to a triple (\u03b1, \u03bb, N) by the following transformation:

\n\n
p0 -> \u03b1 = 10**(p0-3)\np1 -> \u03bb = 10**(p1-1)\np2 -> N = int(p2*50 + 50)\n
\n\n
\n

What is the function I am trying to optimize? Is it the cost of the\n validation set after N epochs?

\n
\n\n

Correct, the target function is neural network validation accuracy. Clearly, each evaluation is expensive, because it requires at least several epochs for training.

\n\n

Also note that the target function is stochastic, i.e. two evaluations on the same point may slightly differ, but it's not a blocker for Bayesian Optimization, though it obviously increases the uncertainty.

\n\n
\n

Is spearmint a good starting point for this task? Any other\n suggestions for this task?

\n
\n\n

spearmint is a good library, you can definitely work with that. I can also recommend hyperopt.

\n\n

In my own research, I ended up writing my own tiny library, basically for two reasons: I wanted to code the exact Bayesian method to use (in particular, I found that a portfolio strategy of UCB and PI converged faster than anything else in my case); plus there is another technique, called learning curve prediction, that can save up to 50% of training time (the idea is to skip the full learning cycle when the optimizer is confident the model doesn't learn as fast as in other areas). I'm not aware of any library that implements this, so I coded it myself, and in the end it paid off. If you're interested, the code is on GitHub.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Confusion Matrix in TensorBoard", + "input": "", + "output": "

Here is something I have put together that works reasonably well. I still need to adjust a few things, like the tick placements etc.

\n\n

\"Confusion

\n\n

Here is the function that will pretty much do everything for you.

\n\n
from textwrap import wrap\nimport re\nimport itertools\nimport tfplot\nimport matplotlib\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\n\n\ndef plot_confusion_matrix(correct_labels, predict_labels, labels, title='Confusion matrix', tensor_name='MyFigure/image', normalize=False):\n    '''\n    Parameters:\n        correct_labels                  : These are your true classification categories.\n        predict_labels                  : These are your predicted classification categories\n        labels                          : This is a list of labels which will be used to display the axis labels\n        title='Confusion matrix'        : Title for your matrix\n        tensor_name = 'MyFigure/image'  : Name for the output summary tensor\n\n    Returns:\n        summary: TensorFlow summary\n\n    Other items to note:\n        - Depending on the number of categories and the data, you may have to modify the figsize, font sizes etc.\n        - Currently, some of the ticks don't line up due to rotations.\n    '''\n    cm = confusion_matrix(correct_labels, predict_labels, labels=labels)\n    if normalize:\n        cm = cm.astype('float')*10 / cm.sum(axis=1)[:, np.newaxis]\n        cm = np.nan_to_num(cm, copy=True)\n        cm = cm.astype('int')\n\n    np.set_printoptions(precision=2)\n\n    fig = matplotlib.figure.Figure(figsize=(7, 7), dpi=320, facecolor='w', edgecolor='k')\n    ax = fig.add_subplot(1, 1, 1)\n    im = ax.imshow(cm, cmap='Oranges')\n\n    classes = [re.sub(r'([a-z](?=[A-Z])|[A-Z](?=[A-Z][a-z]))', r'\\1 ', x) for x in labels]\n    classes = ['\\n'.join(wrap(l, 40)) for l in classes]\n\n    tick_marks = np.arange(len(classes))\n\n    ax.set_xlabel('Predicted', fontsize=7)\n    ax.set_xticks(tick_marks)\n    c = ax.set_xticklabels(classes, fontsize=4, rotation=-90, ha='center')\n    ax.xaxis.set_label_position('bottom')\n    ax.xaxis.tick_bottom()\n\n    ax.set_ylabel('True Label', fontsize=7)\n    ax.set_yticks(tick_marks)\n    ax.set_yticklabels(classes, fontsize=4, va='center')\n    ax.yaxis.set_label_position('left')\n    ax.yaxis.tick_left()\n\n    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n        ax.text(j, i, format(cm[i, j], 'd') if cm[i, j] != 0 else '.', horizontalalignment=\"center\", fontsize=6, verticalalignment='center', color=\"black\")\n    fig.set_tight_layout(True)\n    summary = tfplot.figure.to_summary(fig, tag=tensor_name)\n    return summary\n</code></pre>
\n\n

And here is the rest of the code that you will need to call this functions.

\n\n
''' confusion matrix summaries '''\nimg_d_summary_dir = os.path.join(checkpoint_dir, \"summaries\", \"img\")\nimg_d_summary_writer = tf.summary.FileWriter(img_d_summary_dir, sess.graph)\nimg_d_summary = plot_confusion_matrix(correct_labels, predict_labels, labels, tensor_name='dev/cm')\nimg_d_summary_writer.add_summary(img_d_summary, current_step)\n
\n\n

Confuse away!!!

\n", + "system": "" + }, + { + "instruction": "Tensorflow mean squared error loss function", + "input": "", + "output": "

I would say that the third equation is different, while the 1st and 2nd are formally the same but behave differently due to numerical concerns.

\n\n

I think that the 3rd equation (using l2_loss) is just returning 1/2 of the squared Euclidean norm, that is, the sum of the element-wise square of the input, which is x=prediction-Y. You are not dividing by the number of samples anywhere. Thus, if you have a very large number of samples, the computation may overflow (returning Inf).

\n\n

The other two are formally the same, computing the mean of the element-wise squared x tensor. However, while the documentation does not specify it explicitly, it is very likely that reduce_mean uses an algorithm adapted to avoid overflowing with very large number of samples. In other words, it likely does not try to sum everything first and then divide by N, but use some kind of rolling mean that can adapt to an arbitrary number of samples without necessarily causing an overflow.

\n", + "system": "" + }, + { + "instruction": "Dot product of two vectors in tensorflow", + "input": "", + "output": "

One of the easiest ways to calculate the dot product between two tensors (a vector is a 1D tensor) is using <code>tf.tensordot</code>:

\n\n
a = tf.placeholder(tf.float32, shape=(5))\nb = tf.placeholder(tf.float32, shape=(5))\n\ndot_a_b = tf.tensordot(a, b, 1)\n\nwith tf.Session() as sess:\n    print(dot_a_b.eval(feed_dict={a: [1, 2, 3, 4, 5], b: [6, 7, 8, 9, 10]}))\n# results: 130.0\n
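For intuition, the 1-D case of tensordot is just the sum of element-wise products; plain Python reproduces the 130.0 above:

```python
# Dot product without TensorFlow: sum of element-wise products.
a = [1, 2, 3, 4, 5]
b = [6, 7, 8, 9, 10]
dot = sum(x * y for x, y in zip(a, b))
print(dot)  # 130
```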
\n", + "system": "" + }, + { + "instruction": "Show training and validation accuracy in TensorFlow using same graph", + "input": "", + "output": "

You can reuse the accuracy node, but you need to use two different SummaryWriters, one for the training runs and one for the test data. Also, you have to assign the scalar summary for accuracy to a variable.

\n\n
accuracy_summary = tf.scalar_summary(\"Training Accuracy\", accuracy)\ntf.scalar_summary(\"SomethingElse\", foo)\nsummary_op = tf.merge_all_summaries()\nsummaries_dir = '/me/mydir/'\ntrain_writer = tf.train.SummaryWriter(summaries_dir + '/train', sess.graph)\ntest_writer = tf.train.SummaryWriter(summaries_dir + '/test')\n
\n\n

Then in your training loop you have the normal training and record your summaries with the <code>train_writer</code>. In addition, you run the graph on the test set every 100th iteration and record only the accuracy summary with the <code>test_writer</code>.

\n\n
# Record train set summaries, and train\nsummary, _ = sess.run([summary_op, train_step], feed_dict=...)\ntrain_writer.add_summary(summary, n)\nif n % 100 == 0:  # Record summaries and test-set accuracy\n  summary, acc = sess.run([accuracy_summary, accuracy], feed_dict=...)\n  test_writer.add_summary(summary, n)\n  print('Accuracy at step %s: %s' % (n, acc))\n
\n\n

You can then point TensorBoard to the parent directory (summaries_dir) and it will load both data sets.

\n\n

This can be also found in the TensorFlow HowTo's https://www.tensorflow.org/versions/r0.11/how_tos/summaries_and_tensorboard/index.html

\n", + "system": "" + }, + { + "instruction": "Cannot import keras after installation", + "input": "", + "output": "

Diagnose

\n\n

If you have pip installed (you should have it if you use Python 3.5), list the installed Python packages, like this:

\n\n
$ pip list | grep -i keras\nKeras (1.1.0)\n
\n\n

If you don\u2019t see Keras, it means that the previous installation failed or is incomplete (this lib has these dependencies: numpy (1.11.2), PyYAML (3.12), scipy (0.18.1), six (1.10.0), and Theano (0.8.2)).

\n\n

Consult the pip.log to see what\u2019s wrong.

\n\n

You can also display your Python path like this:

\n\n
$ python3 -c 'import sys, pprint; pprint.pprint(sys.path)'\n['',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python35.zip',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/plat-darwin',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/lib-dynload',\n '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages']\n
\n\n

Make sure the Keras library appears in the /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages path (the path is different on Ubuntu).

\n\n

If not, try to uninstall it, and retry the installation:

\n\n
$ pip uninstall Keras\n
\n\n

Use a virtualenv

\n\n

It\u2019s a bad idea to use and pollute your system-wide Python. I recommend using a virtualenv (see this guide).

\n\n

The best usage is to create a virtualenv directory (in your home, for instance), and store your virtualenvs in:

\n\n
cd virtualenv/\nvirtualenv -p python3.5 py-keras\nsource py-keras/bin/activate\npip install -q -U pip setuptools wheel\n
\n\n

Then install Keras:

\n\n
pip install keras\n
\n\n

You get:

\n\n
$ pip list\nKeras (1.1.0)\nnumpy (1.11.2)\npip (8.1.2)\nPyYAML (3.12)\nscipy (0.18.1)\nsetuptools (28.3.0)\nsix (1.10.0)\nTheano (0.8.2)\nwheel (0.30.0a0)\n
\n\n

But, you also need to install extra libraries, like Tensorflow:

\n\n
$ python -c \"import keras\"\nUsing TensorFlow backend.\nTraceback (most recent call last):\n  ...\nImportError: No module named 'tensorflow'\n
\n\n

The installation guide of TensorFlow is here: https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#pip-installation

\n", + "system": "" + }, + { + "instruction": "How to perform k-fold cross validation with tensorflow?", + "input": "", + "output": "

I know this question is old but in case someone is looking to do something similar, expanding on ahmedhosny's answer:

\n\n

The new tensorflow datasets API has the ability to create dataset objects using python generators, so along with scikit-learn's KFold one option can be to create a dataset from the KFold.split() generator:

\n\n
import numpy as np\n\nfrom sklearn.model_selection import LeaveOneOut,KFold\n\nimport tensorflow as tf\nimport tensorflow.contrib.eager as tfe\ntf.enable_eager_execution()\n\nfrom sklearn.datasets import load_iris\ndata = load_iris()\nX=data['data']\ny=data['target']\n\ndef make_dataset(X_data,y_data,n_splits):\n\n    def gen():\n        for train_index, test_index in KFold(n_splits).split(X_data):\n            X_train, X_test = X_data[train_index], X_data[test_index]\n            y_train, y_test = y_data[train_index], y_data[test_index]\n            yield X_train,y_train,X_test,y_test\n\n    return tf.data.Dataset.from_generator(gen, (tf.float64,tf.float64,tf.float64,tf.float64))\n\ndataset=make_dataset(X,y,10)\n
\n\n

Then one can iterate through the dataset either in graph-based tensorflow or using eager execution. Using eager execution:

\n\n
for X_train,y_train,X_test,y_test in tfe.Iterator(dataset):\n    ....\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow and Multiprocessing: Passing Sessions", + "input": "", + "output": "

You can't use Python multiprocessing to pass a TensorFlow <code>Session</code> into a <code>multiprocessing.Pool</code> in the straightforward way, because the <code>Session</code> object can't be pickled (it's fundamentally not serializable because it may manage GPU memory and similar state).

\n\n

I'd suggest parallelizing the code using actors, which are essentially the parallel computing analog of \"objects\" and are used to manage state in the distributed setting.

\n\n

Ray is a good framework for doing this. You can define a Python class which manages the TensorFlow Session and exposes a method for running your simulation.

\n\n
import ray\nimport tensorflow as tf\n\nray.init()\n\n@ray.remote\nclass Simulator(object):\n    def __init__(self):\n        self.sess = tf.Session()\n        self.simple_model = tf.constant([1.0])\n\n    def simulate(self):\n        return self.sess.run(self.simple_model)\n\n# Create two actors.\nsimulators = [Simulator.remote() for _ in range(2)]\n\n# Run two simulations in parallel.\nresults = ray.get([s.simulate.remote() for s in simulators])\n
\n\n

Here are a few more examples of parallelizing TensorFlow with Ray.

\n\n

See the Ray documentation. Note that I'm one of the Ray developers.

\n", + "system": "" + }, + { + "instruction": "Is there a way of determining how much GPU memory is in use by TensorFlow?", + "input": "", + "output": "

(1) There is some limited support with Timeline for logging memory allocations. Here is an example for its usage:

\n\n
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n    run_metadata = tf.RunMetadata()\n    summary, _ = sess.run([merged, train_step],\n                          feed_dict=feed_dict(True),\n                          options=run_options,\n                          run_metadata=run_metadata)\n    train_writer.add_run_metadata(run_metadata, 'step%03d' % i)\n    train_writer.add_summary(summary, i)\n    print('Adding run metadata for', i)\n    tl = timeline.Timeline(run_metadata.step_stats)\n    print(tl.generate_chrome_trace_format(show_memory=True))\n    trace_file = tf.gfile.Open(name='timeline', mode='w')\n    trace_file.write(tl.generate_chrome_trace_format(show_memory=True))\n
\n\n

You can give this code a try with the MNIST example (mnist with summaries)

\n\n

This will generate a tracing file named <code>timeline</code>, which you can open with <code>chrome://tracing</code>. Note that this only gives approximate GPU memory usage statistics. It basically simulates a GPU execution, but doesn't have access to the full graph metadata. It also can't know how many variables have been assigned to the GPU.

\n\n

(2) For a very coarse measure of GPU memory usage, nvidia-smi will show the total device memory usage at the time you run the command.

\n\n

nvprof can show the on-chip shared memory usage and register usage at the CUDA kernel level, but doesn't show the global/device memory usage.

\n\n

Here is an example command: nvprof --print-gpu-trace matrixMul

\n\n

And more details here:\nhttp://docs.nvidia.com/cuda/profiler-users-guide/#abstract

\n", + "system": "" + }, + { + "instruction": "Google Colab error: Import "tensorflow.keras.models" could not be resolved(reportMissingImports)", + "input": "", + "output": "

This worked for me:

\n
from tensorflow import keras\nfrom keras.layers import Dense\nfrom keras.models import Sequential, load_model\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between MaxPool and MaxPooling layers in Keras?", + "input": "", + "output": "

They are basically the same thing (i.e. aliases of each other). For future readers who might want to know how this could be determined: go to the documentation page of the layer (you can use the list here) and click on "View aliases". This is then accompanied by a blue plus sign (+).

\n

For example, if you go to MaxPool2D documentation and do this, you will find MaxPooling2D in the list of aliases of this layer as follow:

\n

\"MaxPool

\n", + "system": "" + }, + { + "instruction": "what does class_mode parameter in Keras image_gen.flow_from_directory() signify?", + "input": "", + "output": "

class_mode: One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned: - "categorical" will be 2D one-hot encoded labels, - "binary" will be 1D binary labels, "sparse" will be 1D integer labels, - "input" will be images identical to input images (mainly used to work with autoencoders). - If None, no labels are returned (the generator will only yield batches of image data, which is useful to use with model.predict_generator()). Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly.

\n", + "system": "" + }, + { + "instruction": "Issue with add method in tensorflow : AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'", + "input": "", + "output": "

For me, the fix was importing

\n\n
from tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Conv2D, Flatten, Dense\n
\n\n

instead of

\n\n
from keras import Sequential\nfrom keras.layers import Conv2D, Flatten, Dense\n
\n\n

There seem to be some weird compatibility issues between keras and tensorflow.keras.

\n", + "system": "" + }, + { + "instruction": "How to downgrade tensorflow version in colab?", + "input": "", + "output": "

You can downgrade Tensorflow to a previous version without GPU support on Google Colab. I ran:

\n
!pip install tensorflow==1.14.0\nimport tensorflow as tf\nprint(tf.__version__)\n
\n

which initially returned

\n
2.0.0-dev20190130\n
\n

but when I returned to it after a few hours, I got the version I requested:

\n
1.14.0\n
\n

Trying to downgrade to a version with GPU support:

\n
!pip install tensorflow-gpu==1.14.0\n
\n

requires restarting the runtime and fails, as importing import tensorflow as tf returns:

\n
\nImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory\n
\n

Update

\n

When the import fails you can always downgrade CUDA to version 9.0 using following commands

\n
!wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb\n!dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb\n!apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub\n!apt-get update\n!apt-get install cuda=9.0.176-1\n
\n

You can check the version of CUDA by running:

\n
!nvcc --version\n
\n

Second update

\n

This code now seems to fail, see the follow-up question at How to downgrade to tensorflow-gpu version 1.12 in google colab

\n", + "system": "" + }, + { + "instruction": "Python: Neural Network - TypeError: 'History' object is not subscriptable", + "input": "", + "output": "

A call to model.fit() returns a History object that has a member history, which is of type dict.

\n\n

So you can replace :

\n\n
model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= \n[early_stopping_monitor], verbose=False)\n
\n\n

with

\n\n
history2 = model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= \n[early_stopping_monitor], verbose=False)\n
\n\n

Similarly for other models.

\n\n

and then you can use :

\n\n
plt.plot(history1.history['val_loss'], 'r', history2.history['val_loss'], 'b', \nhistory3.history['val_loss'], 'g')\n
\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'tensorflow.examples'", + "input": "", + "output": "

I think you should use something like the following on TensorFlow 2:

\n\n
import tensorflow_datasets\nmnist = tensorflow_datasets.load('mnist')\n
\n", + "system": "" + }, + { + "instruction": "Is there a better way to guess possible unknown variables without brute force than I am doing? Machine learning?", + "input": "", + "output": "

I hate to let you down but I really don't think a neural net will help at all for this problem, and IMO the best answer to your question is the advice \"don't waste your time trying neural nets\".

\n\n

An easy rule of thumb for deciding whether or not neural networks are applicable is to think, \"can an average adult human solve this problem reasonably well in a few seconds?\" For problems like \"what's in this image\", \"respond to this question\", or \"transcribe this audio clip\", the answer is yes. But for your problem, the answer is a most definite no.

\n\n

Neural networks have limitations, and one is that they don't deal well with highly logical problems. This is because the answers are generally not \"smooth\". If you take an image and slightly change a handful of pixels, the content of the image is still the same. If you take an audio clip and insert a few milliseconds of noise, a neural net will probably still be able to figure out what's said. But in your problem, change a single day's \"total basket value\" by only 1 unit, and your answer(s) will drastically change.

\n\n

It seems that the only way to solve your problem is with a \"classical\" algorithmic approach. As currently stated, there might not be any algorithm better than brute force, and it might not be possible to rule out much. For example, what if every day has the property that all fruits are priced the same? The count of each fruit can vary, as long as the total number of fruits is fixed, so the number of possibilities is still exponential in the number of fruits. If your goal is to \"produce a list of possibilities\", then no algorithm can be better than exponential time since this list can be exponentially large in some cases.

\n\n

It's interesting that part of your problem can be reduced to an integer linear program (ILP). Consider a single day, where you are given the basket total B and each fruit's cost c_i, for i=1 through i=n (if n is the total number of distinct fruits). Let's say the prices are large, so it's not obvious that you can \"fill up\" the basket with unit cost fruits. It can be hard in this situation to even find a single solution. Formulated as an ILP, this is equivalent to finding integer values of x_i such that:

\n\n
sum_i (x_i*c_i) = x_1*c_1 + x_2*c_2 + ... + x_n*c_n = B\n
\n\n

and x_i >= 0 for all 1 <= i <= n (can't have negative fruits), and sum_i x_i <= 100 (can have at most 100 fruits).

\n\n
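As an illustration of this formulation, here is a minimal brute-force enumeration of the single-day problem (not an actual ILP solver - the function name and the example prices are made up):

```python
def enumerate_baskets(costs, total, max_fruits=100):
    """Find all non-negative integer vectors x with
    sum(x[i] * costs[i]) == total and sum(x) <= max_fruits."""
    solutions = []

    def recurse(i, remaining, count, partial):
        if i == len(costs):
            if remaining == 0:
                solutions.append(tuple(partial))
            return
        # try every feasible count of fruit i
        for x in range(min(remaining // costs[i], max_fruits - count) + 1):
            recurse(i + 1, remaining - x * costs[i], count + x, partial + [x])

    recurse(0, total, 0, [])
    return solutions

# three fruit prices, basket total of 10 -> four possible baskets
print(enumerate_baskets([2, 3, 5], 10))
```

This is exponential in the worst case, which is exactly why an ILP solver (and extra constraints) helps for realistic sizes.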

The good news is that decent ILP solvers exist -- you can just hand over the above formulas and the solver will do its best to find a single solution. You can even add an \"objective function\" that the solver will maximize or minimize -- minimizing sum_i x_i has the effect of minimizing the total number of fruits in the basket. The bad news is that ILP is NP-complete, so there is almost no hope of finding an efficient solution for a large number of fruits (which equals the number of variables x_i).

\n\n

I think the best approach forward is to try the ILP approach, but also introduce some more constraints on the scenario. For example, what if all fruits had a different prime number cost? This has the nice property that if you find one solution, you can enumerate a bunch of other related solutions. If an apple costs m and an orange costs n, where m and n are relatively prime, then you can \"trade\" n*x apples for m*x oranges without changing the basket total, for any integer x>0 (so long as you have enough apples and oranges to begin with). If you choose all fruits to have different prime number costs, then all of the costs will be pairwise relatively prime. I think this approach will result in relatively few solutions for a given day.

\n\n
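A tiny numeric check of this trading property (the prices and counts here are made-up examples):

```python
m, n = 3, 5              # apple and orange prices, relatively prime
apples, oranges = 10, 2  # one known solution for some basket total
total = apples * m + oranges * n

# trade n apples for m oranges: the basket total is unchanged
traded_total = (apples - n) * m + (oranges + m) * n
print(total, traded_total)  # 40 40
```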

You might also consider other constraints, such as \"there can't be more than 5 fruits of a single kind in the basket\" (add the constraint x_i <= 5), or \"there can be at most 5 distinct kinds of fruits in the basket\" (but this is harder to encode as an ILP constraint). Adding these kinds of constraints will make it easier for the ILP solver to find a solution.

\n\n

Of course the above discussion is focused on a single day, and you have multiple days' worth of data. If the hardest part of the problem is finding any solution for any day at all (which happens if your prices are large), then using an ILP solver will give you a large boost. If solutions are easy to find (which happens if you have a very-low-cost fruit that can \"fill up\" your basket), and the hardest part of the problem is finding solutions that are \"consistent\" across multiple days, then the ILP approach might not be the best fit, and in general this problem seems much more difficult to reason about.

\n\n

Edit: and as mentioned in the comments, for some interpretations of the \"10% change\" constraint, you can even encode the entire multi-day problem as an ILP.

\n", + "system": "" + }, + { + "instruction": "Get the bounding box coordinates in the TensorFlow object detection API tutorial", + "input": "", + "output": "
\n

I tried printing output_dict['detection_boxes'] but I am not sure what\n the numbers mean

\n
\n\n

You can check out the code for yourself. visualize_boxes_and_labels_on_image_array is defined here.

\n\n

Note that you are passing use_normalized_coordinates=True. If you trace the function calls, you will see your numbers [ 0.56213236, 0.2780568 , 0.91445708, 0.69120586] etc. are the values [ymin, xmin, ymax, xmax] where the image coordinates:

\n\n
(left, right, top, bottom) = (xmin * im_width, xmax * im_width, \n                              ymin * im_height, ymax * im_height)\n
\n\n

are computed by the function:

\n\n
def draw_bounding_box_on_image(image,\n                           ymin,\n                           xmin,\n                           ymax,\n                           xmax,\n                           color='red',\n                           thickness=4,\n                           display_str_list=(),\n                           use_normalized_coordinates=True):\n  \"\"\"Adds a bounding box to an image.\n  Bounding box coordinates can be specified in either absolute (pixel) or\n  normalized coordinates by setting the use_normalized_coordinates argument.\n  Each string in display_str_list is displayed on a separate line above the\n  bounding box in black text on a rectangle filled with the input 'color'.\n  If the top of the bounding box extends to the edge of the image, the strings\n  are displayed below the bounding box.\n  Args:\n    image: a PIL.Image object.\n    ymin: ymin of bounding box.\n    xmin: xmin of bounding box.\n    ymax: ymax of bounding box.\n    xmax: xmax of bounding box.\n    color: color to draw bounding box. Default is red.\n    thickness: line thickness. Default value is 4.\n    display_str_list: list of strings to display in box\n                      (each to be shown on its own line).\n    use_normalized_coordinates: If True (default), treat coordinates\n      ymin, xmin, ymax, xmax as relative to the image.  Otherwise treat\n      coordinates as absolute.\n  \"\"\"\n  draw = ImageDraw.Draw(image)\n  im_width, im_height = image.size\n  if use_normalized_coordinates:\n    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n                                  ymin * im_height, ymax * im_height)\n
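As a quick check, the numbers from the question can be converted with the same formula; the 800x600 image size here is just a made-up example:

```python
def box_to_pixels(box, im_width, im_height):
    # box is [ymin, xmin, ymax, xmax] in normalized coordinates
    ymin, xmin, ymax, xmax = box
    # returns (left, right, top, bottom) in pixels
    return (xmin * im_width, xmax * im_width,
            ymin * im_height, ymax * im_height)

left, right, top, bottom = box_to_pixels(
    [0.56213236, 0.2780568, 0.91445708, 0.69120586], 800, 600)
print(left, right, top, bottom)
```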
\n", + "system": "" + }, + { + "instruction": "ValueError: Variable rnn/basic_rnn_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?", + "input": "", + "output": "

Does this happen when you run the model for the first time (upon opening a new python console)?

\n\n

If not, you need to clear your computational graph. You can do that by putting this line at the beginning of your script.

\n\n
tf.reset_default_graph()\n
\n", + "system": "" + }, + { + "instruction": "Building a mutlivariate, multi-task LSTM with Keras", + "input": "", + "output": "

So:

\n\n
\n

Firstly, how would I slice up my data for the batches? Since I have\n three full years, does it make sense to simply push through three\n batches, each time of size one year? Or does it make more sense to\n make smaller batches (say 30 days) and also to using sliding windows?\n I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30\n days each, each time sliding with 5 days? Or is this not really the\n way LSTMs should be used? (Note that there is quite a bit of\n seasonality in the data, to I need to catch that kind of long-term\n trend as well).

\n
\n\n

Honestly - modeling such data is really hard. First of all - I wouldn't advise you to use LSTMs, as they are designed to capture a slightly different kind of data (e.g. NLP or speech, where it's really important to model long-term dependencies - not seasonality), and they need a lot of data in order to be trained. I would rather advise you to use either GRU or SimpleRNN, which are way easier to train and should be better for your task.

\n\n

When it comes to batching - I would definitely advise you to use a fixed-window technique, as it ends up producing many more data points than feeding in a whole year or a whole month. Try setting the number of days as a meta-parameter, which you can optimize by trying different values in training and choosing the most suitable one.

\n\n
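To see how much more data the fixed-window approach yields, a quick count (assuming three years of roughly 1096 daily steps and a 30-day window):

```python
def n_windows(series_len, window, stride=1):
    # number of sliding windows of a given length over the series
    return (series_len - window) // stride + 1

print(n_windows(1096, 30))     # 1067 windows with stride 1, vs. 3 yearly batches
print(n_windows(1096, 30, 5))  # 214 windows when sliding by 5 days
```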

When it comes to seasonality - of course, this is a case but:

\n\n\n\n

What I advise you to do instead is:

\n\n\n\n
\n

Secondly, does it make sense to use return_sequences=True here? In\n other words, I keep my Y data as is (50, 1096, 3) so that (as far as\n I've understood it) there is a prediction at every time step for which\n a loss can be calculated against the target data? Or would I be better\n off with return_sequences=False, so that only the final value of each\n batch is used to evaluate the loss (i.e. if using yearly batches, then\n in 2016 for product 1, we evaluate against the Dec 2016 value of\n (1,1,1)).

\n
\n\n

Using return_sequences=True might be useful but only in following cases:

\n\n
    \n
  1. When a given LSTM (or another recurrent layer) will be followed by yet another recurrent layer.
  2. When you feed a shifted original series as the output, by which you are simultaneously learning a model over different time windows, etc.
\n\n

The approach described in the second point might be interesting, but keep in mind that it could be a little hard to implement, as you will need to rewrite your model in order to obtain a production result. What also might be harder is that you'll need to test your model against many types of time instabilities - and such an approach might make this totally unfeasible.

\n\n
\n

Thirdly how should I deal with the 50 different products? They are\n different, but still strongly correlated and we've seen with other\n approaches (for example an MLP with simple time-windows) that the\n results are better when all products are considered in the same model.\n Some ideas that are currently on the table are:

\n \n \n
\n\n

I would definitely go for a first choice but before providing a detailed explanation I will discuss disadvantages of 2nd and 3rd ones:

\n\n\n\n

Before getting to my choice - let's discuss yet another issue - redundancies in your dataset. I guess that you have 3 kinds of features:

\n\n\n\n

Now you have a table of size (timesteps, m * n, products). I would transform it into a table of shape (timesteps, products * m + n), as the general features are the same for all products. This will save you a lot of memory and also make it feasible to feed to a recurrent network (keep in mind that recurrent layers in keras have only one feature dimension - whereas you had two: a product one and a feature one).

\n\n

So why is the first approach the best in my opinion? Because it takes advantage of many interesting dependencies in the data. Of course - this might harm the training process - but there is an easy trick to overcome this: dimensionality reduction. You could e.g. train PCA on your 150-dimensional vector and reduce its size to a much smaller one - this way your dependencies are modeled by PCA and your output has a much more feasible size.

\n\n
\n

Fourthly, how do I deal with validation data? Normally I would just\n keep out a randomly selected sample to validate against, but here we\n need to keep the time ordering in place. So I guess the best is to\n just keep a few months aside?

\n
\n\n

This is a really important question. From my experience - you need to test your solution against many types of instabilities in order to be sure that it works fine. So a few rules which you should keep in mind:

\n\n\n\n

The last point might be a little bit vague - so to provide you some examples:

\n\n\n\n

Of course - you could try yet another hold outs.

\n\n
\n

Fifthly, and this is the part that is probably the most unclear to me\n - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three\n batches (each time up to Nov) with the goal of training the model to\n predict the next value (Dec 2014, Dec 2015, Dec 2016). If I want to\n use these results in 2017, how does this actually work? If I\n understood it correctly, the only thing I can do in this instance is\n to then feed the model all the data points for Jan to Nov 2017 and it\n will give me back a prediction for Dec 2017. Is that correct? However,\n if I were to use return_sequences=True, then trained on all data up to\n Dec 2016, would I then be able to get a prediction for Jan 2017 just\n by giving the model the features observed at Jan 2017? Or do I need to\n also give it the 12 months before Jan 2017? What about Feb 2017, do I\n in addition need to give the value for 2017, plus a further 11 months\n before that? (If it sounds like I'm confused, it's because I am!)

\n
\n\n

This depends on how you've built your model:

\n\n\n\n

Here - more info on what kind of model you've chosen is needed.

\n", + "system": "" + }, + { + "instruction": "What is the difference between [], [None], None and () for the shape of a placeholder?", + "input": "", + "output": "

TensorFlow uses arrays rather than tuples. It converts tuples to arrays. Therefore [] and () are equivalent.

\n\n

Now, consider this code example:

\n\n
x = tf.placeholder(dtype=tf.int32, shape=[], name=\"foo1\")\ny = tf.placeholder(dtype=tf.int32, shape=[None], name=\"foo2\")\nz = tf.placeholder(dtype=tf.int32, shape=None, name=\"foo3\")\n\nval1 = np.array((1, 2, 3))\nval2 = 45\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n\n    #print(sess.run(x, feed_dict = {x: val1}))  # Fails\n    print(sess.run(y, feed_dict = {y: val1}))\n    print(sess.run(z, feed_dict = {z: val1}))\n\n    print(sess.run(x, feed_dict = {x: val2}))\n    #print(sess.run(y, feed_dict = {y: val2}))  # Fails\n    print(sess.run(z, feed_dict = {z: val2}))\n
\n\n

As can be seen, a placeholder with shape [] takes a single scalar value directly. A placeholder with shape [None] takes a 1-dimensional array, and a placeholder with shape None can take a value of any shape.

\n", + "system": "" + }, + { + "instruction": "Keras input explanation: input_shape, units, batch_size, dim, etc", + "input": "", + "output": "

Units:

\n\n
\n

The amount of \"neurons\", or \"cells\", or whatever the layer has inside it.

\n
\n\n

It's a property of each layer, and yes, it's related to the output shape (as we will see later). In your picture, except for the input layer, which is conceptually different from other layers, you have:

\n\n\n\n

Shapes

\n\n

Shapes are consequences of the model's configuration. Shapes are tuples representing how many elements an array or tensor has in each dimension.

\n\n

Ex: a shape (30,4,10) means an array or tensor with 3 dimensions, containing 30 elements in the first dimension, 4 in the second and 10 in the third, totaling 30*4*10 = 1200 elements or numbers.

\n\n
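The element count is just the product of the dimensions:

```python
from math import prod

shape = (30, 4, 10)
print(prod(shape))  # 1200
```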

The input shape

\n\n

What flows between layers are tensors. Tensors can be seen as matrices, with shapes.

\n\n

In Keras, the input layer itself is not a layer, but a tensor. It's the starting tensor you send to the first hidden layer. This tensor must have the same shape as your training data.

\n\n

Example: if you have 30 images of 50x50 pixels in RGB (3 channels), the shape of your input data is (30,50,50,3). Then your input layer tensor, must have this shape (see details in the \"shapes in keras\" section).

\n\n

Each type of layer requires the input with a certain number of dimensions:

\n\n\n\n

Now, the input shape is the only one you must define, because your model cannot know it. Only you know that, based on your training data.

\n\n

All the other shapes are calculated automatically based on the units and particularities of each layer.

\n\n

Relation between shapes and units - The output shape

\n\n

Given the input shape, all other shapes are results of layers calculations.

\n\n

The \"units\" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer).

\n\n

Each type of layer works in a particular way. Dense layers have output shape based on \"units\", convolutional layers have output shape based on \"filters\". But it's always based on some layer property. (See the documentation for what each layer outputs)

\n\n

Let's show what happens with \"Dense\" layers, which is the type shown in your graph.

\n\n

A dense layer has an output shape of (batch_size,units). So, yes, units, the property of the layer, also defines the output shape.

\n\n\n\n

Weights

\n\n

Weights will be entirely automatically calculated based on the input and the output shapes. Again, each type of layer works in a certain way. But the weights will be a matrix capable of transforming the input shape into the output shape by some mathematical operation.

\n\n

In a dense layer, weights multiply all inputs. It's a matrix with one column per input and one row per unit, but this is often not important for basic usage.

\n\n
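A plain-Python sketch of what a dense layer computes (ignoring bias and activation; the weight matrix has one row per unit and one column per input, as described above, and the numbers are arbitrary):

```python
def dense_forward(x_row, W):
    # W: one row per unit, one column per input feature
    return [sum(w * x for w, x in zip(w_row, x_row)) for w_row in W]

x = [1.0, 2.0, 3.0]    # 3 input features
W = [[0.1, 0.5, 0.9],  # 4 units x 3 inputs
     [0.2, 0.6, 1.0],
     [0.3, 0.7, 1.1],
     [0.4, 0.8, 1.2]]
print(dense_forward(x, W))  # 4 output values: 3 features in -> 4 units out
```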

In the image, if each arrow had a multiplication number on it, all numbers together would form the weight matrix.

\n\n

Shapes in Keras

\n\n

Earlier, I gave an example of 30 images, 50x50 pixels and 3 channels, having an input shape of (30,50,50,3).

\n\n

Since the input shape is the only one you need to define, Keras will demand it in the first layer.

\n\n

But in this definition, Keras ignores the first dimension, which is the batch size. Your model should be able to deal with any batch size, so you define only the other dimensions:

\n\n
input_shape = (50,50,3)\n    #regardless of how many images I have, each image has this shape        \n
\n\n

Optionally, or when it's required by certain kinds of models, you can pass the shape containing the batch size via batch_input_shape=(30,50,50,3) or batch_shape=(30,50,50,3). This limits your training possibilities to this unique batch size, so it should be used only when really required.

\n\n

Either way you choose, tensors in the model will have the batch dimension.

\n\n

So, even if you used input_shape=(50,50,3), when keras sends you messages, or when you print the model summary, it will show (None,50,50,3).

\n\n

The first dimension is the batch size, it's None because it can vary depending on how many examples you give for training. (If you defined the batch size explicitly, then the number you defined will appear instead of None)

\n\n

Also, in advanced works, when you actually operate directly on the tensors (inside Lambda layers or in the loss function, for instance), the batch size dimension will be there.

\n\n\n\n

Dim

\n\n

And in the end, what is dim?

\n\n

If your input shape has only one dimension, you don't need to give it as a tuple, you give input_dim as a scalar number.

\n\n

So, in your model, where your input layer has 3 elements, you can use any of these two:

\n\n- input_shape=(3,) (the comma is necessary when you have only one dimension)\n- input_dim = 3\n\n

But when dealing directly with the tensors, often dim will refer to how many dimensions a tensor has. For instance a tensor with shape (25,10909) has 2 dimensions.

\n\n
\n\n

Defining your image in Keras

\n\n

Keras has two ways of doing it: Sequential models, or the functional API Model. I don't like using the Sequential model; you will have to move past it anyway, because eventually you will want models with branches.

\n\n

PS: here I ignored other aspects, such as activation functions.

\n\n

With the Sequential model:

\n\n
from keras.models import Sequential  \nfrom keras.layers import *  \n\nmodel = Sequential()    \n\n#start from the first hidden layer, since the input is not actually a layer   \n#but inform the shape of the input, with 3 elements.    \nmodel.add(Dense(units=4,input_shape=(3,))) #hidden layer 1 with input\n\n#further layers:    \nmodel.add(Dense(units=4)) #hidden layer 2\nmodel.add(Dense(units=1)) #output layer   \n
\n\n

With the functional API Model:

\n\n
from keras.models import Model   \nfrom keras.layers import * \n\n#Start defining the input tensor:\ninpTensor = Input((3,))   \n\n#create the layers and pass them the input tensor to get the output tensor:    \nhidden1Out = Dense(units=4)(inpTensor)    \nhidden2Out = Dense(units=4)(hidden1Out)    \nfinalOut = Dense(units=1)(hidden2Out)   \n\n#define the model's start and end points    \nmodel = Model(inpTensor,finalOut)\n
\n\n

Shapes of the tensors

\n\n

Remember you ignore batch sizes when defining layers:

\n\n\n", + "system": "" + }, + { + "instruction": "Understanding Keras Long Short Term Memories (LSTMs)", + "input": "", + "output": "

As a complement to the accepted answer, this answer shows keras behaviors and how to achieve each picture.

\n\n

General Keras behavior

\n\n

The standard keras internal processing is always a many to many as in the following picture (where I used features=2, pressure and temperature, just as an example):

\n\n

\"ManyToMany\"

\n\n

In this image, I increased the number of steps to 5, to avoid confusion with the other dimensions.

\n\n

For this example:

\n\n\n\n

Our input array should then be something shaped as (N,5,2):

\n\n
        [     Step1      Step2      Step3      Step4      Step5\nTank A:    [[Pa1,Ta1], [Pa2,Ta2], [Pa3,Ta3], [Pa4,Ta4], [Pa5,Ta5]],\nTank B:    [[Pb1,Tb1], [Pb2,Tb2], [Pb3,Tb3], [Pb4,Tb4], [Pb5,Tb5]],\n  ....\nTank N:    [[Pn1,Tn1], [Pn2,Tn2], [Pn3,Tn3], [Pn4,Tn4], [Pn5,Tn5]],\n        ]\n
\n\n

Inputs for sliding windows

\n\n

Often, LSTM layers are supposed to process the entire sequence. Dividing it into windows may not be the best idea. The layer has internal states about how a sequence is evolving as it steps forward. Windows eliminate the possibility of learning long sequences, limiting all sequences to the window size.

\n\n

In windows, each window is part of a long original sequence, but by Keras they will be seen each as an independent sequence:

\n\n
        [     Step1    Step2    Step3    Step4    Step5\nWindow  A:  [[P1,T1], [P2,T2], [P3,T3], [P4,T4], [P5,T5]],\nWindow  B:  [[P2,T2], [P3,T3], [P4,T4], [P5,T5], [P6,T6]],\nWindow  C:  [[P3,T3], [P4,T4], [P5,T5], [P6,T6], [P7,T7]],\n  ....\n        ]\n
\n\n

Notice that in this case, you have initially only one sequence, but you're dividing it in many sequences to create windows.

\n\n
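For example, slicing one long sequence into such windows can be sketched as (pure Python; the names and the 7-step toy sequence are illustrative):

```python
def make_windows(sequence, window_size, stride=1):
    # sequence: list of per-step feature vectors, e.g. [[P1,T1], [P2,T2], ...]
    return [sequence[i:i + window_size]
            for i in range(0, len(sequence) - window_size + 1, stride)]

seq = [[1, 10], [2, 20], [3, 30], [4, 40], [5, 50], [6, 60], [7, 70]]
windows = make_windows(seq, 5)
print(len(windows))  # 3 windows, as in the table above
```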

The concept of \"what is a sequence\" is abstract. The important parts are:

\n\n\n\n

Achieving each case with \"single layers\"

\n\n

Achieving standard many to many:

\n\n

\"StandardManyToMany\"

\n\n

You can achieve many to many with a simple LSTM layer, using return_sequences=True:

\n\n
outputs = LSTM(units, return_sequences=True)(inputs)\n\n#output_shape -> (batch_size, steps, units)\n
\n\n

Achieving many to one:

\n\n

Using the exact same layer, keras will do the exact same internal preprocessing, but when you use return_sequences=False (or simply ignore this argument), keras will automatically discard the steps previous to the last:

\n\n

\"ManyToOne\"

\n\n
outputs = LSTM(units)(inputs)\n\n#output_shape -> (batch_size, units) --> steps were discarded, only the last was returned\n
\n\n

Achieving one to many

\n\n

Now, this is not supported by keras LSTM layers alone. You will have to create your own strategy to multiplicate the steps. There are two good approaches:

\n\n\n\n

One to many with repeat vector

\n\n

In order to fit the keras standard behavior, we need inputs in steps, so we simply repeat the inputs for the length we want:

\n\n

\"OneToManyRepeat\"

\n\n
outputs = RepeatVector(steps)(inputs) #where inputs is (batch,features)\noutputs = LSTM(units,return_sequences=True)(outputs)\n\n#output_shape -> (batch_size, steps, units)\n
\n\n

Understanding stateful = True

\n\n

Now comes one of the possible usages of stateful=True (besides avoiding loading data that can't fit your computer's memory at once)

\n\n

Stateful allows us to input \"parts\" of the sequences in stages. The difference is:

\n\n\n\n

It's like dividing the sequences in windows too, with these two main differences:

\n\n\n\n

In stateful=True, every new batch will be interpreted as continuing the previous batch (until you call model.reset_states()).

\n\n\n\n

Example of inputs, batch 1 contains steps 1 and 2, batch 2 contains steps 3 to 5:

\n\n
                   BATCH 1                           BATCH 2\n        [     Step1      Step2        |    [    Step3      Step4      Step5\nTank A:    [[Pa1,Ta1], [Pa2,Ta2],     |       [Pa3,Ta3], [Pa4,Ta4], [Pa5,Ta5]],\nTank B:    [[Pb1,Tb1], [Pb2,Tb2],     |       [Pb3,Tb3], [Pb4,Tb4], [Pb5,Tb5]],\n  ....                                |\nTank N:    [[Pn1,Tn1], [Pn2,Tn2],     |       [Pn3,Tn3], [Pn4,Tn4], [Pn5,Tn5]],\n        ]                                  ]\n
\n\n

Notice the alignment of tanks in batch 1 and batch 2! That's why we need shuffle=False (unless we are using only one sequence, of course).

\n\n

You can have any number of batches, indefinitely. (For having variable lengths in each batch, use input_shape=(None,features).)

\n\n

One to many with stateful=True

\n\n

For our case here, we are going to use only 1 step per batch, because we want to get one output step and make it be an input.

\n\n

Please notice that the behavior in the picture is not \"caused by\" stateful=True. We will force that behavior in a manual loop below. In this example, stateful=True is what \"allows\" us to stop the sequence, manipulate what we want, and continue from where we stopped.

\n\n

\"OneToManyStateful\"

\n\n

Honestly, the repeat approach is probably a better choice for this case. But since we're looking into stateful=True, this is a good example. The best way to use this is the next \"many to many\" case.

\n\n

Layer:

\n\n
outputs = LSTM(units=features, \n               stateful=True, \n               return_sequences=True, #just to keep a nice output shape even with length 1\n               input_shape=(None,features))(inputs) \n    #units = features because we want to use the outputs as inputs\n    #None because we want variable length\n\n#output_shape -> (batch_size, steps, units) \n
\n\n

Now, we're going to need a manual loop for predictions:

\n\n
input_data = someDataWithShape((batch, 1, features))\n\n#important, we're starting new sequences, not continuing old ones:\nmodel.reset_states()\n\noutput_sequence = []\nlast_step = input_data\nfor i in steps_to_predict:\n\n    new_step = model.predict(last_step)\n    output_sequence.append(new_step)\n    last_step = new_step\n\n#end of the sequences\nmodel.reset_states()\n
\n\n

Many to many with stateful=True

\n\n

Now, here, we get a very nice application: given an input sequence, try to predict its future unknown steps.

\n\n

We're using the same method as in the \"one to many\" above, with the difference that:

\n\n\n\n

\"ManyToManyStateful\"

\n\n

Layer (same as above):

\n\n
outputs = LSTM(units=features, \n               stateful=True, \n               return_sequences=True, \n               input_shape=(None,features))(inputs) \n    #units = features because we want to use the outputs as inputs\n    #None because we want variable length\n\n#output_shape -> (batch_size, steps, units) \n
\n\n

Training:

\n\n

We are going to train our model to predict the next step of the sequences:

\n\n
totalSequences = someSequencesShaped((batch, steps, features))\n    #batch size is usually 1 in these cases (often you have only one Tank in the example)\n\nX = totalSequences[:,:-1] #the entire known sequence, except the last step\nY = totalSequences[:,1:] #one step ahead of X\n\n#loop for resetting states at the start/end of the sequences:\nfor epoch in range(epochs):\n    model.reset_states()\n    model.train_on_batch(X,Y)\n
\n\n

Predicting:

\n\n

The first stage of our predicting involves \"adjusting the states\". That's why we're going to predict the entire sequence again, even if we already know this part of it:

\n\n
model.reset_states() #starting a new sequence\npredicted = model.predict(totalSequences)\nfirstNewStep = predicted[:,-1:] #the last step of the predictions is the first future step\n
\n\n

Now we go to the loop as in the one to many case. But don't reset states here! We want the model to know which step of the sequence it is at (and it knows it's at the first new step because of the prediction we just made above).

\n\n
output_sequence = [firstNewStep]\nlast_step = firstNewStep\nfor i in steps_to_predict:\n\n    new_step = model.predict(last_step)\n    output_sequence.append(new_step)\n    last_step = new_step\n\n #end of the sequences\n model.reset_states()\n
\n\n

This approach was used in these answers and file:

\n\n\n\n

Achieving complex configurations

\n\n

In all examples above, I showed the behavior of \"one layer\".

\n\n

You can, of course, stack many layers on top of each other, not necessarily all following the same pattern, and create your own models.

\n\n

One interesting example that has been appearing is the \"autoencoder\" that has a \"many to one encoder\" followed by a \"one to many\" decoder:

\n\n

Encoder:

\n\n
inputs = Input((steps,features))\n\n#a few many to many layers:\noutputs = LSTM(hidden1,return_sequences=True)(inputs)\noutputs = LSTM(hidden2,return_sequences=True)(outputs)    \n\n#many to one layer:\noutputs = LSTM(hidden3)(outputs)\n\nencoder = Model(inputs,outputs)\n
\n\n

Decoder:

\n\n

Using the \"repeat\" method:

\n\n
inputs = Input((hidden3,))\n\n#repeat to make one to many:\noutputs = RepeatVector(steps)(inputs)\n\n#a few many to many layers:\noutputs = LSTM(hidden4,return_sequences=True)(outputs)\n\n#last layer\noutputs = LSTM(features,return_sequences=True)(outputs)\n\ndecoder = Model(inputs,outputs)\n
\n\n

Autoencoder:

\n\n
inputs = Input((steps,features))\noutputs = encoder(inputs)\noutputs = decoder(outputs)\n\nautoencoder = Model(inputs,outputs)\n
\n\n

Train with fit(X,X)

\n\n

Additional explanations

\n\n

If you want details about how steps are calculated in LSTMs, or details about the stateful=True cases above, you can read more in this answer: Doubts regarding `Understanding Keras LSTMs`

\n", + "system": "" + }, + { + "instruction": "Why binary_crossentropy and categorical_crossentropy give different performances for the same problem?", + "input": "", + "output": "

The reason for this apparent performance discrepancy between categorical & binary cross entropy is what user xtof54 has already reported in his answer below, i.e.:

\n
\n

the accuracy computed with the Keras method evaluate is just plain\nwrong when using binary_crossentropy with more than 2 labels

\n
\n

I would like to elaborate more on this, demonstrate the actual underlying issue, explain it, and offer a remedy.

\n

This behavior is not a bug; the underlying reason is a rather subtle & undocumented issue at how Keras actually guesses which accuracy to use, depending on the loss function you have selected, when you include simply metrics=['accuracy'] in your model compilation. In other words, while your first compilation option

\n\n
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

is valid, your second one:

\n
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

will not produce what you expect, but the reason is not the use of binary cross entropy (which, at least in principle, is an absolutely valid loss function).

\n

Why is that? If you check the metrics source code, Keras does not define a single accuracy metric, but several different ones, among them binary_accuracy and categorical_accuracy. What happens under the hood is that, since you have selected binary cross entropy as your loss function and have not specified a particular accuracy metric, Keras (wrongly...) infers that you are interested in the binary_accuracy, and this is what it returns - while in fact you are interested in the categorical_accuracy.

\n

Let's verify that this is the case, using the MNIST CNN example in Keras, with the following modification:

\n
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])  # WRONG way\n\nmodel.fit(x_train, y_train,\n          batch_size=batch_size,\n          epochs=2,  # only 2 epochs, for demonstration purposes\n          verbose=1,\n          validation_data=(x_test, y_test))\n\n# Keras reported accuracy:\nscore = model.evaluate(x_test, y_test, verbose=0) \nscore[1]\n# 0.9975801164627075\n\n# Actual accuracy calculated manually:\nimport numpy as np\ny_pred = model.predict(x_test)\nacc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000\nacc\n# 0.98780000000000001\n\nscore[1]==acc\n# False    \n
\n

To remedy this, i.e. to use indeed binary cross entropy as your loss function (as I said, nothing wrong with this, at least in principle) while still getting the categorical accuracy required by the problem at hand, you should ask explicitly for categorical_accuracy in the model compilation as follows:

\n
from keras.metrics import categorical_accuracy\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=[categorical_accuracy])\n
\n

In the MNIST example, after training, scoring, and predicting the test set as I show above, the two metrics now are the same, as they should be:

\n
# Keras reported accuracy:\nscore = model.evaluate(x_test, y_test, verbose=0) \nscore[1]\n# 0.98580000000000001\n\n# Actual accuracy calculated manually:\ny_pred = model.predict(x_test)\nacc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000\nacc\n# 0.98580000000000001\n\nscore[1]==acc\n# True    \n
\n

System setup:

\n
Python version 3.5.3\nTensorflow version 1.2.1\nKeras version 2.0.4\n
\n

UPDATE: After my post, I discovered that this issue had already been identified in this answer.

\n", + "system": "" + }, + { + "instruction": "Where do I call the BatchNormalization function in Keras?", + "input": "", + "output": "

As Pavel said, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture.

\n

The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it here

\n

In your case above, this might look like:

\n
# import BatchNormalization\nfrom keras.layers.normalization import BatchNormalization\n\n# instantiate model\nmodel = Sequential()\n\n# we can think of this chunk as the input layer\nmodel.add(Dense(64, input_dim=14, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\n\n# we can think of this chunk as the hidden layer    \nmodel.add(Dense(64, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\n\n# we can think of this chunk as the output layer\nmodel.add(Dense(2, init='uniform'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('softmax'))\n\n# setting up the optimization of our weights \nsgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='binary_crossentropy', optimizer=sgd)\n\n# running the fitting\nmodel.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)\n
\n", + "system": "" + }, + { + "instruction": "What is the use of verbose in Keras while validating the model?", + "input": "", + "output": "

Check documentation for model.fit here.

\n\n

By setting verbose to 0, 1 or 2 you just say how you want to 'see' the training progress for each epoch.

\n\n

verbose=0 will show you nothing (silent)

\n\n

verbose=1 will show you an animated progress bar like this:

\n\n

\"progres_bar\"

\n\n

verbose=2 will just mention the number of epoch like this:

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "How to fix 'Object arrays cannot be loaded when allow_pickle=False' for imdb.load_data() function?", + "input": "", + "output": "

Here's a trick to force imdb.load_data to allow pickle by, in your notebook, replacing this line:

\n\n
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)\n
\n\n

by this:

\n\n
import numpy as np\n# save np.load\nnp_load_old = np.load\n\n# modify the default parameters of np.load\nnp.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)\n\n# call load_data with allow_pickle implicitly set to true\n(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)\n\n# restore np.load for future normal usage\nnp.load = np_load_old\n
\n", + "system": "" + }, + { + "instruction": "Many to one and many to many LSTM examples in Keras", + "input": "", + "output": "\n

So:

\n
    \n
  1. One-to-one: you could use a Dense layer as you are not processing sequences:

    \n
    model.add(Dense(output_size, input_shape=input_shape))\n
    \n
  2. One-to-many: this option is not supported well as chaining models is not very easy in Keras, so the following version is the easiest one:

    \n
    model.add(RepeatVector(number_of_times, input_shape=input_shape))\nmodel.add(LSTM(output_size, return_sequences=True))\n
    \n
  3. Many-to-one: actually, your code snippet is (almost) an example of this approach:

    \n
    model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim)))\n
    \n
  4. Many-to-many: this is the easiest snippet when the length of the input and output matches the number of recurrent steps:

    \n
    model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))\n
    \n
  5. Many-to-many when the number of steps differs from the input/output length: this is freakishly hard in Keras. There are no easy code snippets for it.

    \n
\n

EDIT: Ad 5

\n

In one of my recent applications, we implemented something which might be similar to many-to-many from the 4th image. In case you want to have a network with the following architecture (when an input is longer than the output):

\n
                                        O O O\n                                        | | |\n                                  O O O O O O\n                                  | | | | | | \n                                  O O O O O O\n
\n

You could achieve this in the following manner:

\n
model = Sequential()\nmodel.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))\nmodel.add(Lambda(lambda x: x[:, -N:, :])) #Select last N from output\n
\n

Where N is the number of last steps you want to cover (in the image, N = 3).

\n

From this point getting to:

\n
                                        O O O\n                                        | | |\n                                  O O O O O O\n                                  | | | \n                                  O O O \n
\n

is as simple as artificially padding the sequence to length N, e.g. with 0 vectors, in order to adjust it to an appropriate size.

\n", + "system": "" + }, + { + "instruction": "How do I use the Tensorboard callback of Keras?", + "input": "", + "output": "
keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0,  \n          write_graph=True, write_images=True)\n
\n

This line creates a TensorBoard callback object; you should capture that object and give it to the fit function of your model.

\n
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)\n...\nmodel.fit(...inputs and parameters..., callbacks=[tbCallBack])\n
\n

This way you gave your callback object to the function. It will be run during the training and will output files that can be used with tensorboard.

\n

If you want to visualize the files created during training, run in your terminal

\n
tensorboard --logdir path_to_current_dir/Graph \n
\n", + "system": "" + }, + { + "instruction": "NaN loss when training regression network", + "input": "", + "output": "

Regression with neural networks is hard to get working because the output is unbounded, so you are especially prone to the exploding gradients problem (the likely cause of the nans).

\n\n

Historically, one key solution to exploding gradients was to reduce the learning rate, but with the advent of per-parameter adaptive learning rate algorithms like Adam, you no longer need to set a learning rate to get good performance. There is very little reason to use SGD with momentum anymore unless you're a neural network fiend and know how to tune the learning schedule.

\n\n

Here are some things you could potentially try:

\n\n
    \n
  1. Normalize your outputs by quantile normalizing or z scoring. To be rigorous, compute this transformation on the training data, not on the entire dataset. For example, with quantile normalization, if an example is in the 60th percentile of the training set, it gets a value of 0.6. (You can also shift the quantile normalized values down by 0.5 so that the 0th percentile is -0.5 and the 100th percentile is +0.5).

    \n
  2. Add regularization, either by increasing the dropout rate or adding L1 and L2 penalties to the weights. L1 regularization is analogous to feature selection, and since you said that reducing the number of features to 5 gives good performance, L1 may also.

    \n
  3. If these still don't help, reduce the size of your network. This is not always the best idea since it can harm performance, but in your case you have a large number of first-layer neurons (1024) relative to input features (35) so it may help.

    \n
  4. Increase the batch size from 32 to 128. 128 is fairly standard and could potentially increase the stability of the optimization.

    \n
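The output normalization suggested in the first point can be sketched in plain numpy (the target values here are made up for illustration); the key detail is that the scaling statistics come from the training split only:

```python
import numpy as np

# Toy regression targets; fit the scaling on the training split only,
# then apply the exact same transform to the held-out split.
y_train = np.array([10.0, 20.0, 30.0, 40.0])
y_test = np.array([25.0, 35.0])

mu, sigma = y_train.mean(), y_train.std()
y_train_z = (y_train - mu) / sigma
y_test_z = (y_test - mu) / sigma   # same mu/sigma as the training data

# After training on y_train_z, predictions are mapped back with:
# y_pred = y_pred_z * sigma + mu
```

This keeps the network's targets in a small, bounded range, which makes exploding gradients (and the resulting nans) much less likely.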
\n", + "system": "" + }, + { + "instruction": "How to load a model from an HDF5 file in Keras?", + "input": "", + "output": "

If you stored the complete model, not only the weights, in the HDF5 file, then it is as simple as

\n\n
from keras.models import load_model\nmodel = load_model('model.h5')\n
\n", + "system": "" + }, + { + "instruction": "keras: how to save the training history attribute of the history object", + "input": "", + "output": "

What I use is the following:

\n
with open('/trainHistoryDict', 'wb') as file_pi:\n    pickle.dump(history.history, file_pi)\n
\n

In this way I save the history as a dictionary in case I want to plot the loss or accuracy later on. Later, when you want to load the history again, you can use:

\n
with open('/trainHistoryDict', "rb") as file_pi:\n    history = pickle.load(file_pi)\n
\n

Why choose pickle over json?

\n

The comment under this answer accurately states:

\n
\n

[Storing the history as json] does not work anymore in tensorflow keras. I had issues with: TypeError: Object of type 'float32' is not JSON serializable.

\n
\n

There are ways to tell json how to encode numpy objects, which you can learn about from this other question, so there's nothing wrong with using json in this case, it's just more complicated than simply dumping to a pickle file.

\n", + "system": "" + }, + { + "instruction": "What is the role of TimeDistributed layer in Keras?", + "input": "", + "output": "

In Keras, while building a sequential model, usually the second dimension (the one after the sample dimension) is related to a time dimension. This means that if, for example, your data is 5-dim with (sample, time, width, length, channel), you could apply a convolutional layer using TimeDistributed (which is applicable to 4-dim data with (sample, width, length, channel)) along the time dimension (applying the same layer to each time slice) in order to obtain 5-d output.

\n\n

The case with Dense is that in Keras, from version 2.0, Dense is by default applied only to the last dimension (e.g. if you apply Dense(10) to an input with shape (n, m, o, p), you'll get output with shape (n, m, o, 10)), so in your case Dense and TimeDistributed(Dense) are equivalent.

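A small numpy sketch of why the two are equivalent (the weight shapes are made up): a dense transform on the last axis is the same as applying the same weights to each time slice separately.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 7, 16))   # (samples, time, features)
W = rng.normal(size=(16, 10))     # a hypothetical Dense kernel
b = rng.normal(size=(10,))

# Dense on the last dimension, all timesteps at once:
dense_all = x @ W + b             # shape (4, 7, 10)

# "TimeDistributed" behaviour: the same weights applied per time slice:
per_step = np.stack([x[:, t, :] @ W + b for t in range(x.shape[1])], axis=1)

print(np.allclose(dense_all, per_step))  # True
```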
\n", + "system": "" + }, + { + "instruction": "How to concatenate two layers in keras?", + "input": "", + "output": "

You're getting the error because result, defined as Sequential(), is just a container for the model, and you have not defined an input for it.

\n\n

Given what you're trying to build, set result to take the third input x3.

\n
first = Sequential()\nfirst.add(Dense(1, input_shape=(2,), activation='sigmoid'))\n\nsecond = Sequential()\nsecond.add(Dense(1, input_shape=(1,), activation='sigmoid'))\n\nthird = Sequential()\n# of course you must provide the input to result which will be your x3\nthird.add(Dense(1, input_shape=(1,), activation='sigmoid'))\n\n# lets say you add a few more layers to first and second.\n# concatenate them\nmerged = Concatenate([first, second])\n\n# then concatenate the two outputs\n\nresult = Concatenate([merged,  third])\n\nada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)\n\nresult.compile(optimizer=ada_grad, loss='binary_crossentropy',\n               metrics=['accuracy'])\n
\n

However, my preferred way of building a model that has this type of input structure would be to use the functional api.

\n

Here is an implementation of your requirements to get you started:

\n
from keras.models import Model\nfrom keras.layers import Concatenate, Dense, LSTM, Input, concatenate\nfrom keras.optimizers import Adagrad\n\nfirst_input = Input(shape=(2, ))\nfirst_dense = Dense(1, )(first_input)\n\nsecond_input = Input(shape=(2, ))\nsecond_dense = Dense(1, )(second_input)\n\nmerge_one = concatenate([first_dense, second_dense])\n\nthird_input = Input(shape=(1, ))\nmerge_two = concatenate([merge_one, third_input])\n\nmodel = Model(inputs=[first_input, second_input, third_input], outputs=merge_two)\nada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)\nmodel.compile(optimizer=ada_grad, loss='binary_crossentropy',\n               metrics=['accuracy'])\n
\n

To answer the question in the comments:

\n
    \n
  1. How are result and merged connected? Assuming you mean how they are concatenated.
\n

Concatenation works like this:

\n
  a        b         c\na b c   g h i    a b c g h i\nd e f   j k l    d e f j k l\n
\n

i.e. rows are just joined.

\n
    \n
  2. Now, x1 is input to first, x2 is input into second and x3 is input into third.
\n", + "system": "" + }, + { + "instruction": "What does Keras Tokenizer method exactly do?", + "input": "", + "output": "

From the source code:

\n\n
    \n
  1. fit_on_texts Updates internal vocabulary based on a list of texts. This method creates the vocabulary index based on word frequency. So if you give it something like \"The cat sat on the mat.\" it will create a dictionary s.t. word_index[\"the\"] = 1; word_index[\"cat\"] = 2. It is a word -> index dictionary, so every word gets a unique integer value. 0 is reserved for padding. So a lower integer means a more frequent word (often the first few are stop words because they appear a lot).
  2. texts_to_sequences Transforms each text in texts to a sequence of integers. So it basically takes each word in the text and replaces it with its corresponding integer value from the word_index dictionary. Nothing more, nothing less, certainly no magic involved.
\n\n

Why not combine them? Because you almost always fit once and convert to sequences many times. You will fit on your training corpus once and use that exact same word_index dictionary at train / eval / test / prediction time to convert actual text into sequences to feed them to the network. So it makes sense to keep those methods separate.

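To illustrate the described behaviour, here is a toy reimplementation in plain Python (not the actual Keras code, which also handles punctuation filtering, OOV tokens, etc.): build a frequency-ordered vocabulary, then map each text to the corresponding integers.

```python
from collections import Counter

texts = ["The cat sat on the mat", "the dog sat"]

# fit_on_texts: frequency-ordered vocabulary, index 0 reserved for padding,
# so the most frequent word gets index 1.
counts = Counter(w for t in texts for w in t.lower().split())
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

# texts_to_sequences: replace each known word by its integer index.
sequences = [[word_index[w] for w in t.lower().split() if w in word_index]
             for t in texts]

print(word_index["the"])  # 1 -> "the" is the most frequent word
```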
\n", + "system": "" + }, + { + "instruction": "Which parameters should be used for early stopping?", + "input": "", + "output": "

\"early

\n\n

Early stopping is basically stopping the training once your loss starts to increase (or in other words validation accuracy starts to decrease). According to documents it is used as follows;

\n\n
keras.callbacks.EarlyStopping(monitor='val_loss',\n                              min_delta=0,\n                              patience=0,\n                              verbose=0, mode='auto')\n
\n\n

Values depends on your implementation (problem, batch size etc...) but generally to prevent overfitting I would use;

\n\n
    \n
  1. Monitor the validation loss (you need to use cross-validation or at least train/test sets) by setting the monitor argument to 'val_loss'.
  2. min_delta is a threshold for whether to quantify the loss change at some epoch as an improvement or not. If the difference in loss is below min_delta, it is quantified as no improvement. It is better to leave it at 0, since we're interested in when the loss becomes worse.
  3. The patience argument represents the number of epochs before stopping once your loss starts to increase (stops improving). This depends on your implementation: if you use very small batches or a large learning rate, your loss will zig-zag (accuracy will be noisier), so better set a large patience argument. If you use large batches and a small learning rate, your loss will be smoother, so you can use a smaller patience argument. Either way, I'll leave it at 2 to give the model more of a chance.
  4. verbose decides what to print; leave it at the default (0).
  5. The mode argument depends on what direction your monitored quantity has (is it supposed to be decreasing or increasing?). Since we monitor the loss, we can use min. But let's let Keras handle that for us and set it to auto.
\n\n

So I would use something like this and experiment by plotting the error loss with and without early stopping.

\n\n
keras.callbacks.EarlyStopping(monitor='val_loss',\n                              min_delta=0,\n                              patience=2,\n                              verbose=0, mode='auto')\n
\n\n
\n\n

To clear up possible ambiguity about how callbacks work, I'll try to explain more. Once you call fit(... callbacks=[es]) on your model, Keras calls the given callback objects' predetermined functions. These functions can be on_train_begin, on_train_end, on_epoch_begin, on_epoch_end and on_batch_begin, on_batch_end. The early stopping callback is called at every epoch end; it compares the best monitored value with the current one and stops if the conditions are met (how many epochs have passed since the best monitored value was observed and whether that exceeds the patience argument, whether the difference from the last value is bigger than min_delta, etc.).

\n\n

As pointed out by @BrentFaust in the comments, the model's training will continue until either the Early Stopping conditions are met or the epochs parameter (default=10) in fit() is satisfied. Setting an Early Stopping callback will not make the model train beyond its epochs parameter. So calling the fit() function with a larger epochs value would benefit more from the Early Stopping callback.

\n", + "system": "" + }, + { + "instruction": "Why doesn't plt.imshow() display the image?", + "input": "", + "output": "

The solution was as simple as adding plt.show() at the end of the code snippet:

\n\n
import numpy as np\nnp.random.seed(123)\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras.datasets import mnist\n(X_train,y_train),(X_test,y_test) = mnist.load_data()\nprint X_train.shape\nfrom matplotlib import pyplot as plt\nplt.imshow(X_train[0])\nplt.show()\n
\n", + "system": "" + }, + { + "instruction": "How to get reproducible results in keras", + "input": "", + "output": "

You can find the answer at the Keras docs: https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development.

\n\n

In short, to be absolutely sure that you will get reproducible results with your python script on one computer's/laptop's CPU then you will have to do the following:

\n\n
    \n
  1. Set the PYTHONHASHSEED environment variable at a fixed value
  2. Set the python built-in pseudo-random generator at a fixed value
  3. Set the numpy pseudo-random generator at a fixed value
  4. Set the tensorflow pseudo-random generator at a fixed value
  5. Configure a new global tensorflow session
\n\n

Following the Keras link at the top, the source code I am using is the following:

\n\n
# Seed value\n# Apparently you may use different seed values at each stage\nseed_value= 0\n\n# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value\nimport os\nos.environ['PYTHONHASHSEED']=str(seed_value)\n\n# 2. Set the `python` built-in pseudo-random generator at a fixed value\nimport random\nrandom.seed(seed_value)\n\n# 3. Set the `numpy` pseudo-random generator at a fixed value\nimport numpy as np\nnp.random.seed(seed_value)\n\n# 4. Set the `tensorflow` pseudo-random generator at a fixed value\nimport tensorflow as tf\ntf.random.set_seed(seed_value)\n# for later versions: \n# tf.compat.v1.set_random_seed(seed_value)\n\n# 5. Configure a new global `tensorflow` session\nfrom keras import backend as K\nsession_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\nsess = tf.Session(graph=tf.get_default_graph(), config=session_conf)\nK.set_session(sess)\n# for later versions:\n# session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\n# sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)\n# tf.compat.v1.keras.backend.set_session(sess)\n
\n\n

Needless to say, you do not have to specify any seed or random_state in the numpy, scikit-learn or tensorflow/keras functions that you are using in your python script, precisely because with the source code above we globally set their pseudo-random generators at a fixed value.

\n", + "system": "" + }, + { + "instruction": "Keras, how do I predict after I trained a model?", + "input": "", + "output": "

model.predict() expects the first parameter to be a numpy array. You supply a list, which does not have the shape attribute a numpy array has.

\n\n

Otherwise your code looks fine, except that you are doing nothing with the prediction. Make sure you store it in a variable, for example like this:

\n\n
prediction = model.predict(np.array(tk.texts_to_sequences(text)))\nprint(prediction)\n
\n", + "system": "" + }, + { + "instruction": "What is an Embedding in Keras?", + "input": "", + "output": "

As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.

\n\n

The weights of the Embedding layer are of the shape (vocabulary_size, embedding_dimension). For each training sample, its input are integers, which represent certain words. The integers are in the range of the vocabulary size. The Embedding layer transforms each integer i into the ith line of the embedding weights matrix.

\n\n

In order to quickly do this as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size) with one non-zero value per line. If you multiply this by the embedding weights, you get the output in the shape

\n\n
(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)\n
\n\n

So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.

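A quick numpy sketch of this equivalence, using a made-up weight matrix: the one-hot matrix multiplication and a plain row lookup give the same result.

```python
import numpy as np

vocab_size, embedding_dim = 6, 3
rng = np.random.default_rng(1)
E = rng.normal(size=(vocab_size, embedding_dim))  # hypothetical embedding weights

words = np.array([2, 0, 5])            # integer-encoded sample

# one-hot encode, then multiply: (nb_words, vocab) x (vocab, dim)
one_hot = np.eye(vocab_size)[words]
via_matmul = one_hot @ E

# equivalent to looking up the i-th row of the weights for each integer i:
via_lookup = E[words]
print(np.allclose(via_matmul, via_lookup))  # True
```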
\n", + "system": "" + }, + { + "instruction": "How to check which version of Keras is installed?", + "input": "", + "output": "

Python library authors put the version number in <module>.__version__. You can print it by running this on the command line:

\n\n
python -c 'import keras; print(keras.__version__)'\n
\n\n

If it's Windows terminal, enclose snippet with double-quotes like below

\n\n
python -c \"import keras; print(keras.__version__)\"\n
\n", + "system": "" + }, + { + "instruction": "Keras: Difference between Kernel and Activity regularizers", + "input": "", + "output": "

The activity regularizer works as a function of the output of the net, and is mostly used to regularize hidden units, while weight_regularizer, as the name says, works on the weights (e.g. making them decay). Basically you can express the regularization loss as a function of the output (activity_regularizer) or of the weights (weight_regularizer).

\n

The new kernel_regularizer replaces weight_regularizer - although it's not very clear from the documentation.

\n

From the definition of kernel_regularizer:

\n
\n

kernel_regularizer: Regularizer function applied to\nthe kernel weights matrix\n(see regularizer).

\n
\n

And activity_regularizer:

\n
\n

activity_regularizer: Regularizer function applied to\nthe output of the layer (its "activation").\n(see regularizer).

\n
\n

Important Edit: Note that there is a bug in the activity_regularizer that was only fixed in version 2.1.4 of Keras (at least with Tensorflow backend). Indeed, in the older versions, the activity regularizer function is applied to the input of the layer, instead of being applied to the output (the actual activations of the layer, as intended). So beware if you are using an older version of Keras (before 2.1.4), activity regularization may probably not work as intended.

\n

You can see the commit on GitHub

\n

Five months ago Fran\u00e7ois Chollet provided a fix to the activity regularizer, that was then included in Keras 2.1.4

\n", + "system": "" + }, + { + "instruction": "What's the difference between a bidirectional LSTM and an LSTM?", + "input": "", + "output": "

LSTM, at its core, preserves information from inputs that have already passed through it, using the hidden state.

\n\n

Unidirectional LSTM only preserves information of the past because the only inputs it has seen are from the past.

\n\n

Using bidirectional will run your inputs in two ways, one from past to future and one from future to past. What differentiates this approach from unidirectional is that in the LSTM that runs backwards you preserve information from the future; using the two hidden states combined, you are able at any point in time to preserve information from both past and future.

\n\n

What they are suited for is a very complicated question, but BiLSTMs show very good results as they can understand context better; I will try to explain through an example.

\n\n

Let's say we try to predict the next word in a sentence; on a high level, what a unidirectional LSTM will see is

\n\n
\n

The boys went to ....

\n
\n\n

And will try to predict the next word only by this context, with bidirectional LSTM you will be able to see information further down the road for example

\n\n

Forward LSTM:

\n\n
\n

The boys went to ...

\n
\n\n

Backward LSTM:

\n\n
\n

... and then they got out of the pool

\n
\n\n

You can see that, using the information from the future, it could be easier for the network to understand what the next word is.

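A toy numpy sketch of the idea (a bare-bones recurrent cell, not a real LSTM): running the same cell forward and over the reversed sequence, then concatenating the states, gives each timestep a view of both past and future, and doubles the per-step representation.

```python
import numpy as np

def rnn_states(x, W, U):
    # a bare-bones recurrent pass: h_t = tanh(x_t W + h_{t-1} U)
    h = np.zeros(W.shape[1])
    states = []
    for x_t in x:
        h = np.tanh(x_t @ W + h @ U)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))              # (timesteps, features)
W, U = rng.normal(size=(4, 8)), rng.normal(size=(8, 8))

fwd = rnn_states(x, W, U)                # past -> future
bwd = rnn_states(x[::-1], W, U)[::-1]    # future -> past, re-aligned to time order

bi = np.concatenate([fwd, bwd], axis=-1) # each step sees both directions
print(bi.shape)  # (5, 16)
```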
\n", + "system": "" + }, + { + "instruction": "Does Any one got "AttributeError: 'str' object has no attribute 'decode' " , while Loading a Keras Saved Model", + "input": "", + "output": "

For me the solution was downgrading the h5py package (in my case to 2.10.0); apparently reverting only Keras and TensorFlow to the correct versions was not enough.

\n", + "system": "" + }, + { + "instruction": "What is validation data used for in a Keras Sequential model?", + "input": "", + "output": "

If you want to build a solid model you have to follow that specific protocol of splitting your data into three sets: One for training, one for validation and one for final evaluation, which is the test set.

\n

The idea is that you train on your training data and tune your model with the results of metrics (accuracy, loss etc) that you get from your validation set.

\n

Your model doesn't &quot;see&quot; your validation set and isn't in any way trained on it, but you, as the architect and master of the hyperparameters, tune the model according to this data. Therefore it indirectly influences your model because it directly influences your design decisions. You nudge your model to work well on the validation data, and that can possibly introduce a bias.

\n

Exactly that is the reason you only evaluate your model's final score on data that neither your model nor you yourself have used \u2013 and that is the third chunk of data, your test set.

\n

Only this procedure makes sure you get an unbiased view of your model's quality and its ability to generalize what it has learned to totally unseen data.
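As a minimal sketch of the three-way protocol described above (the 100 samples and the 70/15/15 ratio are hypothetical choices, not prescribed by the answer), the split can be done with a shuffled index partition:

```python
# Sketch of the train/validation/test split: fit on train, tune on
# validation, and touch the test set only once for the final score.
import numpy as np

rng = np.random.default_rng(0)
indices = rng.permutation(100)   # shuffle 100 hypothetical sample indices

train_idx = indices[:70]         # fit the model on these
val_idx = indices[70:85]         # tune hyperparameters against these
test_idx = indices[85:]          # held out for the final evaluation only

print(len(train_idx), len(val_idx), len(test_idx))  # 70 15 15
```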

\n", + "system": "" + }, + { + "instruction": "Keras: the difference between LSTM dropout and LSTM recurrent dropout", + "input": "", + "output": "

I suggest taking a look at (the first part of) this paper. Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from x_t and to h_t. In your case, if you add it as an argument to your layer, it will mask the inputs; you can add a Dropout layer after your recurrent layer to mask the outputs as well. Recurrent dropout masks (or \"drops\") the connections between the recurrent units; that would be the horizontal arrows in your picture.

\n\n

This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout:

\n\n

\"This

\n\n

(Ignore the colour of the arrows in this case; in the paper they are making a further point of keeping the same dropout masks at each timestep)

\n", + "system": "" + }, + { + "instruction": "What is the meaning of axis=-1 in keras.argmax?", + "input": "", + "output": "

This means that the index that will be returned by argmax will be taken from the last axis.

\n

Your data has some shape (20,19,5,80), I changed the first dimension just to make it clearer. This means:

\n
  1. Axis 0 = 20 elements
  2. Axis 1 = 19 elements
  3. Axis 2 = 5 elements
  4. Axis 3 = 80 elements
\n

Now, negative numbers work exactly like in python lists, in numpy arrays, etc. Negative numbers represent the inverse order:

\n
  1. Axis -1 = 80 elements (same as axis 3)
  2. Axis -2 = 5 elements (same as axis 2)
  3. Axis -3 = 19 elements (same as axis 1)
  4. Axis -4 = 20 elements (same as axis 0)
\n

When you pass the axis parameter to the argmax function, the indices returned will be based on this axis. Your results will lose this specific axis, but keep the others.

\n

See what shape argmax will return for each index:
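The shape rule is easy to verify directly in numpy (using the (20, 19, 5, 80) shape from above): the chosen axis disappears from the result.

```python
# argmax removes the axis it reduces over; compare each result shape
# with the input shape (20, 19, 5, 80).
import numpy as np

a = np.zeros((20, 19, 5, 80))
print(np.argmax(a, axis=0).shape)   # (19, 5, 80)
print(np.argmax(a, axis=1).shape)   # (20, 5, 80)
print(np.argmax(a, axis=2).shape)   # (20, 19, 80)
print(np.argmax(a, axis=-1).shape)  # (20, 19, 5), same as axis=3
```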

\n\n", + "system": "" + }, + { + "instruction": "What is the difference between loss function and metric in Keras?", + "input": "", + "output": "

The loss function is used to optimize your model. This is the function that will get minimized by the optimizer.

\n\n

A metric is used to judge the performance of your model. This is only for you to look at and has nothing to do with the optimization process.
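To make the distinction concrete, here is an illustration with made-up numbers (the labels and predictions are hypothetical): the loss is a smooth function of the predictions, which is what the optimizer needs, while a metric like accuracy is a step function that is only reported for you to read.

```python
# Loss (drives optimization) vs. metric (for human inspection),
# computed by hand for a tiny binary example.
import numpy as np

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.6, 0.4])  # hypothetical model outputs

# binary cross-entropy (a typical loss): differentiable in y_pred
loss = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# accuracy (a typical metric): a step function, useless for gradients
accuracy = np.mean((y_pred >= 0.5) == y_true)

print(accuracy)  # 0.75
```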

\n", + "system": "" + }, + { + "instruction": "How to tell Keras stop training based on loss value?", + "input": "", + "output": "

I found the answer. I looked into the Keras sources and found the code for EarlyStopping. I made my own callback based on it:

\n\n
class EarlyStoppingByLossVal(Callback):\n    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):\n        super(Callback, self).__init__()\n        self.monitor = monitor\n        self.value = value\n        self.verbose = verbose\n\n    def on_epoch_end(self, epoch, logs={}):\n        current = logs.get(self.monitor)\n        if current is None:\n            warnings.warn(\"Early stopping requires %s available!\" % self.monitor, RuntimeWarning)\n\n        if current < self.value:\n            if self.verbose > 0:\n                print(\"Epoch %05d: early stopping THR\" % epoch)\n            self.model.stop_training = True\n
\n\n

And usage:

\n\n
callbacks = [\n    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),\n    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),\n    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),\n]\nmodel.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),\n      callbacks=callbacks)\n
\n", + "system": "" + }, + { + "instruction": "Get class labels from Keras functional model", + "input": "", + "output": "
y_prob = model.predict(x) \ny_classes = y_prob.argmax(axis=-1)\n
\n\n

As suggested here.

\n", + "system": "" + }, + { + "instruction": "Keras - Plot training, validation and test set accuracy", + "input": "", + "output": "
import keras\nfrom matplotlib import pyplot as plt\nhistory = model1.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4)\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n
\n

\"Model

\n
plt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n
\n

\"Model

\n", + "system": "" + }, + { + "instruction": "How to save final model using keras?", + "input": "", + "output": "

The model has a save method, which saves all the details necessary to reconstitute the model. An example from the keras documentation:

\n\n
from keras.models import load_model\n\nmodel.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'\ndel model  # deletes the existing model\n\n# returns a compiled model\n# identical to the previous one\nmodel = load_model('my_model.h5')\n
\n", + "system": "" + }, + { + "instruction": "How does Keras handle multilabel classification?", + "input": "", + "output": "

In short

\n

Don't use softmax.

\n

Use sigmoid for activation of your output layer.

\n

Use binary_crossentropy for loss function.

\n

Use predict for evaluation.

\n

Why

\n

With softmax, increasing the score for one label lowers all the others (it outputs a probability distribution). You don't want that when you have multiple labels.
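A small numeric illustration of this point (the logits are made up): softmax couples the outputs so that they must sum to 1, whereas independent sigmoids do not constrain each other, which is exactly what multilabel classification needs.

```python
# Softmax outputs form a probability distribution (sum to 1);
# sigmoid outputs are independent per-label scores in (0, 1).
import numpy as np

logits = np.array([2.0, 1.0, 0.5])

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1.0 / (1.0 + np.exp(-logits))

print(round(softmax.sum(), 6))  # 1.0: raising one label must lower the rest
print((sigmoid > 0).all() and (sigmoid < 1).all())  # True: independent scores
```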

\n

Complete Code

\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, Activation\nfrom tensorflow.keras.optimizers import SGD\n\nmodel = Sequential()\nmodel.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))\nmodel.add(Dropout(0.1))\nmodel.add(Dense(600, activation='relu'))\nmodel.add(Dropout(0.1))\nmodel.add(Dense(y_train.shape[1], activation='sigmoid'))\n\nsgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='binary_crossentropy',\n              optimizer=sgd)\n\nmodel.fit(X_train, y_train, epochs=5, batch_size=2000)\n\npreds = model.predict(X_test)\npreds[preds>=0.5] = 1\npreds[preds<0.5] = 0\n# score = compare preds and y_test\n
\n", + "system": "" + }, + { + "instruction": "Keras model.summary() result - Understanding the # of Parameters", + "input": "", + "output": "

The number of parameters is 7850 because every unit has 784 input weights and one bias weight. That means every unit contributes 785 parameters; you have 10 units, so it sums up to 7850.
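The arithmetic above, spelled out:

```python
# Each unit has one weight per input plus one bias,
# so the total is (inputs + 1) * units.
inputs = 784
units = 10
params = (inputs + 1) * units
print(params)  # 7850
```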

\n\n

The role of this additional bias term is really important. It significantly increases the capacity of your model. You can read details e.g. here Role of Bias in Neural Networks.

\n", + "system": "" + }, + { + "instruction": "How to add and remove new layers in keras after loading weights?", + "input": "", + "output": "

You can take the output of the last model and create a new model. The lower layers remain the same.

\n
model.summary()\nmodel.layers.pop()\nmodel.layers.pop()\nmodel.summary()\n\nx = MaxPooling2D()(model.layers[-1].output)\no = Activation('sigmoid', name='loss')(x)\n\nmodel2 = Model(inputs=in_img, outputs=[o])\nmodel2.summary()\n
\n

Check How to use models from keras.applications for transfer learning?

\n

Update on Edit:

\n

The new error occurs because you are trying to create the new model with the global in_img, which is not actually used in the previous model creation; there you are defining a local in_img. So the global in_img is obviously not connected to the upper layers in the symbolic graph, and this has nothing to do with loading weights.

\n

To better resolve this problem you should instead use model.input to reference to the input.

\n
model3 = Model(input=model2.input, output=[o])\n
\n", + "system": "" + }, + { + "instruction": "Save and load weights in keras", + "input": "", + "output": "

Here is a YouTube video that explains exactly what you're wanting to do: Save and load a Keras model

\n\n

There are three different saving methods that Keras makes available. These are described in the video link above (with examples), as well as below.

\n\n

First, the reason you're receiving the error is because you're calling load_model incorrectly.

\n\n

To save and load the weights of the model, you would first use

\n\n
model.save_weights('my_model_weights.h5')\n
\n\n

to save the weights, as you've displayed. To load the weights, you would first need to build your model, and then call load_weights on the model, as in

\n\n
model.load_weights('my_model_weights.h5')\n
\n\n

Another saving technique is model.save(filepath). This save function saves:

\n
  1. The architecture of the model, allowing to re-create the model
  2. The weights of the model
  3. The training configuration (loss, optimizer)
  4. The state of the optimizer, allowing to resume training exactly where you left off
\n

To load this saved model, you would use the following:

\n\n
from keras.models import load_model\nnew_model = load_model(filepath)\n
\n\n

Lastly, model.to_json() saves only the architecture of the model. To load the architecture, you would use

\n\n
from keras.models import model_from_json\nmodel = model_from_json(json_string)\n
\n", + "system": "" + }, + { + "instruction": "ImportError('Could not import PIL.Image. ' working with keras-ternsorflow", + "input": "", + "output": "

All you need to do is install pillow:

\n\n
pip install pillow\n
\n\n

Then you should be all set. Found this after hours of searching.

\n", + "system": "" + }, + { + "instruction": "How does keras handle multiple losses?", + "input": "", + "output": "

From model documentation:

\n\n
\n

loss: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

\n \n

...

\n \n

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a tensor, it is expected to map output names (strings) to scalar coefficients.

\n
\n\n

So, yes, the final loss will be the \"weighted sum of all individual losses, weighted by the loss_weights coeffiecients\".

\n\n

You can check the code where the loss is calculated.

\n\n
\n

Also, what does it mean during training? Is the loss2 only used to update the weights on layers where y2 comes from? Or is it used for all the model's layers?

\n
\n\n

The weights are updated through backpropagation, so each loss will affect only layers that connect the input to the loss.

\n\n

For example:

\n\n
                        +----+         \n                        > C  |-->loss1 \n                       /+----+         \n                      /                \n                     /                 \n    +----+    +----+/                  \n -->| A  |--->| B  |\\                  \n    +----+    +----+ \\                 \n                      \\                \n                       \\+----+         \n                        > D  |-->loss2 \n                        +----+         \n
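Following the diagram, loss1 backpropagates through C, B and A, while loss2 backpropagates through D, B and A. The single value the optimizer minimizes is the weighted sum of the two; with hypothetical per-output losses and weights:

```python
# Made-up per-output losses and the loss_weights you might pass to
# model.compile; the minimized value is their weighted sum.
loss1, loss2 = 0.30, 0.08
loss_weights = [1.0, 0.5]

total_loss = loss_weights[0] * loss1 + loss_weights[1] * loss2
print(round(total_loss, 2))  # 0.34
```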
\n\n\n", + "system": "" + }, + { + "instruction": "ImportError: Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work", + "input": "", + "output": "

The following commands solved the problem for me

\n\n
    \n
  1. pip install pydot
  2. pip install pydotplus
  3. sudo apt-get install graphviz
\n", + "system": "" + }, + { + "instruction": "How to return history of validation loss in Keras", + "input": "", + "output": "

Just an example started from

\n\n
history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=0)\n
\n\n

You can use

\n\n
print(history.history.keys())\n
\n\n

to list all data in history.

\n\n

Then, you can print the history of validation loss like this:

\n\n
print(history.history['val_loss'])\n
\n", + "system": "" + }, + { + "instruction": "Keras AttributeError: 'Sequential' object has no attribute 'predict_classes'", + "input": "", + "output": "

This function was removed in TensorFlow version 2.6.\nAccording to the RStudio keras reference:

\n

update to

\n
predict_x=model.predict(X_test) \nclasses_x=np.argmax(predict_x,axis=1)\n
\n

Or use TensorFlow 2.5.x .

\n

If you are using TensorFlow version 2.5, you will receive the following warning:

\n
\n

tensorflow\\python\\keras\\engine\\sequential.py:455: UserWarning: model.predict_classes() is deprecated and will be removed after 2021-01-01. Please use instead:* np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation).* (model.predict(x) > 0.5).astype("int32"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation).

\n
\n", + "system": "" + }, + { + "instruction": "How to find Number of parameters of a keras model?", + "input": "", + "output": "

Models and layers have special method for that purpose:

\n\n
model.count_params()\n
\n\n

Also, to get a short summary of each layer dimensions and parameters, you might find useful the following method

\n\n
model.summary()\n
\n", + "system": "" + }, + { + "instruction": "Error when checking model input: expected convolution2d_input_1 to have 4 dimensions, but got array with shape (32, 32, 3)", + "input": "", + "output": "

The input shape you have defined is the shape of a single sample. The model itself expects some array of samples as input (even if its an array of length 1).

\n

Your input really should be 4-d, with the 1st dimension enumerating the samples, i.e. for a single image you should pass a shape of (1, 32, 32, 3).

\n

You can find more information here under "Convolution2D"/"Input shape"

\n

Edit: Based on Danny's comment below, if you want a batch size of 1, you can add the missing dimension using this:

\n
image = np.expand_dims(image, axis=0)\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'adam' from 'keras.optimizers'", + "input": "", + "output": "

There are two types of modules -

\n
    \n
  1. keras
  2. tensorflow.keras
\n

Here we need to use tensorflow.keras

\n

You need to import Adam (with a capital A) from tensorflow.keras (not plain keras).

\n
from tensorflow.keras.optimizers import Adam\n\nfrom tensorflow.keras.optimizers import Adam # - Works\nfrom tensorflow.keras.optimizers import adam # - Does not work\nfrom keras.optimizers import Adam            # - Does not work\nfrom keras.optimizers import adam            # - Does not work\n
\n", + "system": "" + }, + { + "instruction": "what is the default kernel_initializer in keras", + "input": "", + "output": "

Usually, it's glorot_uniform by default. Different layer types might have different default kernel_initializer. When in doubt, just look in the source code. For example, for Dense layer:

\n\n
class Dense(Layer):\n...\n    def __init__(self, units,\n                 activation=None,\n                 use_bias=True,\n                 kernel_initializer='glorot_uniform',\n                 bias_initializer='zeros',\n                 kernel_regularizer=None,\n                 bias_regularizer=None,\n                 activity_regularizer=None,\n                 kernel_constraint=None,\n                 bias_constraint=None,\n                 **kwargs):\n
\n", + "system": "" + }, + { + "instruction": "Saving best model in keras", + "input": "", + "output": "

EarlyStopping and ModelCheckpoint is what you need from Keras documentation.

\n\n

You should set save_best_only=True in ModelCheckpoint. Any other adjustments needed are trivial.

\n\n

Just to help you more you can see a usage here on Kaggle.

\n\n
\n\n

Adding the code here in case the above Kaggle example link is not available:

\n\n
model = getModel()\nmodel.summary()\n\nbatch_size = 32\n\nearlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')\nmcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')\nreduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')\n\nmodel.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between Keras model.evaluate() and model.predict()?", + "input": "", + "output": "

The model.evaluate function predicts the output for the given input, then computes the metrics specified in model.compile based on y_true and y_pred, and returns the computed metric values as the output.

\n\n

The model.predict function just returns y_pred.

\n\n

So if you use model.predict and then compute the metrics yourself, the computed metric value should turn out to be the same as model.evaluate

\n\n

For example, one would use model.predict instead of model.evaluate when evaluating RNN/LSTM-based models, where the output needs to be fed back as input in the next time step.

\n", + "system": "" + }, + { + "instruction": "Keras Text Preprocessing - Saving Tokenizer object to file for scoring", + "input": "", + "output": "

The most common way is to use either pickle or joblib. Here you have an example on how to use pickle in order to save Tokenizer:

\n\n
import pickle\n\n# saving\nwith open('tokenizer.pickle', 'wb') as handle:\n    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n# loading\nwith open('tokenizer.pickle', 'rb') as handle:\n    tokenizer = pickle.load(handle)\n
\n", + "system": "" + }, + { + "instruction": "How do I install Keras and Theano in Anaconda Python on Windows?", + "input": "", + "output": "

Here is my solution for the same problem:

\n\n\n", + "system": "" + }, + { + "instruction": "How to log Keras loss output to a file", + "input": "", + "output": "

You can use CSVLogger callback.

\n\n

as example:

\n\n
from keras.callbacks import CSVLogger\n\ncsv_logger = CSVLogger('log.csv', append=True, separator=';')\nmodel.fit(X_train, Y_train, callbacks=[csv_logger])\n
\n\n

Look at: Keras Callbacks

\n", + "system": "" + }, + { + "instruction": "How to compute Receiving Operating Characteristic (ROC) and AUC in keras?", + "input": "", + "output": "

Because you can't calculate ROC and AUC on mini-batches, you can only calculate them at the end of an epoch. There is a solution from jamartinh; I have patched the code below for convenience:

\n
from sklearn.metrics import roc_auc_score\nfrom keras.callbacks import Callback\nclass RocCallback(Callback):\n    def __init__(self,training_data,validation_data):\n        self.x = training_data[0]\n        self.y = training_data[1]\n        self.x_val = validation_data[0]\n        self.y_val = validation_data[1]\n\n\n    def on_train_begin(self, logs={}):\n        return\n\n    def on_train_end(self, logs={}):\n        return\n\n    def on_epoch_begin(self, epoch, logs={}):\n        return\n\n    def on_epoch_end(self, epoch, logs={}):\n        y_pred_train = self.model.predict_proba(self.x)\n        roc_train = roc_auc_score(self.y, y_pred_train)\n        y_pred_val = self.model.predict_proba(self.x_val)\n        roc_val = roc_auc_score(self.y_val, y_pred_val)\n        print('\\rroc-auc_train: %s - roc-auc_val: %s' % (str(round(roc_train,4)),str(round(roc_val,4))),end=100*' '+'\\n')\n        return\n\n    def on_batch_begin(self, batch, logs={}):\n        return\n\n    def on_batch_end(self, batch, logs={}):\n        return\n\nroc = RocCallback(training_data=(X_train, y_train),\n                  validation_data=(X_test, y_test))\n\nmodel.fit(X_train, y_train, \n          validation_data=(X_test, y_test),\n          callbacks=[roc])\n
\n

A more hackable way using tf.contrib.metrics.streaming_auc:

\n
import numpy as np\nimport tensorflow as tf\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.datasets import make_classification\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.utils import np_utils\nfrom keras.callbacks import Callback, EarlyStopping\n\n\n# define roc_callback, inspired by https://github.com/keras-team/keras/issues/6050#issuecomment-329996505\ndef auc_roc(y_true, y_pred):\n    # any tensorflow metric\n    value, update_op = tf.contrib.metrics.streaming_auc(y_pred, y_true)\n\n    # find all variables created for this metric\n    metric_vars = [i for i in tf.local_variables() if 'auc_roc' in i.name.split('/')[1]]\n\n    # Add metric variables to GLOBAL_VARIABLES collection.\n    # They will be initialized for new session.\n    for v in metric_vars:\n        tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)\n\n    # force to update metric values\n    with tf.control_dependencies([update_op]):\n        value = tf.identity(value)\n        return value\n\n# generation a small dataset\nN_all = 10000\nN_tr = int(0.7 * N_all)\nN_te = N_all - N_tr\nX, y = make_classification(n_samples=N_all, n_features=20, n_classes=2)\ny = np_utils.to_categorical(y, num_classes=2)\n\nX_train, X_valid = X[:N_tr, :], X[N_tr:, :]\ny_train, y_valid = y[:N_tr, :], y[N_tr:, :]\n\n# model & train\nmodel = Sequential()\nmodel.add(Dense(2, activation="softmax", input_shape=(X.shape[1],)))\n\nmodel.compile(loss='categorical_crossentropy',\n              optimizer='adam',\n              metrics=['accuracy', auc_roc])\n\nmy_callbacks = [EarlyStopping(monitor='auc_roc', patience=300, verbose=1, mode='max')]\n\nmodel.fit(X, y,\n          validation_split=0.3,\n          shuffle=True,\n          batch_size=32, nb_epoch=5, verbose=1,\n          callbacks=my_callbacks)\n\n# # or use independent valid set\n# model.fit(X_train, y_train,\n#           validation_data=(X_valid, y_valid),\n#           batch_size=32, nb_epoch=5, verbose=1,\n#          
 callbacks=my_callbacks)\n
\n", + "system": "" + }, + { + "instruction": "How to use return_sequences option and TimeDistributed layer in Keras?", + "input": "", + "output": "

The LSTM layer and the TimeDistributed wrapper are two different ways to get the \"many to many\" relationship that you want.

\n\n
    \n
  1. LSTM will eat the words of your sentence one by one; you can choose via \"return_sequences\" to output something (the state) at each step (after each word is processed) or to only output something after the last word has been eaten. So with return_sequences=True, the output will be a sequence of the same length; with return_sequences=False, the output will be just one vector.
  2. TimeDistributed. This wrapper allows you to apply one layer (say Dense, for example) to every element of your sequence independently. That layer will have exactly the same weights for every element; it's the same layer that will be applied to each word, and it will, of course, return the sequence of words processed independently.
\n\n

As you can see, the difference between the two is that the LSTM \"propagates\" the information through the sequence: it will eat one word, update its state, and return it or not; then it will go on with the next word while still carrying information from the previous ones. With TimeDistributed, the words are processed in the same way on their own, as if they were in silos, and the same layer applies to every one of them.

\n\n

So you don't have to use LSTM and TimeDistributed in a row; you can do whatever you want, just keep in mind what each of them does.

\n\n

I hope it's clearer?

\n\n

EDIT:

\n\n

The time distributed, in your case, applies a dense layer to every element that was output by the LSTM.

\n\n

Let's take an example:

\n\n

You have a sequence of n_words words that are embedded in emb_size dimensions. So your input is a 2D tensor of shape (n_words, emb_size)

\n\n

First you apply an LSTM with output dimension = lstm_output and return_sequences = True. The output will still be a sequence, so it will be a 2D tensor of shape (n_words, lstm_output). \nSo you have n_words vectors of length lstm_output.

\n\n

Now you apply a TimeDistributed dense layer with, say, 3 output dimensions as parameter of the Dense, i.e. TimeDistributed(Dense(3)). \nThis will apply Dense(3) n_words times, to every vector of size lstm_output in your sequence independently... they will all become vectors of length 3. Your output will still be a sequence, so a 2D tensor, now of shape (n_words, 3).
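This shape walk-through can be mimicked with plain numpy stand-ins (the sizes below are hypothetical): a TimeDistributed(Dense(3)) amounts to applying one shared (lstm_output, 3) weight matrix to every one of the n_words timesteps independently.

```python
# Numpy stand-in for TimeDistributed(Dense(3)) over an LSTM sequence output.
import numpy as np

n_words, lstm_output = 7, 32
lstm_out = np.random.random((n_words, lstm_output))  # LSTM output, return_sequences=True
W = np.random.random((lstm_output, 3))               # the single shared Dense(3) kernel

out = lstm_out @ W   # the same weights applied to each word independently
print(out.shape)  # (7, 3)
```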

\n\n

Is it clearer? :-)

\n", + "system": "" + }, + { + "instruction": "What is the correct way to change image channel ordering between channels first and channels last?", + "input": "", + "output": "

I agree with @Qualia 's comment, np.moveaxis(a, source, destination) is easier to understand. This does the job:

\n\n
x = np.zeros((12, 12, 3))\nx.shape\n#yields: \n(12, 12, 3)\n\nx = np.moveaxis(x, -1, 0)\nx.shape\n#yields: \n(3, 12, 12)\n
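Going the other way (channels-first back to channels-last) is the symmetric call: move axis 0 to the last position.

```python
# Reverse direction: channels-first -> channels-last.
import numpy as np

x = np.zeros((3, 12, 12))     # channels first
x = np.moveaxis(x, 0, -1)     # source axis 0 -> destination -1
print(x.shape)  # (12, 12, 3)
```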
\n", + "system": "" + }, + { + "instruction": "Show progress bar for each epoch during batchwise training in Keras", + "input": "", + "output": "
    \n
  1. \n
\n
model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)\n
\n

In the above change to verbose=2, as it is mentioned in the documentation:

\n
\n

verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch

\n
\n

It'll show your output as:

\n
Epoch 1/100\n0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750\nEpoch 2/100\n0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250\nEpoch 3/100\n0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250\n.....\n.....\n
\n
    \n
  2. \n
\n

If you want to show a progress bar for completion of epochs, keep verbose=0 (which shuts out logging to stdout) and implement in the following manner:

\n
from time import sleep\nimport sys\n\nepochs = 10\n\nfor e in range(epochs):\n    sys.stdout.write('\\r')\n\n    for X, y in data.next_batch():\n        model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0)\n\n    # print loss and accuracy\n\n    # the exact output you're looking for:\n    sys.stdout.write("[%-60s] %d%%" % ('='*(60*(e+1)/10), (100*(e+1)/10)))\n    sys.stdout.flush()\n    sys.stdout.write(", epoch %d"% (e+1))\n    sys.stdout.flush()\n
\n

The output will be as follows:

\n
[============================================================] 100%, epoch 10\n
\n
    \n
  3. \n
\n

If you want to show loss after every n batches, you can use:

\n
out_batch = NBatchLogger(display=1000)\nmodel.fit([X_train_aux,X_train_main],Y_train,batch_size=128,callbacks=[out_batch])\n
\n

Though, I haven't ever tried it before. The above example was taken from this keras github issue: Show Loss Every N Batches #2850

\n

You can also follow a demo of NBatchLogger here:

\n
class NBatchLogger(Callback):\n    def __init__(self, display):\n        self.seen = 0\n        self.display = display\n\n    def on_batch_end(self, batch, logs={}):\n        self.seen += logs.get('size', 0)\n        if self.seen % self.display == 0:\n            metrics_log = ''\n            for k in self.params['metrics']:\n                if k in logs:\n                    val = logs[k]\n                    if abs(val) > 1e-3:\n                        metrics_log += ' - %s: %.4f' % (k, val)\n                    else:\n                        metrics_log += ' - %s: %.4e' % (k, val)\n            print('{}/{} ... {}'.format(self.seen,\n                                        self.params['samples'],\n                                        metrics_log))\n
\n
    \n
  4. \n
\n

You can also use progbar for progress, but it'll print progress batchwise

\n
from keras.utils import generic_utils\n\nprogbar = generic_utils.Progbar(X_train.shape[0])\n\nfor X_batch, Y_batch in datagen.flow(X_train, Y_train):\n    loss, acc = model_test.train([X_batch]*2, Y_batch, accuracy=True)\n    progbar.add(X_batch.shape[0], values=[("train loss", loss), ("acc", acc)])\n
\n", + "system": "" + }, + { + "instruction": "Keras accuracy does not change", + "input": "", + "output": "

The most likely reason is that the optimizer is not suited to your dataset. Here is a list of Keras optimizers from the documentation.

\n\n

I recommend you first try SGD with default parameter values. If it still doesn't work, divide the learning rate by 10. Do that a few times if necessary. If your learning rate reaches 1e-6 and it still doesn't work, then you have another problem.

\n\n

In summary, replace this line:

\n\n
model.compile(loss = \"categorical_crossentropy\", optimizer = \"adam\")\n
\n\n

with this:

\n\n
from keras.optimizers import SGD\nopt = SGD(lr=0.01)\nmodel.compile(loss = \"categorical_crossentropy\", optimizer = opt)\n
\n\n

and change the learning rate a few times if it doesn't work.

\n\n

If it was the problem, you should see the loss getting lower after just a few epochs.

\n", + "system": "" + }, + { + "instruction": "How do I create a variable-length input LSTM in Keras?", + "input": "", + "output": "

I am not clear about the embedding procedure, but here is a way to implement a variable-length input LSTM. Just do not specify the timespan dimension when building the LSTM.

\n\n\n\n
import keras.backend as K\nfrom keras.layers import LSTM, Input\n\nI = Input(shape=(None, 200)) # unknown timespan, fixed feature size\nlstm = LSTM(20)\nf = K.function(inputs=[I], outputs=[lstm(I)])\n\nimport numpy as np\ndata1 = np.random.random(size=(1, 100, 200)) # batch_size = 1, timespan = 100\nprint(f([data1])[0].shape)\n# (1, 20)\n\ndata2 = np.random.random(size=(1, 314, 200)) # batch_size = 1, timespan = 314\nprint(f([data2])[0].shape)\n# (1, 20)\n
\n", + "system": "" + }, + { + "instruction": "Meaning of validation_steps in Keras Sequential fit_generator parameter list", + "input": "", + "output": "

The validation generator works exactly like the training generator. You define how many batches it will yield per epoch.

\n\n\n\n

But validation data has absolutely no relation to training data. \nThere is no need to separate validation batches according to training batches (I would even say that there is no point in doing that, unless you have a very specific intention). Also, the total number of samples in training data is not related to the total number of samples in test data.

\n\n

The point of having many batches is just to spare your computer's memory, so you test smaller packs one at a time. Probably, you find a batch size that will fit your memory or expected training time and use that size.

\n\n

That said, Keras gives you a totally free method, so you can determine the training and the validation batches as you wish.

\n\n

Epochs:

\n\n

Ideally, you use all your validation data at once. If you use only part of your validation data, you will get different metrics for each batch, which may make you think that your model got worse or better when it actually didn't; you just measured different validation sets.

\n\n

That's why they suggest validation_steps = total_validation_samples // validation_batch_size.
\nTheoretically, you test your entire data every epoch, as you theoretically should also train your entire data every epoch.
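The suggested formula with hypothetical numbers: it gives exactly enough batches to cover the whole validation set once per epoch.

```python
# validation_steps = total_validation_samples // validation_batch_size
total_validation_samples = 1000
validation_batch_size = 32

validation_steps = total_validation_samples // validation_batch_size
print(validation_steps)  # 31; the remaining 1000 - 31*32 = 8 samples are skipped
```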

\n\n

So, theoretically, each epoch yields:

\n\n\n\n

Basically, the two vars are: how many batches per epoch you will yield.
\nThis makes sure that at each epoch:

\n\n\n\n

Nevertheless, it's totally up to you how you separate your training and validation data.

\n\n

If you do want to have one different batch per epoch (epochs using less than your entire data), that's fine; just pass steps_per_epoch=1 or validation_steps=1, for instance. The generator is not reset after each epoch, so the second epoch will take the second batch, and so on, until it loops back to the first batch.

\n\n

I prefer training the entire data per epoch, and if the time is too long, I use a callback that shows the logs at the end of each batch:

\n\n
from keras.callbacks import LambdaCallback\n\ncallbacks = [LambdaCallback(on_batch_end=lambda batch, logs: print(logs))]\n
\n\n

Multiprocessing

\n\n

I was never able to use use_multiprocessing=True, it freezes at the start of the first epoch.

\n\n

I've noticed the workers are related to how many batches are preloaded from the generator. If you define max_queue_size=1, you will have exactly workers amount of batches preloaded.

\n\n

They suggest you use keras Sequences when multiprocessing. A Sequence works pretty much like a generator, but it keeps track of the order/position of each batch.

\n", + "system": "" + }, + { + "instruction": "Keras Conv2D and input channels", + "input": "", + "output": "

It might be confusing that it is called Conv2D layer (it was to me, which is why I came looking for this answer), because as Nilesh Birari commented:

\n\n
\n

I guess you are missing its 3D kernel [width, height, depth]. So the result is summation across channels.

\n
\n\n

Perhaps the 2D stems from the fact that the kernel only slides along two dimensions, the third dimension is fixed and determined by the number of input channels (the input depth).

\n\n
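A minimal NumPy sketch of that idea (one kernel, 'valid' padding, channels-last layout assumed; this illustrates the arithmetic, not Keras internals):

```python
import numpy as np

x = np.random.rand(5, 5, 3)   # one 5x5 RGB image, channels last
k = np.random.rand(2, 2, 3)   # a single "2D" kernel that is really 3D: (h, w, in_channels)

# The kernel slides over height and width only; the channel axis is
# consumed entirely at every position, so the output is a 2D feature map.
out = np.zeros((4, 4))        # 'valid' output size: 5 - 2 + 1 = 4
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(x[i:i+2, j:j+2, :] * k)  # multiply-and-sum over h, w and depth

print(out.shape)  # (4, 4)
```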

For a more elaborate explanation, read https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/

\n\n

I plucked an illustrative image from there:

\n\n

\"kernel

\n", + "system": "" + }, + { + "instruction": "How do you create a custom activation function with Keras?", + "input": "", + "output": "

Credits to this Github issue comment by Ritchie Ng.

\n\n
# Creating a model\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\n# Custom activation function\nfrom keras.layers import Activation\nfrom keras import backend as K\nfrom keras.utils.generic_utils import get_custom_objects\n\n\ndef custom_activation(x):\n    return (K.sigmoid(x) * 5) - 1\n\nget_custom_objects().update({'custom_activation': Activation(custom_activation)})\n\n# Usage\nmodel = Sequential()\nmodel.add(Dense(32, input_dim=784))\nmodel.add(Activation(custom_activation, name='SpecialActivation'))\nprint(model.summary())\n
\n\n

Please keep in mind that you have to import this function when you save and restore the model. See the note of keras-contrib.

\n", + "system": "" + }, + { + "instruction": "Why does prediction needs batch size in Keras?", + "input": "", + "output": "

Keras can predict multiple values at the same time: if you input a batch of 100 samples, Keras computes one prediction for each sample, giving 100 outputs. This computation can also be done in batches, defined by the batch_size.

\n\n
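A rough sketch of what batching amounts to (a simple doubling function stands in for a real model here):

```python
import numpy as np

def predict_in_batches(model_fn, x, batch_size):
    # Process the inputs in chunks so that only batch_size samples
    # need to be in memory at any one time
    outs = [model_fn(x[i:i + batch_size]) for i in range(0, len(x), batch_size)]
    return np.concatenate(outs)

x = np.arange(10, dtype=float)
full = x * 2                                                    # "predict" on everything at once
batched = predict_in_batches(lambda b: b * 2, x, batch_size=3)  # same thing, 3 samples at a time

assert np.allclose(full, batched)  # identical predictions, smaller memory peaks
```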

This is just in case you cannot fit all the data in the CPU/GPU RAM at the same time and batch processing is needed.

\n", + "system": "" + }, + { + "instruction": "What is the difference between an Embedding Layer and a Dense Layer?", + "input": "", + "output": "

An embedding layer is faster, because it is essentially the equivalent of a dense layer that makes simplifying assumptions.

\n\n

Imagine a word-to-embedding layer with these weights:

\n\n
w = [[0.1, 0.2, 0.3, 0.4],\n     [0.5, 0.6, 0.7, 0.8],\n     [0.9, 0.0, 0.1, 0.2]]\n
\n\n

A Dense layer will treat these like actual weights with which to perform matrix multiplication. An embedding layer will simply treat these weights as a list of vectors, each vector representing one word; the 0th word in the vocabulary is w[0], 1st is w[1], etc.

\n\n
\n\n

For an example, use the weights above and this sentence:

\n\n
[0, 2, 1, 2]\n
\n\n

A naive Dense-based net needs to convert that sentence to a 1-hot encoding

\n\n
[[1, 0, 0],\n [0, 0, 1],\n [0, 1, 0],\n [0, 0, 1]]\n
\n\n

then do a matrix multiplication

\n\n
[[1 * 0.1 + 0 * 0.5 + 0 * 0.9, 1 * 0.2 + 0 * 0.6 + 0 * 0.0, 1 * 0.3 + 0 * 0.7 + 0 * 0.1, 1 * 0.4 + 0 * 0.8 + 0 * 0.2],\n [0 * 0.1 + 0 * 0.5 + 1 * 0.9, 0 * 0.2 + 0 * 0.6 + 1 * 0.0, 0 * 0.3 + 0 * 0.7 + 1 * 0.1, 0 * 0.4 + 0 * 0.8 + 1 * 0.2],\n [0 * 0.1 + 1 * 0.5 + 0 * 0.9, 0 * 0.2 + 1 * 0.6 + 0 * 0.0, 0 * 0.3 + 1 * 0.7 + 0 * 0.1, 0 * 0.4 + 1 * 0.8 + 0 * 0.2],\n [0 * 0.1 + 0 * 0.5 + 1 * 0.9, 0 * 0.2 + 0 * 0.6 + 1 * 0.0, 0 * 0.3 + 0 * 0.7 + 1 * 0.1, 0 * 0.4 + 0 * 0.8 + 1 * 0.2]]\n
\n\n

=

\n\n
[[0.1, 0.2, 0.3, 0.4],\n [0.9, 0.0, 0.1, 0.2],\n [0.5, 0.6, 0.7, 0.8],\n [0.9, 0.0, 0.1, 0.2]]\n
\n\n
\n\n

However, an Embedding layer simply looks at [0, 2, 1, 2] and takes the weights of the layer at indices zero, two, one, and two to immediately get

\n\n
[w[0],\n w[2],\n w[1],\n w[2]]\n
\n\n

=

\n\n
[[0.1, 0.2, 0.3, 0.4],\n [0.9, 0.0, 0.1, 0.2],\n [0.5, 0.6, 0.7, 0.8],\n [0.9, 0.0, 0.1, 0.2]]\n
\n\n

So it's the same result, just obtained in a hopefully faster way.

\n\n
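The equivalence can be checked directly in NumPy, using the same weights and sentence as above:

```python
import numpy as np

w = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8],
              [0.9, 0.0, 0.1, 0.2]])
sentence = np.array([0, 2, 1, 2])

# Dense-style: build the one-hot matrix, then matrix-multiply
one_hot = np.eye(3)[sentence]
dense_result = one_hot @ w

# Embedding-style: plain row lookup, no multiplication at all
embedding_result = w[sentence]

assert np.allclose(dense_result, embedding_result)
```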
\n\n

The Embedding layer does have limitations:

\n\n\n\n

However, none of those limitations should matter if you just want to convert an integer-encoded word into an embedding.

\n", + "system": "" + }, + { + "instruction": "Keras: How to save model and continue training?", + "input": "", + "output": "

As it's quite difficult to clarify where the problem is, I created a toy example from your code, and it seems to work alright.

\n\n
import numpy as np\nfrom numpy.testing import assert_allclose\nfrom keras.models import Sequential, load_model\nfrom keras.layers import LSTM, Dropout, Dense\nfrom keras.callbacks import ModelCheckpoint\n\nvec_size = 100\nn_units = 10\n\nx_train = np.random.rand(500, 10, vec_size)\ny_train = np.random.rand(500, vec_size)\n\nmodel = Sequential()\nmodel.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(n_units, return_sequences=True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(n_units))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(vec_size, activation='linear'))\nmodel.compile(loss='mean_squared_error', optimizer='adam')\n\n# define the checkpoint\nfilepath = \"model.h5\"\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]\n\n# fit the model\nmodel.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)\n\n# load the model\nnew_model = load_model(filepath)\nassert_allclose(model.predict(x_train),\n                new_model.predict(x_train),\n                1e-5)\n\n# fit the model\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]\nnew_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)\n
\n\n

The loss continues to decrease after model loading. (restarting python also gives no problem)

\n\n
Using TensorFlow backend.\nEpoch 1/5\n500/500 [==============================] - 2s - loss: 0.3216     Epoch 00000: loss improved from inf to 0.32163, saving model to model.h5\nEpoch 2/5\n500/500 [==============================] - 0s - loss: 0.2923     Epoch 00001: loss improved from 0.32163 to 0.29234, saving model to model.h5\nEpoch 3/5\n500/500 [==============================] - 0s - loss: 0.2542     Epoch 00002: loss improved from 0.29234 to 0.25415, saving model to model.h5\nEpoch 4/5\n500/500 [==============================] - 0s - loss: 0.2086     Epoch 00003: loss improved from 0.25415 to 0.20860, saving model to model.h5\nEpoch 5/5\n500/500 [==============================] - 0s - loss: 0.1725     Epoch 00004: loss improved from 0.20860 to 0.17249, saving model to model.h5\n\nEpoch 1/5\n500/500 [==============================] - 0s - loss: 0.1454     Epoch 00000: loss improved from inf to 0.14543, saving model to model.h5\nEpoch 2/5\n500/500 [==============================] - 0s - loss: 0.1289     Epoch 00001: loss improved from 0.14543 to 0.12892, saving model to model.h5\nEpoch 3/5\n500/500 [==============================] - 0s - loss: 0.1169     Epoch 00002: loss improved from 0.12892 to 0.11694, saving model to model.h5\nEpoch 4/5\n500/500 [==============================] - 0s - loss: 0.1097     Epoch 00003: loss improved from 0.11694 to 0.10971, saving model to model.h5\nEpoch 5/5\n500/500 [==============================] - 0s - loss: 0.1057     Epoch 00004: loss improved from 0.10971 to 0.10570, saving model to model.h5\n
\n\n

BTW, redefining the model followed by load_weights() definitely won't work, because save_weights() and load_weights() do not save/load the optimizer state.

\n", + "system": "" + }, + { + "instruction": "When does keras reset an LSTM state?", + "input": "", + "output": "

Checking with some tests, I reached the following conclusion, which is in line with the documentation and with Nassim's answer:

\n

First, there isn't a single state in a layer, but one state per sample in the batch. There are batch_size parallel states in such a layer.

\n

Stateful=False

\n

In a stateful=False case, all the states are reset together after each batch.

\n\n

If all those sequences have length (timesteps) = 7, the practical result of these two batches is:

\n
\n

20 individual sequences, each with length 7

\n
\n

None of the sequences are related. But of course: the weights (not the states) will be unique for the layer, and will represent what the layer has learned from all the sequences.

\n\n

Stateful=True

\n

In this case, there is also the same number of parallel states, but they will simply not be reset at all.

\n\n

The practical result is: the 10 sequences in the second batch are just continuing the 10 sequences of the first batch, as if there had been no interruption at all.

\n
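In NumPy terms, the two batches relate like this (the shapes are assumptions for illustration):

```python
import numpy as np

full = np.random.rand(10, 14, 3)   # 10 long sequences: 14 timesteps, 3 features each

batch_1 = full[:, :7, :]           # first batch: the first 7 timesteps of every sequence
batch_2 = full[:, 7:, :]           # second batch: the remaining 7 timesteps

# Sample i in batch_2 must be the continuation of sample i in batch_1
assert batch_1.shape == (10, 7, 3)
assert batch_2.shape == (10, 7, 3)
assert np.allclose(np.concatenate([batch_1, batch_2], axis=1), full)
```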

If each sequence has length (timesteps) = 7, then the actual meaning is:

\n
\n

10 individual sequences, each with length 14

\n
\n

When you see that you have reached the total length of the sequences, you call model.reset_states(), meaning you will not continue the previous sequences anymore; now you will start feeding new sequences.

\n", + "system": "" + }, + { + "instruction": "What is the difference between Keras' MaxPooling1D and GlobalMaxPooling1D functions?", + "input": "", + "output": "

TL;DR: GlobalMaxPooling1D for temporal data takes the max vector over the\nsteps dimension. So a tensor with shape [10, 4, 10] becomes a tensor with shape [10, 10] after global pooling. MaxPooling1D also takes the max over the steps, but constrained to a pool_size for each stride. So a [10, 4, 10] tensor with pool_size=2 and strides=1 becomes a [10, 3, 10] tensor after MaxPooling1D(pool_size=2, strides=1)

\n

Long answer with graphic aid

\n

Let's say we have a simple sentence with 4 words and we have some vector encoding for the words (like word2vec embeddings). Of course you won't normally max pool over an embedding Tensor, but this should do for an example. Also, global pooling works across channels, but I'll leave that out of this illustration. Finally, things get slightly more complicated with padding, but we don't need that here either.

\n

Suppose we have MaxPooling1D(pool_size=2, strides=1). Then

\n
the  [[.7, -0.2, .1]   | pool size is two                  \nboy   [.8, -.3,  .2]   | so look at two words at a time    | stride=1 will\nwill  [.2, -.1,  .4]     and take the max over those       | move the pool down\nlive  [.4  -.4,  .8]]    2 vectors. Here we looking         1 word. Now we look  \n                            'the' and 'boy'.                'boy' and 'will' and \n                                                            take the max.\n
\n

So that will result in a [1, 3, 3] Tensor with each timestep being the max over a 2D pool. And since we had 3 pools, we have effectively downsampled our timesteps from 4 to 3.

\n

However, if we use GlobalMaxPooling1D we will just take the max vector of that sentence (Tensor) which is probably the vector representation of the word 'live'.

\n

Indeed, here is how GlobalMaxPooling1D is defined in keras

\n
class GlobalMaxPooling1D(_GlobalPooling1D):\n    """Global max pooling operation for temporal data.\n    # Input shape\n        3D tensor with shape: `(batch_size, steps, features)`.\n    # Output shape\n        2D tensor with shape:\n        `(batch_size, features)`\n    """\n\n    def call(self, inputs):\n        return K.max(inputs, axis=1)\n
\n

Hopefully that helps; please ask me to clarify anything.

\n

Additionally, here is an example that you can play with:

\n
import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, GlobalMaxPooling1D, MaxPooling1D\n\nD = np.random.rand(10, 6, 10)\n\nmodel = Sequential()\nmodel.add(LSTM(16, input_shape=(6, 10), return_sequences=True))\nmodel.add(MaxPooling1D(pool_size=2, strides=1))\nmodel.add(LSTM(10))\nmodel.add(Dense(1))\nmodel.compile(loss='binary_crossentropy', optimizer='sgd')\n\n# print the summary to see how the dimension change after the layers are \n# applied\n\nprint(model.summary())\n\n# try a model with GlobalMaxPooling1D now\n\nmodel = Sequential()\nmodel.add(LSTM(16, input_shape=(6, 10), return_sequences=True))\nmodel.add(GlobalMaxPooling1D())\nmodel.add(Dense(1))\nmodel.compile(loss='binary_crossentropy', optimizer='sgd')\n\nprint(model.summary())\n
\n", + "system": "" + }, + { + "instruction": "How to switch Backend with Keras (from TensorFlow to Theano)", + "input": "", + "output": "

Create a .keras (note the . in front) folder in your home directory and put the keras.json file there.

\n\n

For example, /home/DaniPaniz/.keras/keras.json (or ~/.keras/keras.json in short) if you are on a UNIX like system (MacOS X, Linux, *BSD). On Windows you want to create the folder %USERPROFILE%/.keras and put the JSON file there.

\n\n
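For reference, a minimal keras.json selecting the Theano backend might look like this (the non-backend fields are shown with their usual defaults; exact fields can vary between Keras versions):

```json
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
```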

Alternatively, you can also set the environment variable KERAS_BACKEND:

\n\n
KERAS_BACKEND=theano python mymodel.py\n
\n", + "system": "" + }, + { + "instruction": "preprocess_input() method in keras", + "input": "", + "output": "

Keras works with batches of images. So, the first dimension is used for the number of samples (or images) you have.

\n\n

When you load a single image, you get the shape of one image, which is (size1,size2,channels).

\n\n

In order to create a batch of images, you need an additional dimension: (samples, size1,size2,channels)

\n\n
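Adding that extra dimension is a one-liner (the image size below is just an example):

```python
import numpy as np

img = np.zeros((224, 224, 3))          # a single image: (size1, size2, channels)
batch = np.expand_dims(img, axis=0)    # a batch of one: (samples, size1, size2, channels)

assert batch.shape == (1, 224, 224, 3)
```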

The preprocess_input function is meant to adapt your image to the format the model requires.

\n\n

Some models use images with values ranging from 0 to 1. Others from -1 to +1. Others use the \"caffe\" style, which is not normalized, but is centered.

\n\n

From the source code, Resnet is using the caffe style.

\n\n

You don't need to worry about the internal details of preprocess_input. But ideally, you should load images with the keras functions for that (so you guarantee that the images you load are compatible with preprocess_input).

\n", + "system": "" + }, + { + "instruction": "Received a label value of 1 which is outside the valid range of [0, 1) - Python, Keras", + "input": "", + "output": "

Range [0, 1) means every number between 0 and 1, excluding 1. So 1 is not a value in the range [0, 1).

\n\n
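As a sketch of why the check exists (this illustrates the indexing idea, not Keras internals): sparse labels are used to index the model's output vector, so the number of output units bounds the valid label values.

```python
import numpy as np

pred = np.array([1.0])        # one output unit: valid labels are in [0, 1), i.e. only 0
label = 1
assert not (0 <= label < pred.shape[0])   # label 1 is out of range for 1 output unit

pred2 = np.array([0.3, 0.7])  # two output units: labels 0 and 1 are both valid
assert 0 <= label < pred2.shape[0]
```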

I am not 100% sure, but the issue could be due to your choice of loss function. For a binary classification, binary_crossentropy should be a better choice.

\n", + "system": "" + }, + { + "instruction": "cannot import name 'pad_sequences' from 'keras.preprocessing.sequence'", + "input": "", + "output": "

Replace:

\n
from keras.preprocessing.sequence import pad_sequences\n
\n

With:

\n
from keras_preprocessing.sequence import pad_sequences\n
\n", + "system": "" + }, + { + "instruction": "What values are returned from model.evaluate() in Keras?", + "input": "", + "output": "

Quoted from evaluate() method documentation:

\n
\n

Returns

\n

Scalar test loss (if the model has a single output and no metrics) or\nlist of scalars (if the model has multiple outputs and/or metrics).\nThe attribute model.metrics_names will give you the display labels\nfor the scalar outputs.

\n
\n

Therefore, you can use metrics_names property of your model to find out what each of those values corresponds to. For example:

\n
from keras import layers\nfrom keras import models\nimport numpy as np\n\ninput_data = layers.Input(shape=(100,)) \nout_1 = layers.Dense(1)(input_data)\nout_2 = layers.Dense(1)(input_data)\n\nmodel = models.Model(input_data, [out_1, out_2])\nmodel.compile(loss='mse', optimizer='adam', metrics=['mae'])\n\nprint(model.metrics_names)\n
\n

outputs the following:

\n
['loss', 'dense_1_loss', 'dense_2_loss', 'dense_1_mean_absolute_error', 'dense_2_mean_absolute_error']\n
\n

which indicates what each of those numbers you see in the output of evaluate method corresponds to.

\n

Further, if you have many layers then those dense_1 and dense_2 names might be a bit ambiguous. To resolve this ambiguity, you can assign names to your layers using name argument of layers (not necessarily on all of them but only on the input and output layers):

\n
# ...\nout_1 = layers.Dense(1, name='output_1')(input_data)\nout_2 = layers.Dense(1, name='output_2')(input_data)\n# ...\n\nprint(model.metrics_names)\n
\n

which outputs a more clear description:

\n
['loss', 'output_1_loss', 'output_2_loss', 'output_1_mean_absolute_error', 'output_2_mean_absolute_error']\n
\n", + "system": "" + }, + { + "instruction": "How to calculate precision and recall in Keras", + "input": "", + "output": "

Python package keras-metrics could be useful for this (I'm the package's author).

\n\n
import keras\nimport keras_metrics\n\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.Dense(1, activation=\"sigmoid\", input_dim=2))\nmodel.add(keras.layers.Dense(1, activation=\"softmax\"))\n\nmodel.compile(optimizer=\"sgd\",\n              loss=\"binary_crossentropy\",\n              metrics=[keras_metrics.precision(), keras_metrics.recall()])\n
\n\n

UPDATE: Starting with Keras version 2.3.0, such metrics as precision, recall, etc. are provided within library distribution package.

\n\n

The usage is the following:

\n\n
model.compile(optimizer=\"sgd\",\n              loss=\"binary_crossentropy\",\n              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])\n
\n", + "system": "" + }, + { + "instruction": "What does the standard Keras model output mean? What is epoch and loss in Keras?", + "input": "", + "output": "

Just to answer the questions more specifically, here's a definition of epoch and loss:

\n\n

Epoch: A full pass over all of your training data.

\n\n

For example, in your view above, you have 1213 observations. So an epoch concludes when it has finished a training pass over all 1213 of your observations.

\n\n

Loss: A scalar value that we attempt to minimize during our training of the model. The lower the loss, the closer our predictions are to the true labels.

\n\n

This is usually Mean Squared Error (MSE) as David Maust said above, or often in Keras, Categorical Cross Entropy

\n\n
\n\n

What you'd expect to see from running fit on your Keras model, is a decrease in loss over n number of epochs. Your training run is rather abnormal, as your loss is actually increasing. This could be due to a learning rate that is too large, which is causing you to overshoot optima.

\n\n

As jaycode mentioned, you will want to look at your model's performance on unseen data, as this is the general use case of Machine Learning.

\n\n

As such, you should include a list of metrics in your compile method, which could look like:

\n\n
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])\n
\n\n

As well as run your model on validation during the fit method, such as:

\n\n
model.fit(data, labels, validation_split=0.2)\n
\n\n
\n\n

There's a lot more to explain, but hopefully this gets you started.

\n", + "system": "" + }, + { + "instruction": "How to predict input image using trained model in Keras?", + "input": "", + "output": "

If someone is still struggling to make predictions on images, here is the optimized code to load the saved model and make predictions:

\n\n
# Modify 'test1.jpg' and 'test2.jpg' to the images you want to predict on\n\nfrom keras.models import load_model\nfrom keras.preprocessing import image\nimport numpy as np\n\n# dimensions of our images\nimg_width, img_height = 320, 240\n\n# load the model we saved\nmodel = load_model('model.h5')\nmodel.compile(loss='binary_crossentropy',\n              optimizer='rmsprop',\n              metrics=['accuracy'])\n\n# predicting images\nimg = image.load_img('test1.jpg', target_size=(img_width, img_height))\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\n\nimages = np.vstack([x])\nclasses = model.predict_classes(images, batch_size=10)\nprint(classes)\n\n# predicting multiple images at once\nimg = image.load_img('test2.jpg', target_size=(img_width, img_height))\ny = image.img_to_array(img)\ny = np.expand_dims(y, axis=0)\n\n# pass the list of multiple images to np.vstack()\nimages = np.vstack([x, y])\nclasses = model.predict_classes(images, batch_size=10)\n\n# print the classes the images belong to\nprint(classes)\nprint(classes[0])\nprint(classes[0][0])\n
\n", + "system": "" + }, + { + "instruction": "What does Keras.io.preprocessing.sequence.pad_sequences do?", + "input": "", + "output": "

pad_sequences is used to ensure that all sequences in a list have the same length. By default this is done by padding 0s at the beginning of each sequence until each sequence has the same length as the longest sequence.

\n\n

For example

\n\n
>>> pad_sequences([[1, 2, 3], [3, 4, 5, 6], [7, 8]])\narray([[0, 1, 2, 3],\n       [3, 4, 5, 6],\n       [0, 0, 7, 8]], dtype=int32)\n
\n\n

[3, 4, 5, 6] is the longest sequence, so the other sequences are padded with 0 until their length matches that of [3, 4, 5, 6].

\n\n

If you rather want to pad to the end of the sequences you can set padding='post'.

\n\n

If you want to specify the maximum length of each sequence you can use the maxlen argument. This will truncate all sequences longer than maxlen.

\n\n
>>> pad_sequences([[1, 2, 3], [3, 4, 5, 6], [7, 8]], maxlen=3)\narray([[1, 2, 3],\n       [4, 5, 6],\n       [0, 7, 8]], dtype=int32)\n
\n\n

Now each sequence has length 3 instead.

\n\n

According to the documentation, you can control truncation with the truncating argument of pad_sequences. By default truncating is set to pre, which truncates the beginning of the sequence. If you rather want to truncate the end of the sequence you can set it to post.

\n", + "system": "" + }, + { + "instruction": "Calling "fit" multiple times in Keras", + "input": "", + "output": "

This question was raised at the Keras github repository in Issue #4446: Quick Question: can a model be fit for multiple times? It was closed by Fran\u00e7ois Chollet with the following statement:

\n\n
\n

Yes, successive calls to fit will incrementally train the model.

\n
\n\n

So, yes, you can call fit multiple times.

\n", + "system": "" + }, + { + "instruction": "What is the purpose of the add_loss function in Keras?", + "input": "", + "output": "

I'll try to answer the original question of why model.add_loss() is being used instead of specifying a custom loss function to model.compile(loss=...).

\n\n

All loss functions in Keras always take two parameters y_true and y_pred. Have a look at the definition of the various standard loss functions available in Keras, they all have these two parameters. They are the 'targets' (the Y variable in many textbooks) and the actual output of the model. Most standard loss functions can be written as an expression of these two tensors. But some more complex losses cannot be written in that way. For your VAE example this is the case because the loss function also depends on additional tensors, namely z_log_var and z_mean, which are not available to the loss functions. Using model.add_loss() has no such restriction and allows you to write much more complex losses that depend on many other tensors, but it has the inconvenience of being more dependent on the model, whereas the standard loss functions work with just any model.

\n\n

(Note: The code proposed in other answers here is somewhat cheating, inasmuch as it just uses global variables to sneak in the additional required dependencies. This makes the loss function not a true function in the mathematical sense. I consider this to be much less clean code and I expect it to be more error-prone.)

\n", + "system": "" + }, + { + "instruction": "What is the use of train_on_batch() in keras?", + "input": "", + "output": "

For this question, it's a simple answer from the primary author:

\n\n
\n

With fit_generator, you can use a generator for the validation data as\n well. In general, I would recommend using fit_generator, but using\n train_on_batch works fine too. These methods only exist for the sake of\n convenience in different use cases, there is no \"correct\" method.

\n
\n\n

train_on_batch allows you to expressly update weights based on a collection of samples you provide, without regard to any fixed batch size. You would use this in cases when that is what you want: to train on an explicit collection of samples. You could use that approach to maintain your own iteration over multiple batches of a traditional training set but allowing fit or fit_generator to iterate batches for you is likely simpler.

\n\n

One case when it might be nice to use train_on_batch is for updating a pre-trained model on a single new batch of samples. Suppose you've already trained and deployed a model, and sometime later you've received a new set of training samples previously never used. You could use train_on_batch to directly update the existing model only on those samples. Other methods can do this too, but it is rather explicit to use train_on_batch for this case.

\n\n

Apart from special cases like this (either where you have some pedagogical reason to maintain your own cursor across different training batches, or else for some type of semi-online training update on a special batch), it is probably better to just always use fit (for data that fits in memory) or fit_generator (for streaming batches of data as a generator).

\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name np_utils", + "input": "", + "output": "

np_utils is a separate package (and a keras dependency - which doesn't get installed with it). It can be installed using pip:

\n\n
pip install np_utils\n
\n\n

using - Keras==2.0.6

\n\n
\n\n

Suggestion:\nFor some odd (and still unknown) reason, even after installing the package, the import

\n\n
from keras.utils.np_utils import to_categorical\n
\n\n

didn't work - I had to restart the notebook (even the first restart didn't work), and once it worked, I got stuck again on the same import call (it raised an exception of no module named tensorflow) - because in utils there's another import, from . import conv_utils, which requires tensorflow.

\n\n

I did try installing tensorflow using pip install tensorflow gave:

\n\n
\n

Could not find a version that satisfies the requirement tensorflow\n (from versions: ) No matching distribution found for tensorflow

\n
\n\n

even this gist didn't work for me.

\n\n
\n\n

Finally, I installed Anaconda - which have all the scientific packages (numpy, scipy, scikit-learn,..) pre-installed. Installed keras:

\n\n
conda install keras\n
\n\n

Best of all, it even installed tensorflow as a dependency.

\n", + "system": "" + }, + { + "instruction": "What is "epoch" in keras.models.Model.fit?", + "input": "", + "output": "

Here is how Keras documentation defines an epoch:

\n\n
\n

Epoch: an arbitrary cutoff, generally defined as \"one pass over the entire dataset\", used to separate training into distinct phases, which is useful for logging and periodic evaluation.

\n
\n\n

So, in other words, a number of epochs means how many times you go through your training set.

\n\n
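To make the arithmetic concrete (the sample count and batch size below are made up):

```python
import math

n_samples = 1000   # size of the training set
batch_size = 32

# One epoch = one pass over all 1000 samples;
# the weights are updated once per processed batch
updates_per_epoch = math.ceil(n_samples / batch_size)
assert updates_per_epoch == 32   # 31 full batches plus 1 partial batch

# With batch_size equal to the dataset size, there is a single update per epoch
assert math.ceil(n_samples / n_samples) == 1
```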

The model is updated each time a batch is processed, which means that it can be updated multiple times during one epoch. If batch_size is set equal to the length of x, then the model will be updated once per epoch.

\n", + "system": "" + }, + { + "instruction": "How to determine needed memory of Keras model?", + "input": "", + "output": "

I created a complete function based on the answer of Fabr\u00edcio Pereira.

\n
def get_model_memory_usage(batch_size, model):\n    import numpy as np\n    try:\n        from keras import backend as K\n    except:\n        from tensorflow.keras import backend as K\n\n    shapes_mem_count = 0\n    internal_model_mem_count = 0\n    for l in model.layers:\n        layer_type = l.__class__.__name__\n        if layer_type == 'Model':\n            internal_model_mem_count += get_model_memory_usage(batch_size, l)\n        single_layer_mem = 1\n        out_shape = l.output_shape\n        if type(out_shape) is list:\n            out_shape = out_shape[0]\n        for s in out_shape:\n            if s is None:\n                continue\n            single_layer_mem *= s\n        shapes_mem_count += single_layer_mem\n\n    trainable_count = np.sum([K.count_params(p) for p in model.trainable_weights])\n    non_trainable_count = np.sum([K.count_params(p) for p in model.non_trainable_weights])\n\n    number_size = 4.0\n    if K.floatx() == 'float16':\n        number_size = 2.0\n    if K.floatx() == 'float64':\n        number_size = 8.0\n\n    total_memory = number_size * (batch_size * shapes_mem_count + trainable_count + non_trainable_count)\n    gbytes = np.round(total_memory / (1024.0 ** 3), 3) + internal_model_mem_count\n    return gbytes\n
\n

UPDATE 2019.10.06: Added support for models which contain other models as layers.

\n

UPDATE 2020.07.17: Function now works correctly in TensorFlow v2.

\n", + "system": "" + }, + { + "instruction": "In Keras, how to get the layer name associated with a "Model" object contained in my model?", + "input": "", + "output": "

The key is to first do .get_layer on the Model object, then do another .get_layer on that specifying the specific vgg16 layer, THEN do .output:

\n\n

layer_output = model.get_layer('vgg16').get_layer('block3_conv1').output

\n", + "system": "" + }, + { + "instruction": "Feature Importance Chart in neural network using Keras in Python", + "input": "", + "output": "

I was recently looking for the answer to this question and found something that was useful for what I was doing and thought it would be helpful to share. I ended up using a permutation importance module from the eli5 package. It most easily works with a scikit-learn model. Luckily, Keras provides a wrapper for sequential models. As shown in the code below, using it is very straightforward.

\n\n
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor\nimport eli5\nfrom eli5.sklearn import PermutationImportance\n\ndef base_model():\n    model = Sequential()        \n    ...\n    return model\n\nX = ...\ny = ...\n\nmy_model = KerasRegressor(build_fn=base_model, **sk_params)    \nmy_model.fit(X,y)\n\nperm = PermutationImportance(my_model, random_state=1).fit(X,y)\neli5.show_weights(perm, feature_names = X.columns.tolist())\n
\n", + "system": "" + }, + { + "instruction": "How can I print the values of Keras tensors?", + "input": "", + "output": "

Keras' backend has print_tensor which enables you to do this. You can use it this way:

\n\n
import keras.backend as K\n\ndef loss_fn(y_true, y_pred):\n    y_true = K.print_tensor(y_true, message='y_true = ')\n    y_pred = K.print_tensor(y_pred, message='y_pred = ')\n    ...\n
\n\n

The function returns an identical tensor. When that tensor is evaluated, it will print its content, preceded by message.\nFrom the Keras docs:

\n\n
\n

Note that print_tensor returns a new tensor identical to x which should be used in the following code. Otherwise the print operation is not taken into account during evaluation.

\n
\n\n

So, make sure to use the tensor afterwards.

\n", + "system": "" + }, + { + "instruction": "Keras not using multiple cores", + "input": "", + "output": "

Keras and TF by themselves don't use all of your CPU's cores and capacity. If you are interested in using 100% of your CPU, then multiprocessing.Pool basically creates a pool of jobs that need doing. The processes will pick up these jobs and run them. When a job is finished, the process will pick up another job from the pool.

\n\n

NB: If you want to just speed up this model, look into GPUs or changing the hyperparameters like batch size and number of neurons (layer size).

\n\n

Here's how you can use multiprocessing to train multiple models at the same time (using processes running in parallel on each separate CPU core of your machine).

\n\n

This answer was inspired by @repploved

\n\n
import time\nimport signal\nimport multiprocessing\n\ndef init_worker():\n    ''' Add KeyboardInterrupt exception to mutliprocessing workers '''\n    signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n\ndef train_model(layer_size):\n    '''\n    This code is parallelized and runs on each process\n    It trains a model with different layer sizes (hyperparameters)\n    It saves the model and returns the score (error)\n    '''\n    import keras\n    from keras.models import Sequential\n    from keras.layers import Dense\n\n    print(f'Training a model with layer size {layer_size}')\n\n    # build your model here\n    model_RNN = Sequential()\n    model_RNN.add(Dense(layer_size))\n\n    # fit the model (the bit that takes time!)\n    model_RNN.fit(...)\n\n    # lets demonstrate with a sleep timer\n    time.sleep(5)\n\n    # save trained model to a file\n    model_RNN.save(...)\n\n    # you can also return values eg. the eval score\n    return model_RNN.evaluate(...)\n\n\nnum_workers = 4\nhyperparams = [800, 960, 1100]\n\npool = multiprocessing.Pool(num_workers, init_worker)\n\nscores = pool.map(train_model, hyperparams)\n\nprint(scores)\n
\n\n

Output:

\n\n
Training a model with layer size 800\nTraining a model with layer size 960\nTraining a model with layer size 1100\n[{'size':960,'score':1.0}, {'size':800,'score':1.2}, {'size':1100,'score':0.7}]\n
\n\n

This is easily demonstrated with a time.sleep in the code. You'll see that all 3 processes start the training job, and then they all finish at about the same time. If this was single processed, you'd have to wait for each to finish before starting the next (yawn!).

\n", + "system": "" + }, + { + "instruction": "How to understand SpatialDropout1D and when to use it?", + "input": "", + "output": "

To make it simple, I would first note that the so-called feature maps (1D, 2D, etc.) are just our regular channels. Let's look at some examples:

\n\n
    \n
  1. Dropout(): Let's define 2D input: [[1, 1, 1], [2, 2, 2]]. Dropout will consider every element independently, and may result in something like [[1, 0, 1], [0, 2, 2]]

  2. \n
  3. SpatialDropout1D(): In this case result will look like [[1, 0, 1], [2, 0, 2]]. Notice that 2nd element was zeroed along all channels.

  4. \n
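The difference can be sketched in plain numpy (a minimal illustration, not Keras' actual implementation; the rescaling of kept values that Keras applies is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 25, dtype=float).reshape(2, 4, 3)  # (batch, timesteps, channels)
rate = 0.5

# Regular Dropout: an independent keep/drop decision per element
elementwise = x * (rng.random(x.shape) >= rate)

# SpatialDropout1D: one keep/drop decision per (sample, channel),
# broadcast over the whole time axis
channel_mask = rng.random((2, 1, 3)) >= rate
spatial = x * channel_mask

# every channel column is either fully kept or fully zeroed
print(((spatial == 0).all(axis=1) | (spatial != 0).all(axis=1)).all())  # True
```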
\n", + "system": "" + }, + { + "instruction": "Loading model with custom loss + keras", + "input": "", + "output": "

Yes, there is! custom_objects expects the exact function that you used as loss function (the inner one in your case):

\n\n
model = load_model(modelFile, custom_objects={ 'loss': penalized_loss(noise) })\n
\n\n

Unfortunately keras won't store in the model the value of noise, so you need to feed it to the load_model function manually.

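A sketch of what such a parameterized loss typically looks like (the penalty term here is made up purely for illustration). The point is that the inner function is literally named loss, which is why that is the key you pass in custom_objects:

```python
def penalized_loss(noise):
    # the outer function only captures `noise`; Keras only ever calls `loss`
    def loss(y_true, y_pred):
        # hypothetical penalty, for illustration only
        return (y_pred - y_true) ** 2 + noise
    return loss

fn = penalized_loss(noise=0.1)
print(fn.__name__)  # prints: loss
```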
\n", + "system": "" + }, + { + "instruction": "What is "metrics" in Keras?", + "input": "", + "output": "

So in order to understand what metrics are, it's good to start by understanding what a loss function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a loss function.

\n\n

A loss is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. Knowing this, we can define what a metric is: a function that, given predicted values and ground truth values, provides you with a scalar measure of the \"fitness\" of your model to the data you have. So, as you may see, a loss function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of metric usage:

\n\n
    \n
  1. Measure the performance of your network using non-differentiable functions: e.g. accuracy is not differentiable (not even continuous), so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy.

  2. \n
  3. Obtain values of different loss functions when your final loss is a combination of a few of them: Let's assume that your loss has a regularization term which measures how your weights differ from 0, and a term which measures the fitness of your model. In this case, you could use metrics in order to have a separate track of how the fitness of your model changes across epochs.

  4. \n
  5. Track a measure with respect to which you don't want to directly optimize your model: let's assume that you are solving a multidimensional regression problem where you are mostly concerned about mse, but at the same time you are interested in how the cosine distance of your solution changes over time. Then, it's best to use metrics.

  6. \n
\n\n

I hope that the explanation presented above has made it obvious what metrics are used for, and why you could use multiple metrics in one model. So now, let's say a few words about the mechanics of their usage in keras. There are two ways of computing them while training:

\n\n
    \n
  1. Using metrics defined while compilation: this is what you directly asked. In this case, keras is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of keras.backend functions.

  2. \n
  3. Using keras.callback: It is nice that you can use Callbacks in order to compute your metrics. As each callback has a default attribute of model, you could compute a variety of metrics using model.predict or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own.

  4. \n
\n\n

Here you can find a list of available metrics, as well as an example on how you could define your own.

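As a tiny illustration of the first usage above (with made-up numbers): a loss such as MSE is differentiable in the predicted probabilities, while accuracy goes through a hard threshold and is not, yet both are perfectly good metrics:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])
y_prob = np.array([0.9, 0.4, 0.2, 0.8])  # predicted probabilities

# differentiable in y_prob -> usable as a loss (and as a metric)
mse = np.mean((y_prob - y_true) ** 2)

# hard threshold -> not differentiable, but a perfectly good metric
accuracy = np.mean((y_prob >= 0.5).astype(int) == y_true)

print(round(mse, 4), accuracy)  # 0.2125 0.75
```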
\n", + "system": "" + }, + { + "instruction": "RMSE/ RMSLE loss function in Keras", + "input": "", + "output": "

When you use a custom loss, you need to put it without quotes, as you pass the function object, not a string:

\n\n
def root_mean_squared_error(y_true, y_pred):\n        return K.sqrt(K.mean(K.square(y_pred - y_true))) \n\nmodel.compile(optimizer = \"rmsprop\", loss = root_mean_squared_error, \n              metrics =[\"accuracy\"])\n
\n", + "system": "" + }, + { + "instruction": "Getting gradient of model output w.r.t weights using Keras", + "input": "", + "output": "

To get the gradients of model output with respect to weights using Keras you have to use the Keras backend module. I created this simple example to illustrate exactly what to do:

\n
from keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras import backend as k\n\n\nmodel = Sequential()\nmodel.add(Dense(12, input_dim=8, init='uniform', activation='relu'))\nmodel.add(Dense(8, init='uniform', activation='relu'))\nmodel.add(Dense(1, init='uniform', activation='sigmoid'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n

To calculate the gradients we first need to find the output tensor. For the output of the model (what my initial question asked) we simply call model.output. We can also find the gradients of outputs for other layers by calling model.layers[index].output

\n
outputTensor = model.output #Or model.layers[index].output\n
\n

Then we need to choose the variables that are in respect to the gradient.

\n
  listOfVariableTensors = model.trainable_weights\n  #or variableTensors = model.trainable_weights[0]\n
\n

We can now calculate the gradients. It is as easy as the following:

\n
gradients = k.gradients(outputTensor, listOfVariableTensors)\n
\n

To actually run the gradients given an input, we need to use a bit of Tensorflow.

\n
trainingExample = np.random.random((1,8))\nsess = tf.InteractiveSession()\nsess.run(tf.initialize_all_variables())\nevaluated_gradients = sess.run(gradients,feed_dict={model.input:trainingExample})\n
\n

And that's it!

\n", + "system": "" + }, + { + "instruction": "Running the Tensorflow 2.0 code gives 'ValueError: tf.function-decorated function tried to create variables on non-first call'. What am I doing wrong?", + "input": "", + "output": "

As you are trying to use the function decorator in TF 2.0, please enable eager execution of functions by adding the line below after importing TensorFlow:

\n
tf.config.experimental_run_functions_eagerly(True)\n
\n

Since the above is deprecated (it is no longer experimental), please use the following instead:

\n
tf.config.run_functions_eagerly(True)\n\n
\n

If you want to know more do refer to this link.

\n", + "system": "" + }, + { + "instruction": "How do you use Keras LeakyReLU in Python?", + "input": "", + "output": "

All advanced activations in Keras, including LeakyReLU, are available as layers, and not as activations; therefore, you should use it as such:

\n\n
from keras.layers import LeakyReLU\n\n# instead of cnn_model.add(Activation('relu'))\n# use\ncnn_model.add(LeakyReLU(alpha=0.1))\n
\n", + "system": "" + }, + { + "instruction": "Batch normalization instead of input normalization", + "input": "", + "output": "

You can do it. But the nice thing about batchnorm, in addition to activation distribution stabilization, is that the mean and std deviation are likely to migrate as the network learns.

\n\n

Effectively, setting the batchnorm right after the input layer is a fancy data pre-processing step. It helps, sometimes a lot (e.g. in linear regression). But it's easier and more efficient to compute the mean and variance of the whole training sample once, than learn it per-batch. Note that batchnorm isn't free in terms of performance and you shouldn't abuse it.

\n\n
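A sketch of that one-off pre-processing step (compute the statistics once on the whole training set, then reuse the same statistics for validation/test data):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=3.0, scale=10.0, size=(1000, 5))

# computed ONCE, on the full training sample
mean, std = X_train.mean(axis=0), X_train.std(axis=0)

# reuse the same mean/std for test data too
X_norm = (X_train - mean) / std
print(X_norm.mean(axis=0).round(6), X_norm.std(axis=0).round(6))
```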
\n", + "system": "" + }, + { + "instruction": "How to calculate optimal batch size?", + "input": "", + "output": "

From the recent Deep Learning book by Goodfellow et al., chapter 8:

\n
\n

Minibatch sizes are generally driven by the following factors:

\n\n
\n

Which in practice usually means "in powers of 2 and the larger the better, provided that the batch fits into your (GPU) memory".

\n

You might want also to consult several good posts here in Stack Exchange:

\n\n

Just keep in mind that the paper by Keskar et al. 'On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima', quoted by several of the posts above, has received some objections by other respectable researchers of the deep learning community.

\n

UPDATE (Dec 2017):

\n

There is a new paper by Yoshua Bengio & team, Three Factors Influencing Minima in SGD (Nov 2017); it is worth reading in the sense that it reports new theoretical & experimental results on the interplay between learning rate and batch size.

\n

UPDATE (Mar 2021):

\n

Of interest here is also another paper from 2018, Revisiting Small Batch Training for Deep Neural Networks (h/t to Nicolas Gervais), which runs contrary to the larger the better advice; quoting from the abstract:

\n
\n

The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.

\n
\n", + "system": "" + }, + { + "instruction": "Sklearn StratifiedKFold: ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead", + "input": "", + "output": "

keras.utils.to_categorical produces a one-hot encoded class vector, i.e. the multilabel-indicator mentioned in the error message. StratifiedKFold is not designed to work with such input; from the split method docs:

\n\n
\n

split(X, y, groups=None)

\n \n

[...]

\n \n

y : array-like, shape (n_samples,)

\n \n

The target variable for supervised learning problems. Stratification is done based on the y labels.

\n
\n\n

i.e. your y must be a 1-D array of your class labels.

\n\n

Essentially, what you have to do is simply invert the order of the operations: split first (using your initial y_train), and convert to_categorical afterwards.

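A minimal numpy sketch of that order of operations (np.eye stands in for to_categorical, and the split indices are hard-coded where skf.split(X, y) would normally produce them):

```python
import numpy as np

y = np.array([0, 2, 1, 2, 0, 1])   # 1-D integer labels: what StratifiedKFold wants
y_onehot = np.eye(3)[y]            # 'multilabel-indicator' shape: what it rejects

train_idx = np.array([0, 1, 2, 3]) # pretend this came from skf.split(X, y)

# split FIRST on the 1-D labels, one-hot encode AFTERWARDS
y_train = np.eye(3)[y[train_idx]]
print(y_train.shape)  # (4, 3)
```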
\n", + "system": "" + }, + { + "instruction": "Keras conv1d layer parameters: filters and kernel_size", + "input": "", + "output": "

You're right to say that kernel_size defines the size of the sliding window.

\n\n

The filters parameter is just how many different windows you will have. (All of them with the same length, which is kernel_size.) It is how many different results or channels you want to produce.

\n\n

When you use filters=100 and kernel_size=4, you are creating 100 different filters, each of them with length 4. The result will be 100 different convolutions.

\n\n

Also, each filter has enough parameters to consider all input channels.

\n\n
\n\n

The Conv1D layer expects these dimensions:

\n\n
(batchSize, length, channels)\n
\n\n

I suppose the best way to use it is to have the number of words in the length dimension (as if the words in order formed a sentence), and the channels be the output dimension of the embedding (numbers that define one word).

\n\n

So:

\n\n
batchSize = number of sentences    \nlength = number of words in each sentence   \nchannels = dimension of the embedding's output.    \n
\n\n

The convolutional layer will pass 100 different filters, each filter will slide along the length dimension (word by word, in groups of 4), considering all the channels that define the word.

\n\n

The outputs are shaped as:

\n\n
(number of sentences, 50 words, 100 output dimension or filters)   \n
\n\n

The filters are shaped as:

\n\n
(4 = length, 300 = word vector dimension, 100 output dimension of the convolution)  \n
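These shapes can be checked with a naive numpy implementation of the sliding window (a sketch using 'valid' convolution, so the length shrinks to length - kernel_size + 1; 'same' padding would keep it at 50):

```python
import numpy as np

batch, length, channels = 2, 50, 300      # sentences, words, embedding dim
filters, kernel_size = 100, 4

x = np.random.rand(batch, length, channels)
w = np.random.rand(kernel_size, channels, filters)  # one 4x300 window per filter

# naive 'valid' 1-D convolution: each filter spans kernel_size words x all channels
out = np.zeros((batch, length - kernel_size + 1, filters))
for t in range(length - kernel_size + 1):
    window = x[:, t:t + kernel_size, :].reshape(batch, -1)  # (batch, 4*300)
    out[:, t, :] = window @ w.reshape(-1, filters)

print(out.shape)  # (2, 47, 100)
```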
\n", + "system": "" + }, + { + "instruction": "How to calculate F1 Macro in Keras?", + "input": "", + "output": "

Since Keras 2.0, the metrics f1, precision, and recall have been removed. The solution is to use a custom metric function:

\n\n
from keras import backend as K\n\ndef f1(y_true, y_pred):\n    def recall(y_true, y_pred):\n        """Recall metric.\n\n        Only computes a batch-wise average of recall.\n\n        Computes the recall, a metric for multi-label classification of\n        how many relevant items are selected.\n        """\n        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))\n        recall = true_positives / (possible_positives + K.epsilon())\n        return recall\n\n    def precision(y_true, y_pred):\n        """Precision metric.\n\n        Only computes a batch-wise average of precision.\n\n        Computes the precision, a metric for multi-label classification of\n        how many selected items are relevant.\n        """\n        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))\n        precision = true_positives / (predicted_positives + K.epsilon())\n        return precision\n    precision = precision(y_true, y_pred)\n    recall = recall(y_true, y_pred)\n    return 2*((precision*recall)/(precision+recall+K.epsilon()))\n\n\nmodel.compile(loss='binary_crossentropy',\n          optimizer= "adam",\n          metrics=[f1])\n
\n

The return line of this function

\n
return 2*((precision*recall)/(precision+recall+K.epsilon()))\n
\n

was modified by adding the constant epsilon, in order to avoid division by 0. Thus NaN will not be computed.

\n", + "system": "" + }, + { + "instruction": "how to implement custom metric in keras?", + "input": "", + "output": "

Here I'm answering the OP's topic question rather than their exact problem. I'm doing this because the question shows up at the top when I google the topic.

\n\n

You can implement a custom metric in two ways.

\n\n
    \n
  1. As mentioned in Keras docu.\n

    \n\n
    import keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n    return K.mean(y_pred)\n\nmodel.compile(optimizer='sgd',\n          loss='binary_crossentropy',\n          metrics=['accuracy', mean_pred])\n
    \n\n

    But here you have to remember as mentioned in Marcin Mo\u017cejko's answer that y_true and y_pred are tensors. So in order to correctly calculate the metric you need to use keras.backend functionality. Please look at this SO question for details How to calculate F1 Macro in Keras?

  2. \n
  3. Or you can implement it in a hacky way as mentioned in Keras GH issue. For that you need to use callbacks argument of model.fit.\n

    \n\n
    import keras as keras\nimport numpy as np\nfrom keras.optimizers import SGD\nfrom sklearn.metrics import roc_auc_score\n\nmodel = keras.models.Sequential()\n# ...\nsgd = SGD(lr=0.001, momentum=0.9)\nmodel.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])\n\n\nclass Metrics(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self._data = []\n\n    def on_epoch_end(self, batch, logs={}):\n        X_val, y_val = self.validation_data[0], self.validation_data[1]\n        y_predict = np.asarray(model.predict(X_val))\n\n        y_val = np.argmax(y_val, axis=1)\n        y_predict = np.argmax(y_predict, axis=1)\n\n        self._data.append({\n            'val_rocauc': roc_auc_score(y_val, y_predict),\n        })\n        return\n\n    def get_data(self):\n        return self._data\n\nmetrics = Metrics()\nhistory = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[metrics])\nmetrics.get_data()\n
  4. \n
\n", + "system": "" + }, + { + "instruction": "How to insert Keras model into scikit-learn pipeline?", + "input": "", + "output": "

You need to wrap your Keras model as a Scikit learn model first and then proceed as usual.

\n

Here's a quick example (I've omitted the imports for brevity)

\n
\n

Here is a full blog post with this one and many other examples: Scikit-learn Pipeline Examples

\n
\n
# create a function that returns a model, taking as parameters things you\n# want to verify using cross-valdiation and model selection\ndef create_model(optimizer='adagrad',\n                  kernel_initializer='glorot_uniform', \n                  dropout=0.2):\n    model = Sequential()\n    model.add(Dense(64,activation='relu',kernel_initializer=kernel_initializer))\n    model.add(Dropout(dropout))\n    model.add(Dense(1,activation='sigmoid',kernel_initializer=kernel_initializer))\n\n    model.compile(loss='binary_crossentropy',optimizer=optimizer, metrics=['accuracy'])\n\n    return model\n\n# wrap the model using the function you created\nclf = KerasRegressor(build_fn=create_model,verbose=0)\n\n# just create the pipeline\npipeline = Pipeline([\n    ('clf',clf)\n])\n\npipeline.fit(X_train, y_train)\n
\n", + "system": "" + }, + { + "instruction": "Can Keras deal with input images with different size?", + "input": "", + "output": "

Yes.\nJust change your input shape to shape=(n_channels, None, None), where n_channels is the number of channels in your input image.

\n\n

I'm using Theano backend though, so if you are using tensorflow you might have to change it to (None,None,n_channels)

\n\n
\n

You should use:

\n \n

input_shape=(1, None, None)

\n \n

None in a shape denotes a variable dimension. Note that not all layers\n will work with such variable dimensions, since some layers require\n shape information (such as Flatten).\n https://github.com/fchollet/keras/issues/1920

\n
\n\n

For example, using keras's functional API your input layer would be:

\n\n

For a RGB dataset

\n\n
inp = Input(shape=(3,None,None))\n
\n\n

For a Gray dataset

\n\n
inp = Input(shape=(1,None,None))\n
\n", + "system": "" + }, + { + "instruction": "How to tell which Keras model is better?", + "input": "", + "output": "
\n

Do I use the \"acc\" (from the training data?) one or the \"val acc\" (from the validation data?) one?

\n
\n\n

If you want to estimate the ability of your model to generalize to new data (which is probably what you want to do), then you look at the validation accuracy, because the validation split contains only data that the model never sees during the training and therefor cannot just memorize.

\n\n

If your training data accuracy (\"acc\") keeps improving while your validation data accuracy (\"val_acc\") gets worse, you are likely in an overfitting situation, i.e. your model starts to basically just memorize the data.

\n\n
\n

There are different accs and val accs for each epoch. How do I know the acc or val acc for my model as a whole? Do I average all of the epochs accs or val accs to find the acc or val acc of the model as a whole?

\n
\n\n

Each epoch is a training run over all of your data. During that run the parameters of your model are adjusted according to your loss function. The result is a set of parameters which have a certain ability to generalize to new data. That ability is reflected by the validation accuracy. So think of every epoch as its own model, which can get better or worse if it is trained for another epoch. Whether it got better or worse is judged by the change in validation accuracy (better = validation accuracy increased). Therefore pick the model of the epoch with the highest validation accuracy. Don't average the accuracies over different epochs, that wouldn't make much sense. You can use the Keras callback ModelCheckpoint to automatically save the model with the highest validation accuracy (see callbacks documentation).

\n\n

The highest accuracy in model 1 is 0.7737 and the highest one in model 2 is 0.7572. Therefore you should view model 1 (at epoch 3) as better. Though it is possible that the 0.7737 was just a random outlier.

\n", + "system": "" + }, + { + "instruction": "Shuffling training data with LSTM RNN", + "input": "", + "output": "

In general, when you shuffle the training data (a set of sequences), you shuffle the order in which sequences are fed to the RNN; you don't shuffle the ordering within individual sequences. This is fine to do when your network is stateless:

\n\n

Stateless Case:

\n\n

The network's memory only persists for the duration of a sequence. Training on sequence B before sequence A doesn't matter because the network's memory state does not persist across sequences.

\n\n

On the other hand:

\n\n

Stateful Case:

\n\n

The network's memory persists across sequences. Here, you cannot blindly shuffle your data and expect optimal results. Sequence A should be fed to the network before sequence B because A comes before B, and we want the network to evaluate sequence B with memory of what was in sequence A.

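For the stateless case, a sketch of what shuffling sequences (but never the timesteps inside one) means in numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(150).reshape(5, 10, 3)   # (num_sequences, timesteps, features)

perm = rng.permutation(len(X))         # shuffle along axis 0 only
X_shuffled = X[perm]

# each sequence is intact, just fed to the network in a different order
print(all(np.array_equal(X_shuffled[i], X[perm[i]]) for i in range(5)))  # True
```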
\n", + "system": "" + }, + { + "instruction": "How to calculate prediction uncertainty using Keras?", + "input": "", + "output": "

If you want to implement dropout approach to measure uncertainty you should do the following:

\n\n
    \n
  1. Implement a function which applies dropout also during test time:

    \n\n
    import keras.backend as K\nf = K.function([model.layers[0].input, K.learning_phase()],\n               [model.layers[-1].output])\n
  2. \n
  3. Use this function as uncertainty predictor e.g. in a following manner:

    \n\n
    def predict_with_uncertainty(f, x, n_iter=10):\n    result = numpy.zeros((n_iter,) + x.shape)\n\n    for iter in range(n_iter):\n        result[iter] = f(x, 1)\n\n    prediction = result.mean(axis=0)\n    uncertainty = result.var(axis=0)\n    return prediction, uncertainty\n
  4. \n
\n\n

Of course you may use any different function to compute uncertainty.

\n", + "system": "" + }, + { + "instruction": "Make a deep copy of a keras model in python", + "input": "", + "output": "

The issue is that model_copy is probably not compiled after cloning. There are in fact a few issues:

\n\n
    \n
  1. Apparently cloning doesn't copy over the loss function, optimizer info, etc.

  2. \n
  3. Before compiling you need to also build the model.

  4. \n
  5. Moreover, cloning doesn't copy the weights over

  6. \n
\n\n

So you need a couple extra lines after cloning. For example, for 10 input variables:

\n\n
model_copy= keras.models.clone_model(model1)\nmodel_copy.build((None, 10)) # replace 10 with number of variables in input layer\nmodel_copy.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nmodel_copy.set_weights(model.get_weights())\n\n
\n\n
\n\n

Easier Method 1: Loading weights from file

\n\n

If I understand your question correctly, there is an easier way to do this. You don't need to clone the model; you just need to save the old_weights and set the weights at the beginning of the loop. You can simply load weights from file as you are doing.

\n\n
for _ in range(10):\n    model1= create_Model()\n    model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')\n    model1.load_weights('my_weights')\n\n    for j in range(0, image_size):\n          model1.fit(sample[j], sample_lbl[j])\n          prediction= model1.predict(sample[j])\n
\n\n
\n\n

Easier Method 2: Loading weights from previous get_weights()

\n\n

Or if you prefer not to load from file:

\n\n
model1= create_Model()\nmodel1.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nmodel1.load_weights('my_weights')\nold_weights = model1.get_weights()\n\nfor _ in range(10):\n    model1.set_weights(old_weights)\n    for j in range(0, image_size):\n          model1.fit(sample[j], sample_lbl[j])\n          prediction= model1.predict(sample[j])\n
\n", + "system": "" + }, + { + "instruction": "Get Confusion Matrix From a Keras Multiclass Model", + "input": "", + "output": "

Your input to confusion_matrix must be an array of ints, not one-hot encodings.

\n\n
matrix = metrics.confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))\n
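For instance (made-up arrays), argmax(axis=1) collapses both the one-hot targets and the softmax probabilities to integer labels:

```python
import numpy as np

y_test = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])                    # one-hot truth
y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6], [0.1, 0.7, 0.2]])  # softmax output

print(y_test.argmax(axis=1))  # [0 2 1]
print(y_pred.argmax(axis=1))  # [0 2 1]
```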
\n", + "system": "" + }, + { + "instruction": "Restore original text from Keras\u2019s imdb dataset", + "input": "", + "output": "

Your example is coming out as gibberish, it's much worse than just some missing stop words.

\n\n

If you re-read the docs for the start_char, oov_char, and index_from parameters of the keras.datasets.imdb.load_data method (https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification), they explain what is happening:

\n\n

start_char: int. The start of a sequence will be marked with this character. Set to 1 because 0 is usually the padding character.

\n\n

oov_char: int. words that were cut out because of the num_words or skip_top limit will be replaced with this character.

\n\n

index_from: int. Index actual words with this index and higher.

\n\n

That dictionary you inverted assumes the word indices start from 1.

\n\n

But the indices returned by keras have <START> and <UNKNOWN> as indexes 1 and 2. (And it assumes you will use 0 for <PADDING>.)

\n\n

This works for me:

\n\n
import keras\nNUM_WORDS=1000 # only use top 1000 words\nINDEX_FROM=3   # word index offset\n\ntrain,test = keras.datasets.imdb.load_data(num_words=NUM_WORDS, index_from=INDEX_FROM)\ntrain_x,train_y = train\ntest_x,test_y = test\n\nword_to_id = keras.datasets.imdb.get_word_index()\nword_to_id = {k:(v+INDEX_FROM) for k,v in word_to_id.items()}\nword_to_id[\"<PAD>\"] = 0\nword_to_id[\"<START>\"] = 1\nword_to_id[\"<UNK>\"] = 2\nword_to_id[\"<UNUSED>\"] = 3\n\nid_to_word = {value:key for key,value in word_to_id.items()}\nprint(' '.join(id_to_word[id] for id in train_x[0] ))\n
\n\n

The punctuation is missing, but that's all:

\n\n
\"<START> this film was just brilliant casting <UNK> <UNK> story\n direction <UNK> really <UNK> the part they played and you could just\n imagine being there robert <UNK> is an amazing actor ...\"\n
\n", + "system": "" + }, + { + "instruction": "Get learning rate of keras model", + "input": "", + "output": "

Use eval() from keras.backend:

\n\n
import keras.backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nmodel = Sequential()\nmodel.add(Dense(1, input_shape=(1,)))\nmodel.add(Dense(1))\nmodel.compile(loss='mse', optimizer='adam')\n\nprint(K.eval(model.optimizer.lr))\n
\n\n

Output:

\n\n
0.001\n
\n", + "system": "" + }, + { + "instruction": "Keras: model.predict for a single image", + "input": "", + "output": "

Since you trained your model on mini-batches, your input is a tensor of shape [batch_size, image_width, image_height, number_of_channels].

\n\n

When predicting, you have to respect this shape even if you have only one image. Your input should be of shape: [1, image_width, image_height, number_of_channels].

\n\n

You can do this in numpy easily. Let's say you have a single 5x5x3 image:

\n\n
    >>> x = np.random.randint(0,10,(5,5,3))\n    >>> x.shape\n    >>> (5, 5, 3)\n    >>> x = np.expand_dims(x, axis=0)\n    >>> x.shape\n    >>> (1, 5, 5, 3)\n
\n\n

Now x is a rank 4 tensor!

\n", + "system": "" + }, + { + "instruction": "Keras: Binary_crossentropy has negative values", + "input": "", + "output": "

I should have printed out my response variable. The categories were labelled as 1 and 2 instead of 0 and 1, which confused the classifier.

\n", + "system": "" + }, + { + "instruction": "Keras: weighted binary crossentropy", + "input": "", + "output": "

You can use the sklearn module to automatically calculate the weights for each class like this:

\n\n
# Import\nimport numpy as np\nfrom sklearn.utils import class_weight\n\n# Example model\nmodel = Sequential()\nmodel.add(Dense(32, activation='relu', input_dim=100))\nmodel.add(Dense(1, activation='sigmoid'))\n\n# Use binary crossentropy loss\nmodel.compile(optimizer='rmsprop',\n              loss='binary_crossentropy',\n              metrics=['accuracy'])\n\n# Calculate the weights for each class so that we can balance the data\nweights = class_weight.compute_class_weight('balanced',\n                                            np.unique(y_train),\n                                            y_train)\n\n# Add the class weights to the training                                         \nmodel.fit(x_train, y_train, epochs=10, batch_size=32, class_weight=weights)\n
\n\n

Note that the output of the class_weight.compute_class_weight() is a numpy array like this: [2.57569845 0.68250928].

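What 'balanced' computes can be reproduced in plain numpy as n_samples / (n_classes * bincount(y)), shown here on made-up labels:

```python
import numpy as np

y_train = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])  # 3 vs 7: imbalanced

# the 'balanced' heuristic: n_samples / (n_classes * per-class counts)
counts = np.bincount(y_train)
weights = len(y_train) / (len(counts) * counts)
print(weights.round(3))  # [1.667 0.714]
```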
\n", + "system": "" + }, + { + "instruction": "How to save model.summary() to file in Keras?", + "input": "", + "output": "

If you want the formatting of summary you can pass a print function to model.summary() and output to file that way:

\n
def myprint(s):\n    with open('modelsummary.txt','a') as f:\n        print(s, file=f)\n\nmodel.summary(print_fn=myprint)\n
\n

Alternatively, you can serialize it to a json or yaml string with model.to_json() or model.to_yaml() which can be imported back later.

\n

Edit

\n

A more pythonic way to do this in Python 3.4+ is to use contextlib.redirect_stdout

\n
from contextlib import redirect_stdout\n\nwith open('modelsummary.txt', 'w') as f:\n    with redirect_stdout(f):\n        model.summary()\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4", + "input": "", + "output": "

I solved the problem by making

\n\n
\n

input size: (95000,360,1) and\n output size: (95000,22)

\n
\n\n

and changed the input shape to (360,1) in the code where model is defined:

\n\n
model = Sequential()\nmodel.add(LSTM(22, input_shape=(360,1)))\nmodel.add(Dense(22, activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nprint(model.summary())\nmodel.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)\n
\n", + "system": "" + }, + { + "instruction": "balancing an imbalanced dataset with keras image generator", + "input": "", + "output": "

This would not be a standard approach to deal with unbalanced data. Nor do I think it would be really justified - you would be significantly changing the distributions of your classes, where the smaller class is now much less variable. The larger class would have rich variation, the smaller would be many similar images with small affine transforms. They would live on a much smaller region in image space than the majority class.

\n\n

The more standard approaches would be:

\n\n\n\n

The first two options are really kind of hacks, which may harm your ability to cope with real world (imbalanced) data. Neither really solves the problem of low variability, which is inherent in having too little data. If application to a real world dataset after model training isn't a concern and you just want good results on the data you have, then these options are fine (and much easier than making generators for a single class).

\n\n

The third option is the right way to go if you have enough data (as an example, the recent paper from Google about detecting diabetic retinopathy achieved high accuracy in a dataset where positive cases were between 10% and 30%).

\n\n

If you truly want to generate a variety of augmented images for one class over another, it would probably be easiest to do it in pre-processing. Take the images of the minority class and generate some augmented versions, and just call it all part of your data. Like I say, this is all pretty hacky.

\n", + "system": "" + }, + { + "instruction": "Neural Network LSTM input shape from dataframe", + "input": "", + "output": "

Below is an example that sets up time series data to train an LSTM. The model output is nonsense as I only set it up to demonstrate how to build the model.

\n\n
import pandas as pd\nimport numpy as np\n# Get some time series data\ndf = pd.read_csv(\"https://raw.githubusercontent.com/plotly/datasets/master/timeseries.csv\")\ndf.head()\n
\n\n

Time series dataframe:

\n\n
Date      A       B       C      D      E      F      G\n0   2008-03-18  24.68  164.93  114.73  26.27  19.21  28.87  63.44\n1   2008-03-19  24.18  164.89  114.75  26.22  19.07  27.76  59.98\n2   2008-03-20  23.99  164.63  115.04  25.78  19.01  27.04  59.61\n3   2008-03-25  24.14  163.92  114.85  27.41  19.61  27.84  59.41\n4   2008-03-26  24.44  163.45  114.84  26.86  19.53  28.02  60.09\n
\n\n

You can put your inputs into a single vector and then use the pandas .cumsum() function to build the sequence for the time series:

\n\n
# Put your inputs into a single list\ndf['single_input_vector'] = df[input_cols].apply(tuple, axis=1).apply(list)\n# Double-encapsulate list so that you can sum it in the next step and keep time steps as separate elements\ndf['single_input_vector'] = df.single_input_vector.apply(lambda x: [list(x)])\n# Use .cumsum() to include previous row vectors in the current row list of vectors\ndf['cumulative_input_vectors'] = df.single_input_vector.cumsum()\n
\n\n

The output can be set up in a similar way, but it will be a single vector instead of a sequence:

\n\n
# If your output is multi-dimensional, you need to capture those dimensions in one object\n# If your output is a single dimension, this step may be unnecessary\ndf['output_vector'] = df[output_cols].apply(tuple, axis=1).apply(list)\n
\n\n

The input sequences have to be the same length to run them through the model, so you need to pad them to be the max length of your cumulative vectors:

\n\n
# Pad your sequences so they are the same length\nfrom keras.preprocessing.sequence import pad_sequences\n\nmax_sequence_length = df.cumulative_input_vectors.apply(len).max()\n# Save it as a list   \npadded_sequences = pad_sequences(df.cumulative_input_vectors.tolist(), max_sequence_length).tolist()\ndf['padded_input_vectors'] = pd.Series(padded_sequences).apply(np.asarray)\n
\n\n

Training data can be pulled from the dataframe and put into numpy arrays. Note that the input data that comes out of the dataframe will not make a 3D array. It makes an array of arrays, which is not the same thing.

\n\n

You can use hstack and reshape to build a 3D input array.

\n\n
# Extract your training data\nX_train_init = np.asarray(df.padded_input_vectors)\n# Use hstack and reshape to make the inputs a 3d vector\nX_train = np.hstack(X_train_init).reshape(len(df),max_sequence_length,len(input_cols))\ny_train = np.hstack(np.asarray(df.output_vector)).reshape(len(df),len(output_cols))\n
\n\n

To prove it:

\n\n
>>> print(X_train_init.shape)\n(11,)\n>>> print(X_train.shape)\n(11, 11, 6)\n>>> print(X_train == X_train_init)\nFalse\n
\n\n

Once you have training data you can define the dimensions of your input layer and output layers.

\n\n
# Get your input dimensions\n# Input length is the length for one input sequence (i.e. the number of rows for your sample)\n# Input dim is the number of dimensions in one input vector (i.e. number of input columns)\ninput_length = X_train.shape[1]\ninput_dim = X_train.shape[2]\n# Output dimensions is the shape of a single output vector\n# In this case it's just 1, but it could be more\noutput_dim = len(y_train[0])\n
\n\n

Build the model:

\n\n
from keras.models import Model, Sequential\nfrom keras.layers import LSTM, Dense\n\n# Build the model\nmodel = Sequential()\n\n# I arbitrarily picked the output dimensions as 4\nmodel.add(LSTM(4, input_dim = input_dim, input_length = input_length))\n# The max output value is > 1 so relu is used as final activation.\nmodel.add(Dense(output_dim, activation='relu'))\n\nmodel.compile(loss='mean_squared_error',\n              optimizer='sgd',\n              metrics=['accuracy'])\n
\n\n

Finally you can train the model and save the training log as history:

\n\n
# Set batch_size to 7 to show that it doesn't have to be a factor or multiple of your sample size\nhistory = model.fit(X_train, y_train,\n              batch_size=7, nb_epoch=3,\n              verbose = 1)\n
\n\n

Output:

\n\n
Epoch 1/3\n11/11 [==============================] - 0s - loss: 3498.5756 - acc: 0.0000e+00     \nEpoch 2/3\n11/11 [==============================] - 0s - loss: 3498.5755 - acc: 0.0000e+00     \nEpoch 3/3\n11/11 [==============================] - 0s - loss: 3498.5757 - acc: 0.0000e+00 \n
\n\n

That's it. Use model.predict(X) where X is the same format (other than the number of samples) as X_train in order to make predictions from the model.

\n", + "system": "" + }, + { + "instruction": "keras BatchNormalization axis clarification", + "input": "", + "output": "

The confusion is due to the meaning of axis in np.mean versus in BatchNormalization.

\n\n

When we take the mean along an axis, we collapse that dimension and preserve all other dimensions. In your example data.mean(axis=0) collapses the 0-axis, which is the vertical dimension of data.

\n\n

When we compute a BatchNormalization along an axis, we preserve the dimensions of the array, and we normalize with respect to the mean and standard deviation over every other axis. So in your 2D example BatchNormalization with axis=1 is subtracting the mean for axis=0, just as you expect. This is why bn.moving_mean has shape (4,).
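The shape bookkeeping can be checked with plain numpy (illustrative data; this mimics the semantics, not Keras internals): normalizing "along" axis=1 of an (N, 4) array keeps per-feature statistics of shape (4,), computed by collapsing every other axis.

```python
import numpy as np

data = np.arange(12, dtype=float).reshape(3, 4)  # shape (N, 4)

# Statistics kept per unit of axis=1, computed over every other axis (here axis 0)
mean = data.mean(axis=0)  # shape (4,), analogous to bn.moving_mean
std = data.std(axis=0)

# Broadcasting preserves the original shape of data
normalized = (data - mean) / std
print(mean.shape, normalized.shape)  # (4,) (3, 4)
```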

\n", + "system": "" + }, + { + "instruction": "How do I mask a loss function in Keras with the TensorFlow backend?", + "input": "", + "output": "

If there's a mask in your model, it'll be propagated layer-by-layer and eventually applied to the loss. So if you're padding and masking the sequences in a correct way, the loss on the padding placeholders would be ignored.

\n

Some Details:

\n

It's a bit involved to explain the whole process, so I'll just break it down to several steps:

\n
    \n
  1. In compile(), the mask is collected by calling compute_mask() and applied to the loss(es) (irrelevant lines are ignored for clarity).
  2. \n
\n
weighted_losses = [_weighted_masked_objective(fn) for fn in loss_functions]\n\n# Prepare output masks.\nmasks = self.compute_mask(self.inputs, mask=None)\nif masks is None:\n    masks = [None for _ in self.outputs]\nif not isinstance(masks, list):\n    masks = [masks]\n\n# Compute total loss.\ntotal_loss = None\nwith K.name_scope('loss'):\n    for i in range(len(self.outputs)):\n        y_true = self.targets[i]\n        y_pred = self.outputs[i]\n        weighted_loss = weighted_losses[i]\n        sample_weight = sample_weights[i]\n        mask = masks[i]\n        with K.name_scope(self.output_names[i] + '_loss'):\n            output_loss = weighted_loss(y_true, y_pred,\n                                        sample_weight, mask)\n
\n
    \n
  1. Inside Model.compute_mask(), run_internal_graph() is called.
  2. \n
  3. Inside run_internal_graph(), the masks in the model is propagated layer-by-layer from the model's inputs to outputs by calling Layer.compute_mask() for each layer iteratively.
  4. \n
\n

So if you're using a Masking layer in your model, you shouldn't worry about the loss on the padding placeholders. The loss on those entries will be masked out as you've probably already seen inside _weighted_masked_objective().

\n

A Small Example:

\n
max_sentence_length = 5\ncharacter_number = 2\n\ninput_tensor = Input(shape=(max_sentence_length, character_number))\nmasked_input = Masking(mask_value=0)(input_tensor)\noutput = LSTM(3, return_sequences=True)(masked_input)\nmodel = Model(input_tensor, output)\nmodel.compile(loss='mae', optimizer='adam')\n\nX = np.array([[[0, 0], [0, 0], [1, 0], [0, 1], [0, 1]],\n              [[0, 0], [0, 1], [1, 0], [0, 1], [0, 1]]])\ny_true = np.ones((2, max_sentence_length, 3))\ny_pred = model.predict(X)\nprint(y_pred)\n[[[ 0.          0.          0.        ]\n  [ 0.          0.          0.        ]\n  [-0.11980877  0.05803877  0.07880752]\n  [-0.00429189  0.13382857  0.19167568]\n  [ 0.06817091  0.19093043  0.26219055]]\n\n [[ 0.          0.          0.        ]\n  [ 0.0651961   0.10283815  0.12413475]\n  [-0.04420842  0.137494    0.13727818]\n  [ 0.04479844  0.17440712  0.24715884]\n  [ 0.11117355  0.21645413  0.30220413]]]\n\n# See if the loss computed by model.evaluate() is equal to the masked loss\nunmasked_loss = np.abs(1 - y_pred).mean()\nmasked_loss = np.abs(1 - y_pred[y_pred != 0]).mean()\n\nprint(model.evaluate(X, y_true))\n0.881977558136\n\nprint(masked_loss)\n0.881978\n\nprint(unmasked_loss)\n0.917384\n
\n

As can be seen from this example, the loss on the masked part (the zeroes in y_pred) is ignored, and the output of model.evaluate() is equal to masked_loss.

\n
\n

EDIT:

\n

If there's a recurrent layer with return_sequences=False, the mask stops propagating (i.e., the returned mask is None). In RNN.compute_mask():

\n
def compute_mask(self, inputs, mask):\n    if isinstance(mask, list):\n        mask = mask[0]\n    output_mask = mask if self.return_sequences else None\n    if self.return_state:\n        state_mask = [None for _ in self.states]\n        return [output_mask] + state_mask\n    else:\n        return output_mask\n
\n

In your case, if I understand correctly, you want a mask that's based on y_true, and whenever the value of y_true is [0, 0, 1] (the one-hot encoding of "#") you want the loss to be masked. If so, you need to mask the loss values in a somewhat similar way to Daniel's answer.

\n

The main difference is the final average. The average should be taken over the number of unmasked values, which is just K.sum(mask). And also, y_true can be compared to the one-hot encoded vector [0, 0, 1] directly.

\n
def get_loss(mask_value):\n    mask_value = K.variable(mask_value)\n    def masked_categorical_crossentropy(y_true, y_pred):\n        # find out which timesteps in `y_true` are not the padding character '#'\n        mask = K.all(K.equal(y_true, mask_value), axis=-1)\n        mask = 1 - K.cast(mask, K.floatx())\n\n        # multiply categorical_crossentropy with the mask\n        loss = K.categorical_crossentropy(y_true, y_pred) * mask\n\n        # take average w.r.t. the number of unmasked entries\n        return K.sum(loss) / K.sum(mask)\n    return masked_categorical_crossentropy\n\nmasked_categorical_crossentropy = get_loss(np.array([0, 0, 1]))\nmodel = Model(input_tensor, output)\nmodel.compile(loss=masked_categorical_crossentropy, optimizer='adam')\n
\n

The output of the above code then shows that the loss is computed only on the unmasked values:

\n
model.evaluate: 1.08339476585\ntf unmasked_loss: 1.08989\ntf masked_loss: 1.08339\n
\n

The value is different from yours because I've changed the axis argument in tf.reverse from [0,1] to [1].

\n", + "system": "" + }, + { + "instruction": "Keras rename model and layers", + "input": "", + "output": "

To change the names of model.layers with tf.keras, you can use the following lines:

\n\n
for layer in model.layers:\n    layer._name = layer.name + str(\"_2\")\n
\n\n

I needed this in a two-input model case and ran into the \"AttributeError: can't set attribute\", too. The thing is that there is an underlying hidden attribute _name, which causes the conflict.

\n", + "system": "" + }, + { + "instruction": "How to work with multiple inputs for LSTM in Keras?", + "input": "", + "output": "

Change

\n\n
a = dataset[i:(i + look_back), 0]\n
\n\n

To

\n\n
a = dataset[i:(i + look_back), :]\n
\n\n

If you want the 3 features in your training data.

\n\n

Then use

\n\n
model.add(LSTM(4, input_shape=(look_back,3)))\n
\n\n

To specify that you have look_back time steps in your sequence, each with 3 features.

\n\n

It should run

\n\n

EDIT :

\n\n

Indeed, sklearn.preprocessing.MinMaxScaler()'s inverse_transform() function takes an input which has the same shape as the object you fitted. So you need to do something like this:

\n\n
# Get something which has as many features as dataset\ntrainPredict_extended = np.zeros((len(trainPredict),3))\n# Put the predictions there\ntrainPredict_extended[:,2] = trainPredict\n# Inverse transform it and select the 3rd column.\ntrainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]\n
\n\n

I guess you will have other issues like this further down in your code, but nothing that you can't fix :) The ML part is fixed and you know where the error comes from. Just check the shapes of your objects and try to make them match.

\n", + "system": "" + }, + { + "instruction": "Understanding Keras LSTMs: Role of Batch-size and Statefulness", + "input": "", + "output": "

Let me explain it via an example:

\n\n

So let's say you have the following series: 1,2,3,4,5,6,...,100. You have to decide how many timesteps your lstm will learn, and reshape your data accordingly. Like below:

\n\n

If you decide time_steps = 5, you have to reshape your time series as a matrix of samples in this way:

\n\n
\n

1,2,3,4,5 -> sample1

\n \n

2,3,4,5,6 -> sample2

\n \n

3,4,5,6,7 -> sample3

\n \n

etc...

\n
\n\n

By doing so, you will end up with a matrix of shape (96 samples x 5 timesteps)

\n\n

This matrix should be reshaped as (96 x 5 x 1), indicating to Keras that you have just 1 time series. If you have more time series in parallel (as in your case), you do the same operation on each time series, so you will end up with n matrices (one for each time series), each of shape (96 samples x 5 timesteps).

\n\n
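The windowing and reshaping described above can be sketched with numpy (illustrative names):

```python
import numpy as np

series = np.arange(1, 101)  # 1, 2, ..., 100
time_steps = 5

# Build overlapping windows: sample i is series[i : i + time_steps]
n_samples = len(series) - time_steps + 1  # 96
samples = np.array([series[i:i + time_steps] for i in range(n_samples)])
print(samples.shape)  # (96, 5)

# Add the trailing dimension that tells Keras there is just 1 time series
X = samples.reshape(n_samples, time_steps, 1)
print(X.shape)  # (96, 5, 1)
```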

For the sake of argument, let's say you have 3 time series. You should concatenate all three matrices into one single tensor of shape (96 samples x 5 timeSteps x 3 timeSeries). The first layer of your lstm for this example would be:

\n\n
    model = Sequential()\n    model.add(LSTM(32, input_shape=(5, 3)))\n
\n\n

The 32 as first parameter is totally up to you. It means that at each point in time, your 3 time series will become 32 different variables as output space. It is easier to think of each time step as a fully connected layer with 3 inputs and 32 outputs, but with a different computation than FC layers.

\n\n
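That intuition also roughly accounts for the layer's parameter count: a standard LSTM has 4 gates, each with an input kernel, a recurrent kernel, and a bias, giving 4 * ((input_dim + units) * units + units) weights. A quick check under that assumption:

```python
def lstm_param_count(input_dim, units):
    # 4 gates, each with: kernel (input_dim x units)
    # + recurrent kernel (units x units) + bias (units)
    return 4 * ((input_dim + units) * units + units)

# LSTM(32, input_shape=(5, 3)): 3 input features, 32 units
print(lstm_param_count(3, 32))  # 4608
```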

If you are going to stack multiple lstm layers, use the return_sequences=True parameter, so the layer will output the whole predicted sequence rather than just the last value.

\n\n

Your target should be the next value in the series you want to predict.

\n\n

Putting all together, let say you have the following time series:

\n\n

Time series 1 (master): 1,2,3,4,5,6,..., 100

\n\n

Time series 2 (support): 2,4,6,8,10,12,..., 200

\n\n

Time series 3 (support): 3,6,9,12,15,18,..., 300

\n\n

Create the input and target tensor

\n\n
\n
x     -> y\n
\n \n

1,2,3,4,5 -> 6

\n \n

2,3,4,5,6 -> 7

\n \n

3,4,5,6,7 -> 8

\n \n

reformat the rest of the time series in the same way, but forget about the target since you don't want to predict those series

\n
\n\n
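Under the same setup, building the input and target tensors from the three series might look like this numpy sketch (illustrative names; the target is the next value of the master series):

```python
import numpy as np

s1 = np.arange(1, 101)     # 1, 2, ..., 100   (master, the series to predict)
s2 = np.arange(2, 201, 2)  # 2, 4, ..., 200   (support)
s3 = np.arange(3, 301, 3)  # 3, 6, ..., 300   (support)

time_steps = 5
all_series = np.stack([s1, s2, s3], axis=-1)  # shape (100, 3)

# Windows of 5 steps; the target is the next master-series value,
# so the last window (which has no following value) is dropped
n = len(s1) - time_steps  # 95
X = np.array([all_series[i:i + time_steps] for i in range(n)])
y = s1[time_steps:]

print(X.shape, y.shape)  # (95, 5, 3) (95,)
```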

Create your model

\n\n
    model = Sequential()\n    model.add(LSTM(32, input_shape=(5, 3), return_sequences=True)) # Input is shape (5 timesteps x 3 timeseries), output is shape (5 timesteps x 32 variables) because return_sequences  = True\n    model.add(LSTM(8))  # output is shape (1 timesteps x 8 variables) because return_sequences = False\n    model.add(Dense(1, activation='linear')) # output is (1 timestep x 1 output unit on dense layer). It is compare to target variable.\n
\n\n

Compile it and train. A good batch size is 32. Batch size is the size of the chunks your sample matrix is split into for faster computation. Just don't use stateful

\n", + "system": "" + }, + { + "instruction": "How does mask_zero in Keras Embedding layer work?", + "input": "", + "output": "

Actually, setting mask_zero=True for the Embedding layer does not result in returning a zero vector. Rather, the behavior of the Embedding layer would not change and it would return the embedding vector with index zero. You can confirm this by checking the Embedding layer weights (i.e. in the example you mentioned it would be m.layers[0].get_weights()). Instead, it would affect the behavior of the following layers such as RNN layers.

\n\n

If you inspect the source code of Embedding layer you would see a method called compute_mask:

\n\n
def compute_mask(self, inputs, mask=None):\n    if not self.mask_zero:\n        return None\n    output_mask = K.not_equal(inputs, 0)\n    return output_mask\n
\n\n

This output mask will be passed, as the mask argument, to the following layers which support masking. This has been implemented in the __call__ method of base layer, Layer:

\n\n
# Handle mask propagation.\nprevious_mask = _collect_previous_mask(inputs)\nuser_kwargs = copy.copy(kwargs)\nif not is_all_none(previous_mask):\n    # The previous layer generated a mask.\n    if has_arg(self.call, 'mask'):\n        if 'mask' not in kwargs:\n            # If mask is explicitly passed to __call__,\n            # we should override the default mask.\n            kwargs['mask'] = previous_mask\n
\n\n

And this makes the following layers ignore (i.e. not consider in their computations) these input steps. Here is a minimal example:

\n\n
data_in = np.array([\n  [1, 0, 2, 0]\n])\n\nx = Input(shape=(4,))\ne = Embedding(5, 5, mask_zero=True)(x)\nrnn = LSTM(3, return_sequences=True)(e)\n\nm = Model(inputs=x, outputs=rnn)\nm.predict(data_in)\n\narray([[[-0.00084503, -0.00413611,  0.00049972],\n        [-0.00084503, -0.00413611,  0.00049972],\n        [-0.00144554, -0.00115775, -0.00293898],\n        [-0.00144554, -0.00115775, -0.00293898]]], dtype=float32)\n
\n\n

As you can see, the outputs of the LSTM layer for the second and fourth timesteps are the same as the outputs of the first and third timesteps, respectively. This means that those timesteps have been masked.

\n\n

Update: The mask will also be considered when computing the loss since the loss functions are internally augmented to support masking using weighted_masked_objective:

\n\n
def weighted_masked_objective(fn):\n    \"\"\"Adds support for masking and sample-weighting to an objective function.\n    It transforms an objective function `fn(y_true, y_pred)`\n    into a sample-weighted, cost-masked objective function\n    `fn(y_true, y_pred, weights, mask)`.\n    # Arguments\n        fn: The objective function to wrap,\n            with signature `fn(y_true, y_pred)`.\n    # Returns\n        A function with signature `fn(y_true, y_pred, weights, mask)`.\n    \"\"\"\n
\n\n

when compiling the model:

\n\n
weighted_losses = [weighted_masked_objective(fn) for fn in loss_functions]\n
\n\n

You can verify this using the following example:

\n\n
data_in = np.array([[1, 2, 0, 0]])\ndata_out = np.arange(12).reshape(1,4,3)\n\nx = Input(shape=(4,))\ne = Embedding(5, 5, mask_zero=True)(x)\nd = Dense(3)(e)\n\nm = Model(inputs=x, outputs=d)\nm.compile(loss='mse', optimizer='adam')\npreds = m.predict(data_in)\nloss = m.evaluate(data_in, data_out, verbose=0)\nprint(preds)\nprint('Computed Loss:', loss)\n\n[[[ 0.009682    0.02505393 -0.00632722]\n  [ 0.01756451  0.05928303  0.0153951 ]\n  [-0.00146054 -0.02064196 -0.04356086]\n  [-0.00146054 -0.02064196 -0.04356086]]]\nComputed Loss: 9.041069030761719\n\n# verify that only the first two outputs \n# have been considered in the computation of loss\nprint(np.square(preds[0,0:2] - data_out[0,0:2]).mean())\n\n9.041070036475277\n
\n", + "system": "" + }, + { + "instruction": "ResNet: 100% accuracy during training, but 33% prediction accuracy with the same data", + "input": "", + "output": "\n\n

It's because of the batch normalization layers.

\n\n

In the training phase, the batch is normalized w.r.t. its mean and variance. However, in the testing phase, the batch is normalized w.r.t. the moving average of the previously observed mean and variance.

\n\n

Now this is a problem when the number of observed batches is small (e.g., 5 in your example) because in the BatchNormalization layer, by default moving_mean is initialized to be 0 and moving_variance is initialized to be 1.

\n\n

Given also that the default momentum is 0.99, you'll need to update the moving averages quite a lot of times before they converge to the \"real\" mean and variance.

\n\n

That's why the prediction is wrong in the early stage, but is correct after 1000 epochs.

\n\n
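The arithmetic behind that slow convergence can be sketched in plain Python. The exponential moving average update is new = momentum * old + (1 - momentum) * batch_statistic; with the default momentum of 0.99 and, for illustration, a true batch mean of 5.0:

```python
def update(moving, batch_value, momentum):
    # The running-statistic update used by batch normalization layers
    return momentum * moving + (1 - momentum) * batch_value

true_mean = 5.0  # assumed "real" batch mean, for illustration

moving = 0.0  # moving_mean is initialized to 0 by default
for _ in range(5):  # only 5 batches observed
    moving = update(moving, true_mean, momentum=0.99)
print(moving)  # ~0.245: still nowhere near 5.0

for _ in range(995):  # after ~1000 updates in total
    moving = update(moving, true_mean, momentum=0.99)
print(moving)  # ~4.9998: now close to the true mean
```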
\n\n

You can verify it by forcing the BatchNormalization layers to operate in \"training mode\".

\n\n

During training, the accuracy is 1 and the loss is close to zero:

\n\n
model.fit(imgs,y,epochs=5,shuffle=True)\nEpoch 1/5\n3/3 [==============================] - 19s 6s/step - loss: 1.4624 - acc: 0.3333\nEpoch 2/5\n3/3 [==============================] - 0s 63ms/step - loss: 0.6051 - acc: 0.6667\nEpoch 3/5\n3/3 [==============================] - 0s 57ms/step - loss: 0.2168 - acc: 1.0000\nEpoch 4/5\n3/3 [==============================] - 0s 56ms/step - loss: 1.1921e-07 - acc: 1.0000\nEpoch 5/5\n3/3 [==============================] - 0s 53ms/step - loss: 1.1921e-07 - acc: 1.0000\n
\n\n

Now if we evaluate the model, we'll observe high loss and low accuracy because after 5 updates, the moving averages are still pretty close to the initial values:

\n\n
model.evaluate(imgs,y)\n3/3 [==============================] - 3s 890ms/step\n[10.745396614074707, 0.3333333432674408]\n
\n\n

However, if we manually specify the \"learning phase\" variable and let the BatchNormalization layers use the \"real\" batch mean and variance, the result becomes the same as what's observed in fit().

\n\n
sample_weights = np.ones(3)\nlearning_phase = 1  # 1 means \"training\"\nins = [imgs, y, sample_weights, learning_phase]\nmodel.test_function(ins)\n[1.192093e-07, 1.0]\n
\n\n
\n\n

It's also possible to verify it by changing the momentum to a smaller value.

\n\n

For example, by adding momentum=0.01 to all the batch norm layers in ResNet50, the prediction after 20 epochs is:

\n\n
model.predict(imgs)\narray([[  1.00000000e+00,   1.34882026e-08,   3.92139575e-22],\n       [  0.00000000e+00,   1.00000000e+00,   0.00000000e+00],\n       [  8.70998792e-06,   5.31159838e-10,   9.99991298e-01]], dtype=float32)\n
\n", + "system": "" + }, + { + "instruction": "TimeDistributed(Dense) vs Dense in Keras - Same number of parameters", + "input": "", + "output": "

TimeDistributedDense applies the same Dense layer to every time step during GRU/LSTM cell unrolling. So the error function will be computed between the predicted label sequence and the actual label sequence. (Which is normally the requirement for sequence-to-sequence labeling problems).

\n

However, with return_sequences=False, the Dense layer is applied only once, at the last cell. This is normally the case when RNNs are used for classification problems. If return_sequences=True, then the Dense layer is applied to every timestep, just like TimeDistributedDense.

\n

So as per your models, both are the same; but if you change your second model to return_sequences=False, then Dense will be applied only at the last cell. Try changing it and the model will throw an error, because then Y would need to be of size [Batch_size, OutputSize]; it is no longer a sequence-to-sequence problem but a full sequence-to-label problem.

\n
from keras.models import Sequential\nfrom keras.layers import Dense, Activation, TimeDistributed\nfrom keras.layers.recurrent import GRU\nimport numpy as np\n\nInputSize = 15\nMaxLen = 64\nHiddenSize = 16\n\nOutputSize = 8\nn_samples = 1000\n\nmodel1 = Sequential()\nmodel1.add(GRU(HiddenSize, return_sequences=True, input_shape=(MaxLen, InputSize)))\nmodel1.add(TimeDistributed(Dense(OutputSize)))\nmodel1.add(Activation('softmax'))\nmodel1.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\n\nmodel2 = Sequential()\nmodel2.add(GRU(HiddenSize, return_sequences=True, input_shape=(MaxLen, InputSize)))\nmodel2.add(Dense(OutputSize))\nmodel2.add(Activation('softmax'))\nmodel2.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nmodel3 = Sequential()\nmodel3.add(GRU(HiddenSize, return_sequences=False, input_shape=(MaxLen, InputSize)))\nmodel3.add(Dense(OutputSize))\nmodel3.add(Activation('softmax'))\nmodel3.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nX = np.random.random([n_samples,MaxLen,InputSize])\nY1 = np.random.random([n_samples,MaxLen,OutputSize])\nY2 = np.random.random([n_samples, OutputSize])\n\nmodel1.fit(X, Y1, batch_size=128, nb_epoch=1)\nmodel2.fit(X, Y1, batch_size=128, nb_epoch=1)\nmodel3.fit(X, Y2, batch_size=128, nb_epoch=1)\n\nprint(model1.summary())\nprint(model2.summary())\nprint(model3.summary())\n
\n

In the above example, the architectures of model1 and model2 are the same (sequence-to-sequence models) and model3 is a full sequence-to-label model.

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'module' object has no attribute 'computation'", + "input": "", + "output": "

Updating dask to 0.15.0 will solve the issue

\n\n

update cmd: conda update dask

\n\n

Entering pip show dask will then show the following message:

\n\n
Name: dask\nVersion: 0.15.0\nSummary: Parallel PyData with Task Scheduling\nHome-page: http://github.com/dask/dask/\nAuthor: Matthew Rocklin\nAuthor-email: mrocklin@gmail.com\nLicense: BSD\nLocation: c:\\anaconda3\\lib\\site-packages\nRequires:\n
\n", + "system": "" + }, + { + "instruction": "Why does Keras LSTM batch size used for prediction have to be the same as fitting batch size?", + "input": "", + "output": "

Unfortunately what you want to do is impossible with Keras ... I've also struggled a long time with this problem, and the only way is to dive down into the rabbit hole and work with Tensorflow directly to do LSTM rolling prediction.

\n\n

First, to be clear on terminology: batch_size usually means the number of sequences that are trained together, and num_steps means how many time steps are trained together. When you say batch_size=1 and \"just predicting the next value\", I think you mean to predict with num_steps=1.

\n\n

Otherwise, it should be possible to train and predict with batch_size=50 meaning you are training on 50 sequences and make 50 predictions every time step, one for each sequence (meaning training/prediction num_steps=1).

\n\n

However, I think what you mean is that you want to use a stateful LSTM to train with num_steps=50 and do prediction with num_steps=1. Theoretically this makes sense and should be possible, and it is possible with Tensorflow, just not Keras.

\n\n

The problem: Keras requires an explicit batch size for stateful RNN. You must specify batch_input_shape (batch_size, num_steps, features).

\n\n

The reason: Keras must allocate a fixed-size hidden state vector in the computation graph with shape (batch_size, num_units) in order to persist the values between training batches. On the other hand, when stateful=False, the hidden state vector can be initialized dynamically with zeroes at the beginning of each batch so it does not need to be a fixed size. More details here: http://philipperemy.github.io/keras-stateful-lstm/

\n\n

Possible work around: Train and predict with num_steps=1. Example: https://github.com/keras-team/keras/blob/master/examples/lstm_stateful.py. This might or might not work at all for your problem as the gradient for back propagation will be computed on only one time step. See: https://github.com/fchollet/keras/issues/3669

\n\n

My solution: use Tensorflow: In Tensorflow you can train with batch_size=50, num_steps=100, then do predictions with batch_size=1, num_steps=1. This is possible by creating a different model graph for training and prediction sharing the same RNN weight matrices. See this example for next-character prediction: https://github.com/sherjilozair/char-rnn-tensorflow/blob/master/model.py#L11 and blog post http://karpathy.github.io/2015/05/21/rnn-effectiveness/. Note that one graph can still only work with one specified batch_size, but you can setup multiple model graphs sharing weights in Tensorflow.

\n", + "system": "" + }, + { + "instruction": "Keras: class weights (class_weight) for one-hot encoding", + "input": "", + "output": "

Here's a solution that's a bit shorter and faster. If your one-hot encoded y is a np.array:

\n\n
import numpy as np\nfrom sklearn.utils.class_weight import compute_class_weight\n\ny_integers = np.argmax(y, axis=1)\nclass_weights = compute_class_weight('balanced', np.unique(y_integers), y_integers)\nd_class_weights = dict(enumerate(class_weights))\n
\n\n

d_class_weights can then be passed to class_weight in .fit.

\n", + "system": "" + }, + { + "instruction": "How to delete a locally uploaded file on google colab?", + "input": "", + "output": "

The answer from @Korakot works for a single file. In case you want to delete an entire folder, its subfolders, or multiple files,

\n\n

use

\n\n

!rm -rf <folder_name>

\n", + "system": "" + }, + { + "instruction": "What's the difference between "samples_per_epoch" and "steps_per_epoch" in fit_generator", + "input": "", + "output": "

When you use fit_generator, the number of samples processed for each epoch is batch_size * steps_per_epochs. From the Keras documentation for fit_generator: https://keras.io/models/sequential/

\n\n
\n

steps_per_epoch: Total number of steps (batches of samples) to yield from generator before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of unique samples of your dataset divided by the batch size.

\n
\n\n

This is different from the behaviour of 'fit', where increasing batch_size typically speeds up things.

\n\n

In conclusion, when you increase batch_size with fit_generator, you should decrease steps_per_epoch by the same factor, if you want training time to stay the same or lower.
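As a concrete sketch (names illustrative): steps_per_epoch is typically ceil(num_samples / batch_size), so doubling the batch size roughly halves the steps while keeping samples-per-epoch constant.

```python
import math

num_samples = 1000  # illustrative dataset size

def steps_per_epoch(batch_size):
    # One "step" consumes one batch, so an epoch covers the dataset once
    return math.ceil(num_samples / batch_size)

print(steps_per_epoch(32), steps_per_epoch(64))  # 32 16
```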

\n", + "system": "" + }, + { + "instruction": "Add dropout layers between pretrained dense layers in keras", + "input": "", + "output": "

I found an answer myself by using Keras functional API

\n\n
from keras.applications import VGG16\nfrom keras.layers import Dropout\nfrom keras.models import Model\n\nmodel = VGG16(weights='imagenet')\n\n# Store the fully connected layers\nfc1 = model.layers[-3]\nfc2 = model.layers[-2]\npredictions = model.layers[-1]\n\n# Create the dropout layers\ndropout1 = Dropout(0.85)\ndropout2 = Dropout(0.85)\n\n# Reconnect the layers\nx = dropout1(fc1.output)\nx = fc2(x)\nx = dropout2(x)\npredictors = predictions(x)\n\n# Create a new model\nmodel2 = Model(input=model.input, output=predictors)\n
\n\n

model2 has the dropout layers as I wanted

\n\n
____________________________________________________________________________________________________\nLayer (type)                     Output Shape          Param #     Connected to                     \n====================================================================================================\ninput_1 (InputLayer)             (None, 3, 224, 224)   0                                            \n____________________________________________________________________________________________________\nblock1_conv1 (Convolution2D)     (None, 64, 224, 224)  1792        input_1[0][0]                    \n____________________________________________________________________________________________________\nblock1_conv2 (Convolution2D)     (None, 64, 224, 224)  36928       block1_conv1[0][0]               \n____________________________________________________________________________________________________\nblock1_pool (MaxPooling2D)       (None, 64, 112, 112)  0           block1_conv2[0][0]               \n____________________________________________________________________________________________________\nblock2_conv1 (Convolution2D)     (None, 128, 112, 112) 73856       block1_pool[0][0]                \n____________________________________________________________________________________________________\nblock2_conv2 (Convolution2D)     (None, 128, 112, 112) 147584      block2_conv1[0][0]               \n____________________________________________________________________________________________________\nblock2_pool (MaxPooling2D)       (None, 128, 56, 56)   0           block2_conv2[0][0]               \n____________________________________________________________________________________________________\nblock3_conv1 (Convolution2D)     (None, 256, 56, 56)   295168      block2_pool[0][0]                \n____________________________________________________________________________________________________\nblock3_conv2 (Convolution2D)     (None, 256, 56, 56)   590080 
     block3_conv1[0][0]               \n____________________________________________________________________________________________________\nblock3_conv3 (Convolution2D)     (None, 256, 56, 56)   590080      block3_conv2[0][0]               \n____________________________________________________________________________________________________\nblock3_pool (MaxPooling2D)       (None, 256, 28, 28)   0           block3_conv3[0][0]               \n____________________________________________________________________________________________________\nblock4_conv1 (Convolution2D)     (None, 512, 28, 28)   1180160     block3_pool[0][0]                \n____________________________________________________________________________________________________\nblock4_conv2 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv1[0][0]               \n____________________________________________________________________________________________________\nblock4_conv3 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv2[0][0]               \n____________________________________________________________________________________________________\nblock4_pool (MaxPooling2D)       (None, 512, 14, 14)   0           block4_conv3[0][0]               \n____________________________________________________________________________________________________\nblock5_conv1 (Convolution2D)     (None, 512, 14, 14)   2359808     block4_pool[0][0]                \n____________________________________________________________________________________________________\nblock5_conv2 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv1[0][0]               \n____________________________________________________________________________________________________\nblock5_conv3 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv2[0][0]               \n____________________________________________________________________________________________________\nblock5_pool 
(MaxPooling2D)       (None, 512, 7, 7)     0           block5_conv3[0][0]               \n____________________________________________________________________________________________________\nflatten (Flatten)                (None, 25088)         0           block5_pool[0][0]                \n____________________________________________________________________________________________________\nfc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]                    \n____________________________________________________________________________________________________\ndropout_1 (Dropout)              (None, 4096)          0           fc1[0][0]                        \n____________________________________________________________________________________________________\nfc2 (Dense)                      (None, 4096)          16781312    dropout_1[0][0]                  \n____________________________________________________________________________________________________\ndropout_2 (Dropout)              (None, 4096)          0           fc2[1][0]                        \n____________________________________________________________________________________________________\npredictions (Dense)              (None, 1000)          4097000     dropout_2[0][0]                  \n====================================================================================================\nTotal params: 138,357,544\nTrainable params: 138,357,544\nNon-trainable params: 0\n____________________________________________________________________________________________________\n
\n", + "system": "" + }, + { + "instruction": "Error in "from keras.utils import to_categorical"", + "input": "", + "output": "

Keras is now fully integrated into TensorFlow, so importing standalone Keras causes an error.

\n

It should be imported as:

\n
from tensorflow.keras.utils import to_categorical\n
\n
\n

Avoid importing as:

\n
from keras.utils import to_categorical\n
\n
\n

It is safe to use\nfrom tensorflow.keras. instead of from keras. while importing all the necessary modules.

\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Conv2D, MaxPooling2D,Dropout\nfrom tensorflow.keras.layers import Dense, Activation, Flatten\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import backend as K \nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras import callbacks\n
\n", + "system": "" + }, + { + "instruction": "How to install h5py (needed for Keras) on MacOS with M1?", + "input": "", + "output": "

This works for me:

\n
$ brew install hdf5\n$ export HDF5_DIR="$(brew --prefix hdf5)"\n$ pip install --no-binary=h5py h5py\n
\n", + "system": "" + }, + { + "instruction": "RNN Regularization: Which Component to Regularize?", + "input": "", + "output": "

Regularizers that'll work best will depend on your specific architecture, data, and problem; as usual, there isn't a single cut to rule all, but there are do's and (especially) don't's, as well as systematic means of determining what'll work best - via careful introspection and evaluation.

\n\n
\n\n

How does RNN regularization work?

\n\n

Perhaps the best approach to understanding it is information-based. First, see \"How does 'learning' work?\" and \"RNN: Depth vs. Width\". To understand RNN regularization, one must understand how RNN handles information and learns, which the referred sections describe (though not exhaustively). Now to answer the question:

\n\n

RNN regularization's goal is any regularization's goal: maximizing information utility and traversal of the test loss function. The specific methods, however, tend to differ substantially for RNNs per their recurrent nature - and some work better than others; see below.

\n\n
\n\n

RNN regularization methods:

\n\n

WEIGHT DECAY

\n\n
    \n
  1. General: shrinks the norm ('average') of the weight matrix

    \n\n
  2. \n
  3. Recurrent weights: default activation='sigmoid'

    \n\n
  4. \n
  5. Kernel weights: for many-to-one (return_sequences=False), they work similar to weight decay on a typical layer (e.g. Dense). For many-to-many (=True), however, kernel weights operate on every timestep, so pros & cons similar to above will apply.

  6. \n
\n\n
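As a minimal illustration of the shrinking effect (a pure-Python sketch of a decoupled decay step, not Keras code):

```python
def decay_step(weights, lr=0.1, wd=0.01):
    """One decoupled weight-decay step: w <- w - lr * wd * w."""
    return [w * (1 - lr * wd) for w in weights]

w = [4.0, -2.0, 1.0]
for _ in range(100):
    w = decay_step(w)
# Every weight shrinks toward zero by the same multiplicative factor,
# so the norm ('average') of the matrix shrinks, as described above.
```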

Dropout:

\n\n\n\n

Batch Normalization:

\n\n\n\n

Weight Constraints: set hard upper-bound on weights l2-norm; possible alternative to weight decay.

\n\n

Activity Constraints: don't bother; for most purposes, if you have to manually constrain your outputs, the layer itself is probably learning poorly, and the solution is elsewhere.

\n\n
\n\n

What should I do? Lots of info - so here's some concrete advice:

\n\n
    \n
  1. Weight decay: try 1e-3, 1e-4, see which works better. Do not expect the same value of decay to work for kernel and recurrent_kernel, especially depending on architecture. Check weight shapes - if one is much smaller than the other, apply smaller decay to former

  2. \n
  3. Dropout: try 0.1. If you see improvement, try 0.2 - else, scrap it

  4. \n
  5. Recurrent Dropout: start with 0.2. Improvement --> 0.4. Improvement --> 0.5, else 0.3.

  6. \n
  7. Batch Normalization: try. Improvement --> keep it - else, scrap it.
  8. \n
  9. Recurrent Batchnorm: same as 4.
  10. \n
  11. Weight constraints: advisable w/ higher learning rates to prevent exploding gradients - else use higher weight decay
  12. \n
  13. Activity constraints: probably not (see above)
  14. \n
  15. Residual RNNs: introduce significant changes, along with a regularizing effect. See application in IndRNNs
  16. \n
  17. Biases: weight decay and constraints become important upon attaining good backpropagation properties; without them on bias weights but with them on kernel (K) & recurrent kernel (RK) weights, bias weights may grow much faster than the latter two, and dominate the transformation - also leading to exploding gradients. I recommend weight decay / constraint less than or equal to that used on K & RK. Also, with BatchNormalization, you cannot set use_bias=False as an \"equivalent\"; BN applies to outputs, not hidden-to-hidden transforms.
  18. \n
  19. Zoneout: don't know, never tried, might work - see paper.
  20. \n
  21. Layer Normalization: some report it working better than BN for RNNs - but my application found it otherwise; paper
  22. \n
  23. Data shuffling: is a strong regularizer. Also shuffle batch samples (samples in batch). See relevant info on stateful RNNs
  24. \n
  25. Optimizer: can be an inherent regularizer. Don't have a full explanation, but in my application, Nadam (& NadamW) has stomped every other optimizer - worth trying.
  26. \n
\n\n

Introspection: bottom section on 'learning' isn't worth much without this; don't just look at validation performance and call it a day - inspect the effect that adjusting a regularizer has on weights and activations. Evaluate using info toward bottom & relevant theory.

\n\n

BONUS: weight decay can be powerful - even more powerful when done right; turns out, adaptive optimizers like Adam can harm its effectiveness, as described in this paper. Solution: use AdamW. My Keras/TensorFlow implementation here.

\n\n
\n\n

This is too much! Agreed - welcome to Deep Learning. Two tips here:

\n\n
    \n
  1. Bayesian Optimization; will save you time especially on prohibitively expensive training.
  2. \n
  3. Conv1D(strides > 1), for many timesteps (>1000); slashes dimensionality, shouldn't harm performance (may in fact improve it).
  4. \n
\n\n
\n\n

Introspection Code:

\n\n

Gradients: see this answer

\n\n

Weights: see this answer

\n\n

Weight norm tracking: see this Q & A

\n\n

Activations: see this answer

\n\n

Weights: see_rnn.rnn_histogram or see_rnn.rnn_heatmap (examples in README)

\n\n
\n\n

How does 'learning' work?

\n\n

The 'ultimate truth' of machine learning that is seldom discussed or emphasized is, we don't have access to the function we're trying to optimize - the test loss function. All of our work is with what are approximations of the true loss surface - both the train set and the validation set. This has some critical implications:

\n\n
    \n
  1. Train set global optimum can lie very far from test set global optimum
  2. \n
  3. Local optima are unimportant, and irrelevant:\n\n
  4. \n
\n\n

Further, loss functions are way too complex to analyze directly; a better approach is to localize analysis to individual layers, their weight matrices, and roles relative to the entire NN. Two key considerations are:

\n\n
    \n
  1. Feature extraction capability. Ex: the driving mechanism of deep classifiers is, given input data, to increase class separability with each layer's transformation. Higher quality features will filter out irrelevant information, and deliver what's essential for the output layer (e.g. softmax) to learn a separating hyperplane.

  2. \n
  3. Information utility. Dead neurons, and extreme activations are major culprits of poor information utility; no single neuron should dominate information transfer, and too many neurons shouldn't lie purposeless. Stable activations and weight distributions enable gradient propagation and continued learning.

  4. \n
\n\n
\n\n

How does regularization work? read above first

\n\n

In a nutshell, via maximizing NN's information utility, and improving estimates of the test loss function. Each regularization method is unique, and no two exactly alike - see \"RNN regularizers\".

\n\n
\n\n

RNN: Depth vs. Width: not as simple as \"one is more nonlinear, other works in higher dimensions\".

\n\n\n\n
\n\n

Update:

\n\n

Here is an example of a near-ideal RNN gradient propagation for 170+ timesteps:

\n\n

\n\n

This is rare, and was achieved via careful regularization, normalization, and hyperparameter tuning. Usually we see a large gradient for the last few timesteps, which drops off sharply toward left - as here. Also, since the model is stateful and fits 7 equivalent windows, gradient effectively spans 1200 timesteps.

\n\n

Update 2: see 9 w/ new info & correction

\n\n

Update 3: add weight norms & weights introspection code

\n", + "system": "" + }, + { + "instruction": "Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (339732, 29)", + "input": "", + "output": "

Setting timesteps = 1 (since I want one timestep for each instance) and reshaping X_train and X_test as:

\n\n
import numpy as np\nX_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))\nX_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))\n
\n\n

This worked!

\n", + "system": "" + }, + { + "instruction": "How to change Keras backend (where's the json file)?", + "input": "", + "output": "

After looking at keras sources (this place):

\n\n

Start up your python-binary and do the following

\n\n
import os\nprint(os.path.expanduser('~'))\n# >>> C:\\\\Users\\\\Sascha'  # will look different for different OS\n
\n\n\n", + "system": "" + }, + { + "instruction": "Compute class weight function issue in 'sklearn' library when used in 'Keras' classification (Python 3.8, only in VS code)", + "input": "", + "output": "

After spending a lot of time, this is how I fixed it. I still don't know why but when the code is modified as follows, it works fine. I got the idea after seeing this solution for a similar but slightly different issue.

\n
class_weights = compute_class_weight(\n                                        class_weight = "balanced",\n                                        classes = np.unique(train_classes),\n                                        y = train_classes                                                    \n                                    )\nclass_weights = dict(zip(np.unique(train_classes), class_weights))\nclass_weights\n
\n", + "system": "" + }, + { + "instruction": "record the computation time for each epoch in Keras during model.fit()", + "input": "", + "output": "

Try the following callback:

\n\n
import time\nimport keras\n\nclass TimeHistory(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self.times = []\n\n    def on_epoch_begin(self, batch, logs={}):\n        self.epoch_time_start = time.time()\n\n    def on_epoch_end(self, batch, logs={}):\n        self.times.append(time.time() - self.epoch_time_start)\n
\n\n

Then:

\n\n
time_callback = TimeHistory()\nmodel.fit(..., callbacks=[..., time_callback],...)\ntimes = time_callback.times\n
\n\n

In this case times should store the epoch computation times.

\n", + "system": "" + }, + { + "instruction": "Error "Keras requires TensorFlow 2.2 or higher"", + "input": "", + "output": "

I had the same issue, caused by the latest Keras release. Here is what I remember doing:

\n\n

1-Upgrade tensorflow:

\n\n
  pip install --user --upgrade tensorflow-gpu\n
\n\n

(there might be some missing packages, just pip install them)

\n\n

2-Upgrade Tensorboard

\n\n
pip install --user --upgrade tensorboard\n
\n\n

(there might be some missing packages, just pip install them)

\n\n

3-Downgrade Keras

\n\n
pip install keras==2.3.1\n
\n\n

(latest version working for me)

\n\n

4-Downgrade tensorflow-gpu

\n\n
pip install --user --upgrade tensorflow-gpu==1.14.0\n
\n\n

(latest version working for me)

\n\n

Let me know if it worked!

\n\n
\n\n

Anaconda 2020.02

\n\n

Python 3.7

\n\n

CPU i3 8100

\n\n

OS Windows 10 64

\n\n

Nvidia GPU GTX1050TI

\n\n

CUDA 10.1

\n", + "system": "" + }, + { + "instruction": "Early stopping with Keras and sklearn GridSearchCV cross-validation", + "input": "", + "output": "

[Answer after the question was edited & clarified:]

\n\n

Before rushing into implementation issues, it is always a good practice to take some time to think about the methodology and the task itself; arguably, intermingling early stopping with the cross validation procedure is not a good idea.

\n\n

Let's make up an example to highlight the argument.

\n\n

Suppose that you indeed use early stopping with 100 epochs, and 5-fold cross validation (CV) for hyperparameter selection. Suppose also that you end up with a hyperparameter set X giving best performance, say 89.3% binary classification accuracy.

\n\n

Now suppose that your second-best hyperparameter set, Y, gives 89.2% accuracy. Examining closely the individual CV folds, you see that, for your best case X, 3 out of the 5 CV folds exhausted the max 100 epochs, while in the other 2 early stopping kicked in, say in 95 and 93 epochs respectively.

\n\n

Now imagine that, examining your second-best set Y, you see that again 3 out of the 5 CV folds exhausted the 100 epochs, while the other 2 both stopped early enough at ~ 80 epochs.

\n\n

What would be your conclusion from such an experiment?

\n\n

Arguably, you would have found yourself in an inconclusive situation; further experiments might reveal which is actually the best hyperparameter set, provided of course that you would have thought to look into these details of the results in the first place. And needless to say, if all this was automated through a callback, you might have missed your best model despite the fact that you would have actually tried it.

\n\n
\n\n

The whole CV idea is implicitly based on the \"all other being equal\" argument (which of course is never true in practice, only approximated in the best possible way). If you feel that the number of epochs should be a hyperparameter, just include it explicitly in your CV as such, rather than inserting it through the back door of early stopping, thus possibly compromising the whole process (not to mention that early stopping has itself a hyperparameter, patience).

\n\n

Not intermingling these two techniques doesn't mean of course that you cannot use them sequentially: once you have obtained your best hyperparameters through CV, you can always employ early stopping when fitting the model in your whole training set (provided of course that you do have a separate validation set).

\n\n
\n\n

The field of deep neural nets is still (very) young, and it is true that it has yet to establish its \"best practice\" guidelines; add the fact that, thanks to an amazing community, there are all sort of tools available in open source implementations, and you can easily find yourself into the (admittedly tempting) position of mixing things up just because they happen to be available. I am not necessarily saying that this is what you are attempting to do here - I am just urging for more caution when combining ideas that may have not been designed to work along together...

\n", + "system": "" + }, + { + "instruction": "Keras flowFromDirectory get file names as they are being generated", + "input": "", + "output": "

Yes, it is possible, at least with version 2.0.4 (I don't know about earlier versions).

\n\n

The instance of ImageDataGenerator().flow_from_directory(...) has an attribute filenames, which is a list of all the files in the order the generator yields them, and also an attribute batch_index. So you can do it like this:

\n\n
datagen = ImageDataGenerator()\ngen = datagen.flow_from_directory(...)\n
\n\n

And every iteration on generator you can get the corresponding filenames like this:

\n\n
for i in gen:\n    idx = (gen.batch_index - 1) * gen.batch_size\n    print(gen.filenames[idx : idx + gen.batch_size])\n
\n\n
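As a Keras-free sanity check of the same indexing (made-up filenames; the -1 accounts for batch_index already pointing at the next batch after a yield):

```python
def batch_filenames(filenames, batch_size, batch_index):
    """Filenames of the batch that was just yielded."""
    idx = (batch_index - 1) * batch_size
    return filenames[idx: idx + batch_size]

files = [f"img_{i}.png" for i in range(7)]  # made-up names
second = batch_filenames(files, 3, 2)  # after the 2nd yield: img_3..img_5
last = batch_filenames(files, 3, 3)    # final, partial batch: just img_6
```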

This will give you the filenames of the images in the current batch.

\n", + "system": "" + }, + { + "instruction": "How to use advanced activation layers in Keras?", + "input": "", + "output": "

The correct way to use an advanced activation like PReLU is to add it as its own layer with the add() method, rather than wrapping it in an Activation class. Example:

\n\n
model = Sequential()\nact = keras.layers.advanced_activations.PReLU(init='zero', weights=None)\nmodel.add(Dense(64, input_dim=14, init='uniform'))\nmodel.add(act)\n
\n", + "system": "" + }, + { + "instruction": "How does binary cross entropy loss work on autoencoders?", + "input": "", + "output": "

In the context of autoencoders the input and output of the model is the same. So, if the input values are in the range [0,1] then it is acceptable to use sigmoid as the activation function of last layer. Otherwise, you need to use an appropriate activation function for the last layer (e.g. linear which is the default one).

\n\n

As for the loss function, it comes back to the values of input data again. If the input data consist only of zeros and ones (and not values in between), then binary_crossentropy is acceptable as the loss function. Otherwise, you need to use other loss functions such as 'mse' (i.e. mean squared error) or 'mae' (i.e. mean absolute error). Note that in the case of input values in range [0,1] you can use binary_crossentropy, as it is usually used (e.g. Keras autoencoder tutorial and this paper). However, don't expect that the loss value becomes zero since binary_crossentropy does not return zero when both prediction and label are not either zero or one (no matter whether they are equal or not). Here is a video from Hugo Larochelle where he explains the loss functions used in autoencoders (the part about using binary_crossentropy with inputs in range [0,1] starts at 5:30)

\n\n

Concretely, in your example, you are using the MNIST dataset. So by default the values of MNIST are integers in the range [0, 255]. Usually you need to normalize them first:

\n\n
trainX = trainX.astype('float32')\ntrainX /= 255.\n
\n\n

Now the values would be in range [0,1]. So sigmoid can be used as the activation function and either of binary_crossentropy or mse as the loss function.

\n\n
\n\n

Why binary_crossentropy can be used even when the true label values (i.e. ground-truth) are in the range [0,1]?

\n\n

Note that we are trying to minimize the loss function in training. So if the loss function we have used reaches its minimum value (which may not necessarily be equal to zero) when the prediction is equal to the true label, then it is an acceptable choice. Let's verify this is the case for binary cross-entropy, which is defined as follows:

\n\n
bce_loss = -y*log(p) - (1-y)*log(1-p)\n
\n\n

where y is the true label and p is the predicted value. Let's consider y as fixed and see what value of p minimizes this function: we need to take the derivative with respect to p (I have assumed the log is the natural logarithm function for simplicity of calculations):

\n\n
bce_loss_derivative = -y*(1/p) - (1-y)*(-1/(1-p)) = 0 =>\n                      -y/p + (1-y)/(1-p) = 0 =>\n                      -y*(1-p) + (1-y)*p = 0 =>\n                      -y + y*p + p - y*p = 0 =>\n                       p - y = 0 => y = p\n
\n\n
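A quick numeric check of this derivation (stdlib only, toy values): for a fractional label y, the loss is indeed smallest at p = y, and that minimum is nonzero:

```python
import math

def bce(y, p):
    return -y * math.log(p) - (1 - y) * math.log(1 - p)

y = 0.3  # a fractional "label", e.g. a pixel intensity in [0, 1]
losses = {p: bce(y, p) for p in (0.1, 0.3, 0.5, 0.9)}
best = min(losses, key=losses.get)  # smallest loss at p == y == 0.3
# Note the minimum itself is positive: bce(0.3, 0.3) ~ 0.611, not 0.
```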

As you can see, binary cross-entropy has its minimum value when p=y, i.e. when the prediction is equal to the true label, and this is exactly what we are looking for.

\n", + "system": "" + }, + { + "instruction": "OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable", + "input": "", + "output": "

I had this problem running numpy on an ubuntu server. I got all of the following errors, depending on whether I tried to import numpy in a shell or running my django app:

\n\n\n\n

I'm posting this answer since it drove me crazy. What helped for me was to add:

\n\n
import os\nos.environ['OPENBLAS_NUM_THREADS'] = '1'\n
\n\n

before

\n\n
import numpy as np\n
\n\n

I guess the server had some limit on the number of threads it allows(?). Hope it helps someone!

\n", + "system": "" + }, + { + "instruction": "How to get the output shape of a layer in Keras?", + "input": "", + "output": "

You can get the output shape of a layer by layer.output_shape.

\n\n
for layer in model.layers:\n    print(layer.output_shape)\n
\n\n

Gives you:

\n\n
(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 40)\n(None, None, 64, 64, 1)\n
\n\n

Alternatively you can pretty print the model using model.summary:

\n\n
model.summary()\n
\n\n

Gives you the details about the number of parameters and output shapes of each layer and an overall model structure in a pretty format:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nconv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)  59200     \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)  115360    \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, None, 64, 64, 40)  160       \n_________________________________________________________________\nconv3d_1 (Conv3D)            (None, None, 64, 64, 1)   1081      \n=================================================================\nTotal params: 407,001\nTrainable params: 406,681\nNon-trainable params: 320\n_________________________________________________________________\n
\n\n

If you want to access information about a specific layer only, you can use name argument when constructing that layer and then call like this:

\n\n
...\nmodel.add(ConvLSTM2D(..., name='conv3d_0'))\n...\n\nmodel.get_layer('conv3d_0')\n
\n\n
\n\n

EDIT: For reference's sake, it will always be the same as layer.output_shape, and please don't actually use Lambda or custom layers for this. But you can use a Lambda layer to echo the shape of a passing tensor.

\n\n
...\ndef print_tensor_shape(x):\n    print(x.shape)\n    return x\nmodel.add(Lambda(print_tensor_shape))\n...\n
\n\n

Or write a custom layer and print the shape of the tensor on call().

\n\n
class echo_layer(Layer):\n...\n    def call(self, x):\n        print(x.shape)\n        return x\n...\n\nmodel.add(echo_layer())\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name '_obtain_input_shape' from keras", + "input": "", + "output": "

You don't have to downgrade Keras 2.2.2.

\n\n

In Keras 2.2.2 there is no _obtain_input_shape method in the keras.applications.imagenet_utils module. You can find it under keras-applications with the modul name keras_applications (underscore).

\n\n

So you don't have to downgrade your Keras to 2.2.0 just change:

\n\n
from keras.applications.imagenet_utils import _obtain_input_shape\n
\n\n

to

\n\n
from keras_applications.imagenet_utils import _obtain_input_shape\n
\n", + "system": "" + }, + { + "instruction": "Why does my training loss have regular spikes?", + "input": "", + "output": "

I've figured it out myself:

\n\n

TL;DR:

\n\n

Make sure your loss magnitude is independent of your mini-batch size.

\n\n

The long explanation:

\n\n

In my case the issue was Keras-specific after all.

\n\n

Maybe the solution to this problem will be useful for someone at some point.

\n\n

It turns out that Keras divides the loss by the mini-batch size. The important thing to understand here is that it's not the loss function itself that averages over the batch size, but rather the averaging happens somewhere else in the training process.

\n\n

Why does this matter?

\n\n

The model I am training, SSD, uses a rather complicated multi-task loss function that does its own averaging (not by the batch size, but by the number of ground truth bounding boxes in the batch). Now if the loss function already divides the loss by some number that is correlated with the batch size, and afterwards Keras divides by the batch size a second time, then all of a sudden the magnitude of the loss value starts to depend on the batch size (to be precise, it becomes inversely proportional to the batch size).

\n\n

Now usually the number of samples in your dataset is not an integer multiple of the batch size you choose, so the very last mini-batch of an epoch (here I implicitly define an epoch as one full pass over the dataset) will end up containing fewer samples than the batch size. This is what messes up the magnitude of the loss if it depends on the batch size, and in turn messes up the magnitude of gradient. Since I'm using an optimizer with momentum, that messed up gradient continues influencing the gradients of a few subsequent training steps, too.

\n\n
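The effect is easy to see with made-up numbers: if the loss is already averaged internally, Keras' extra division by the actual batch size inflates the reported loss for the short final batch:

```python
n_samples, batch_size = 1000, 64
last_batch = n_samples % batch_size or batch_size  # 40 samples left over
already_averaged_loss = 100.0  # hypothetical loss, internally averaged

# Keras then divides by the *actual* batch size a second time:
full_batch_loss = already_averaged_loss / batch_size   # ~1.56
short_batch_loss = already_averaged_loss / last_batch  # 2.5 -> the spike
```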

Once I adjusted the loss function by multiplying the loss by the batch size (thus reverting Keras' subsequent division by the batch size), everything was fine: No more spikes in the loss.

\n", + "system": "" + }, + { + "instruction": "Extremely slow model load with keras", + "input": "", + "output": "

I solved the problem by clearing the keras session before each load

\n\n
from keras import backend as K\nfor i in range(...):\n  K.clear_session()\n  model = load_model(...)\n
\n", + "system": "" + }, + { + "instruction": "How the number of parameters associated with BatchNormalization layer is 2048?", + "input": "", + "output": "

These 2048 parameters are in fact [gamma weights, beta weights, moving_mean(non-trainable), moving_variance(non-trainable)], each having 512 elements (the size of the input layer).
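The count can be checked by hand, one value per input channel for each of the four vectors:

```python
channels = 512  # size of the layer feeding into BatchNormalization
vectors_per_channel = 4  # gamma, beta, moving_mean, moving_variance
total = vectors_per_channel * channels  # 2048
trainable = 2 * channels      # gamma + beta
non_trainable = 2 * channels  # the two moving statistics
```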

\n", + "system": "" + }, + { + "instruction": "Implementing skip connections in keras", + "input": "", + "output": "

The easy answer is don't use a sequential model for this, use the functional API instead, implementing skip connections (also called residual connections) are then very easy, as shown in this example from the functional API guide:

\n\n
from keras.layers import merge, Convolution2D, Input\n\n# input tensor for a 3-channel 256x256 image\nx = Input(shape=(3, 256, 256))\n# 3x3 conv with 3 output channels (same as input channels)\ny = Convolution2D(3, 3, 3, border_mode='same')(x)\n# this returns x + y.\nz = merge([x, y], mode='sum')\n
\n", + "system": "" + }, + { + "instruction": "Keras-tuner Hyperband runing only 2 epochs", + "input": "", + "output": "

You can change the factor parameter to change that.\nBy default it is set to 3, but you can increase this number to get more than 2 epochs per trial.

\n

see : docs

\n
\n

The Hyperband tuning algorithm uses adaptive resource allocation and early-stopping to quickly converge on a high-performing model. This is done using a sports championship style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log_factor(max_epochs) and rounding it up to the nearest integer.

\n
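Sketching the quoted formula in Python (my own helper, not keras-tuner's internal code):

```python
import math

def hyperband_brackets(max_epochs, factor=3):
    """1 + log_factor(max_epochs), rounded up to the nearest integer."""
    return math.ceil(1 + math.log(max_epochs) / math.log(factor))

print(hyperband_brackets(10, factor=3))   # 4 brackets with the default factor
print(hyperband_brackets(10, factor=10))  # 2 brackets with a larger factor
```

A larger factor means fewer brackets and a more aggressive early-stopping schedule, which is why increasing it gives each surviving trial more epochs.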
\n", + "system": "" + }, + { + "instruction": "Why Bother With Recurrent Neural Networks For Structured Data?", + "input": "", + "output": "

In practice even in NLP you see that RNNs and CNNs are often competitive. Here's a 2017 review paper that shows this in more detail. In theory it might be the case that RNNs can handle the full complexity and sequential nature of language better but in practice the bigger obstacle is usually properly training the network and RNNs are finicky.

\n\n

Another problem that might have a chance of working would be the balanced parenthesis problem (either with just parentheses in the strings, or parentheses along with other distractor characters). This requires processing the inputs sequentially and tracking some state, and might be easier to learn with an LSTM than an FFN.

\n\n
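A possible way to generate labelled data for that toy problem (the helper names here are my own, illustrative choices):

```python
import random

def random_paren_string(length, alphabet="()ab"):
    """Random string mixing parentheses with distractor characters."""
    return "".join(random.choice(alphabet) for _ in range(length))

def is_balanced(s):
    """Label: True iff the parentheses in s are balanced."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a ')' closed before any '(' opened
                return False
    return depth == 0

random.seed(0)
samples = [random_paren_string(8) for _ in range(4)]
labels = [is_balanced(s) for s in samples]
print(list(zip(samples, labels)))
```

The label depends on the order of the characters, so unlike the summation tasks above it cannot be solved by treating the input as an unordered bag of features.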

Update:\nSome data that looks sequential might not actually have to be treated sequentially. For example, even if you provide a sequence of numbers to add, an FFN will do just as well as an RNN, since addition is commutative. This could also be true of many health problems where the dominating information is not of a sequential nature. Suppose a patient's smoking habits are measured every year. From a behavioral standpoint the trajectory is important, but if you're predicting whether the patient will develop lung cancer, the prediction will be dominated by just the number of years the patient smoked (maybe restricted to the last 10 years for the FFN).

\n\n

So you want to make the toy problem more complex, requiring it to take the ordering of the data into account. Maybe some kind of simulated time series, where you want to predict whether there was a spike in the data, but you don't care about absolute values, just about the relative nature of the spike.

\n\n

Update2

\n\n

I modified your code to show a case where RNNs perform better. The trick was to use more complex conditional logic that is more naturally modeled in LSTMs than in FFNs. The code is below. For 8 columns we see that the FFN trains in 1 minute and reaches a validation loss of 6.3. The LSTM takes 3x longer to train, but its final validation loss is 6x lower at 1.06.

\n\n

As we increase the number of columns the LSTM has a larger and larger advantage, especially if we add more complicated conditions. For 16 columns the FFN's validation loss is 19 (and you can more clearly see the training curve, as the model isn't able to instantly fit the data). In comparison, the LSTM takes 11 times longer to train but has a validation loss of 0.31, 30 times smaller than the FFN! You can play around with even larger matrices to see how far this trend will extend.

\n\n
from keras import models\nfrom keras import layers\n\nfrom keras.layers import Dense, LSTM\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport time\n\nmatplotlib.use('Agg')\n\nnp.random.seed(20180908)\n\nrows = 20500\ncols = 10\n\n# Randomly generate Z\nZ = 100*np.random.uniform(0.05, 1.0, size = (rows, cols))\n\nlarger = np.max(Z[:, :cols/2], axis=1).reshape((rows, 1))\nlarger2 = np.max(Z[:, cols/2:], axis=1).reshape((rows, 1))\nsmaller = np.min((larger, larger2), axis=0)\n# Z is now the max of the first half of the array.\nZ = np.append(Z, larger, axis=1)\n# Z is now the min of the max of each half of the array.\n# Z = np.append(Z, smaller, axis=1)\n\n# Combine and shuffle.\n\n#Z = np.concatenate((Z_sum, Z_avg), axis = 0)\n\nnp.random.shuffle(Z)\n\n## Training and validation data.\n\nsplit = 10000\n\nX_train = Z[:split, :-1]\nX_valid = Z[split:, :-1]\nY_train = Z[:split, -1:].reshape(split, 1)\nY_valid = Z[split:, -1:].reshape(rows - split, 1)\n\nprint(X_train.shape)\nprint(Y_train.shape)\nprint(X_valid.shape)\nprint(Y_valid.shape)\n\nprint(\"Now setting up the FNN\")\n\n## FNN model.\n\ntick = time.time()\n\n# Define model.\n\nnetwork_fnn = models.Sequential()\nnetwork_fnn.add(layers.Dense(32, activation = 'relu', input_shape = (X_train.shape[1],)))\nnetwork_fnn.add(Dense(1, activation = None))\n\n# Compile model.\n\nnetwork_fnn.compile(optimizer = 'adam', loss = 'mean_squared_error')\n\n# Fit model.\n\nhistory_fnn = network_fnn.fit(X_train, Y_train, epochs = 500, batch_size = 128, verbose = False,\n    validation_data = (X_valid, Y_valid))\n\ntock = time.time()\n\nprint()\nprint(str('%.2f' % ((tock - tick) / 60)) + ' minutes.')\n\nprint(\"Now evaluating the FNN\")\n\nloss_fnn = history_fnn.history['loss']\nval_loss_fnn = history_fnn.history['val_loss']\nepochs_fnn = range(1, len(loss_fnn) + 1)\nprint(\"train loss: \", loss_fnn[-1])\nprint(\"validation loss: \", val_loss_fnn[-1])\n\nplt.plot(epochs_fnn, loss_fnn, 'black', label = 
'Training Loss')\nplt.plot(epochs_fnn, val_loss_fnn, 'red', label = 'Validation Loss')\nplt.title('FNN: Training and Validation Loss')\nplt.legend()\nplt.show()\n\nplt.scatter(Y_train, network_fnn.predict(X_train), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('training points')\nplt.show()\n\nplt.scatter(Y_valid, network_fnn.predict(X_valid), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('valid points')\nplt.show()\n\nprint(\"LSTM\")\n\n## LSTM model.\n\nX_lstm_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)\nX_lstm_valid = X_valid.reshape(X_valid.shape[0], X_valid.shape[1], 1)\n\ntick = time.time()\n\n# Define model.\n\nnetwork_lstm = models.Sequential()\nnetwork_lstm.add(layers.LSTM(32, activation = 'relu', input_shape = (X_lstm_train.shape[1], 1)))\nnetwork_lstm.add(layers.Dense(1, activation = None))\n\n# Compile model.\n\nnetwork_lstm.compile(optimizer = 'adam', loss = 'mean_squared_error')\n\n# Fit model.\n\nhistory_lstm = network_lstm.fit(X_lstm_train, Y_train, epochs = 500, batch_size = 128, verbose = False,\n    validation_data = (X_lstm_valid, Y_valid))\n\ntock = time.time()\n\nprint()\nprint(str('%.2f' % ((tock - tick) / 60)) + ' minutes.')\n\nprint(\"now eval\")\n\nloss_lstm = history_lstm.history['loss']\nval_loss_lstm = history_lstm.history['val_loss']\nepochs_lstm = range(1, len(loss_lstm) + 1)\nprint(\"train loss: \", loss_lstm[-1])\nprint(\"validation loss: \", val_loss_lstm[-1])\n\nplt.plot(epochs_lstm, loss_lstm, 'black', label = 'Training Loss')\nplt.plot(epochs_lstm, val_loss_lstm, 'red', label = 'Validation Loss')\nplt.title('LSTM: Training and Validation Loss')\nplt.legend()\nplt.show()\n\nplt.scatter(Y_train, network_lstm.predict(X_lstm_train), alpha = 0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title('training')\nplt.show()\n\nplt.scatter(Y_valid, network_lstm.predict(X_lstm_valid), alpha = 
0.1)\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.title(\"validation\")\nplt.show()\n
\n", + "system": "" + }, + { + "instruction": "How To Determine the 'filter' Parameter in the Keras Conv2D Function", + "input": "", + "output": "

Actually, there is no good answer to your question. Most architectures are carefully designed and finetuned during many experiments. I can share some of the rules of thumb one should apply when designing one's own architecture:

\n\n
    \n
  1. Avoid a dimension collapse in the first layer. Let's assume that the filters in your first layer have an (n, n) spatial shape for an RGB image. In this case, it is good practice to set the number of filters to be greater than n * n * 3, as this is the dimensionality of the input to a single filter. If you set a smaller number, you could suffer from the fact that many useful pieces of information about the image are lost due to an initialization which dropped informative dimensions. Of course, this is not a general rule - e.g. for texture recognition, where image complexity is lower, a small number of filters might actually help.

  2. \n
  3. Think more about volume than filter count - when setting the number of filters, it's important to think about the change of volume instead of the change of filter count between consecutive layers. E.g. in VGG - even though the number of filters doubles after a pooling layer - the actual feature-map volume decreases by a factor of 2, because pooling shrinks the feature map by a factor of 4. Usually, decreasing the volume by a factor of more than 3 should be considered bad practice. Most modern architectures use a volume drop factor between 1 and 2. Still - this is not a general rule - e.g. in the case of a narrow hierarchy, a greater volume drop might actually help.

  4. \n
  5. Avoid bottlenecking. As one may read in this milestone paper, bottlenecking might seriously harm your training process. It occurs when the volume drop is too severe. Of course, this can still be done - but then you should use intelligent downsampling, as used e.g. in Inception v>2

  6. \n
  7. Check 1x1 convolutions - it's believed that filter activations are highly correlated. One may take advantage of this by using 1x1 convolutions - namely, convolutions with a filter size of 1. This makes it possible e.g. to drop volume with them instead of pooling or intelligent downsampling (see example here). You could e.g. build twice as many filters and then cut 25% of them by using 1x1 convs as a consecutive layer.

  8. \n
\n\n

As you may see. There is no easy way to choose the number of filters. Except for the hints above, I'd like to share with you one of my favorite sanity checks on the number of filters. It takes 2 easy steps:

\n\n
    \n
  1. Try to overfit on 500 random images with regularization.
  2. \n
  3. Try to overfit on the whole dataset without any regularization.
  4. \n
\n\n

Usually, if the number of filters is too low (in general), these two tests will show you that. If your network severely overfits during training even with regularization, this is a clear indicator that your network has way too many filters.

\n\n

Cheers.

\n", + "system": "" + }, + { + "instruction": "What's the difference between LSTM() and LSTMCell()?", + "input": "", + "output": "\n\n

A recurrent layer contains a cell object. The cell contains the core code for the calculations of each step, while the recurrent layer commands the cell and performs the actual recurrent calculations.

\n\n

Usually, people use LSTM layers in their code.
\nOr they use RNN layers containing LSTMCell.

\n\n

Both things are almost the same. An LSTM layer is an RNN layer using an LSTMCell, as you can check out in the source code.

\n\n
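The layer-wraps-cell relationship can be sketched abstractly (toy stand-ins with placeholder arithmetic, not the Keras source):

```python
class ToyCell:
    """Stand-in for LSTMCell: computes a single timestep."""
    def __init__(self, units):
        self.units = units

    def step(self, x_t, state):
        new_state = state + x_t        # placeholder arithmetic for the real gates
        return new_state, new_state    # (output, new state)

class ToyRecurrentLayer:
    """Stand-in for keras.layers.RNN: drives the cell over every timestep."""
    def __init__(self, cell):
        self.cell = cell

    def __call__(self, sequence):
        output, state = None, 0
        for x_t in sequence:
            output, state = self.cell.step(x_t, state)
        return output                  # return the last output only

layer = ToyRecurrentLayer(ToyCell(units=32))
print(layer([1, 2, 3]))  # 6: the "state" accumulated over three steps
```

The cell knows how to do one step; the layer owns the loop over time. That is exactly the division of labor between LSTMCell and the recurrent layer that wraps it.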

About the number of cells:

\n\n

Although its name suggests that LSTMCell is a single cell, it is actually an object that manages all the units/cells. In the same code mentioned above, you can see that the units argument is used when creating an instance of LSTMCell.

\n", + "system": "" + }, + { + "instruction": "What is the difference between the predict and predict_on_batch methods of a Keras model?", + "input": "", + "output": "

The difference lies in when you pass as x data that is larger than one batch.

\n\n

predict will go through all the data, batch by batch, predicting labels.\nIt thus internally does the splitting into batches and the feeding of one batch at a time.

\n\n
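The splitting behaviour can be sketched in plain Python (illustrative pseudologic, not Keras' actual implementation):

```python
import numpy as np

def predict_in_batches(forward_fn, x, batch_size=32):
    """Mimic predict: slice x into batches, run each, concatenate the results."""
    outputs = []
    for start in range(0, len(x), batch_size):
        outputs.append(forward_fn(x[start:start + batch_size]))
    return np.concatenate(outputs, axis=0)

# a stand-in "network" that simply doubles its input
forward = lambda batch: batch * 2.0

x = np.arange(100, dtype=np.float32).reshape(100, 1)
y = predict_in_batches(forward, x)    # runs 4 internal batches of <= 32 samples
print(y.shape)  # (100, 1)
```

predict_on_batch corresponds to a single `forward_fn(x)` call on the whole array, with no slicing loop.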

predict_on_batch, on the other hand, assumes that the data you pass in is exactly one batch and thus feeds it to the network. It won't try to split it (which, depending on your setup, might prove problematic for your GPU memory if the array is very big)

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Model' object has no attribute 'predict_classes'", + "input": "", + "output": "

The predict_classes method is only available for the Sequential class (which is the class of your first model) but not for the Model class (the class of your second model).

\n\n

With the Model class, you can use the predict method which will give you a vector of probabilities and then get the argmax of this vector (with np.argmax(y_pred1,axis=1)).
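The argmax step, with made-up probability values for illustration:

```python
import numpy as np

# pretend model.predict returned these class probabilities for 3 samples
y_pred1 = np.array([[0.1, 0.7, 0.2],
                    [0.8, 0.1, 0.1],
                    [0.3, 0.3, 0.4]])

classes = np.argmax(y_pred1, axis=1)   # index of the largest probability per row
print(classes)  # [1 0 2]
```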

\n", + "system": "" + }, + { + "instruction": "Difference between Dense and Activation layer in Keras", + "input": "", + "output": "

Using Dense(activation=softmax) is computationally equivalent to first adding Dense and then adding Activation(softmax). However, there is one advantage of the second approach - you can retrieve the outputs of the last layer (before activation) from such a model. With the first approach - it's impossible.

\n", + "system": "" + }, + { + "instruction": "Where to find a documentation about default weight initializer in Keras?", + "input": "", + "output": "

Each layer has its own default value for initializing the weights. For most of the layers, such as Dense, convolution and RNN layers, the default kernel initializer is 'glorot_uniform' and the default bias initializer is 'zeros' (you can find this by going to the related section for each layer in the documentation; for example here is the Dense layer doc). You can find the definition of the glorot_uniform initializer here in the Keras documentation.

\n
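For reference, glorot_uniform samples from U[-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)); a quick sketch of that arithmetic:

```python
import math

def glorot_uniform_limit(fan_in, fan_out):
    # bound of the uniform distribution used by Glorot/Xavier initialization
    return math.sqrt(6.0 / (fan_in + fan_out))

# e.g. a Dense layer mapping 784 inputs to 128 units
print(round(glorot_uniform_limit(784, 128), 4))  # 0.0811
```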

As for accessing the weights of each layer, it has already been answered here.

\n", + "system": "" + }, + { + "instruction": "How to standard scale a 3D matrix?", + "input": "", + "output": "

With only 3 lines of code...

\n\n
scaler = StandardScaler()\nX_train = scaler.fit_transform(X_train.reshape(-1, X_train.shape[-1])).reshape(X_train.shape)\nX_test = scaler.transform(X_test.reshape(-1, X_test.shape[-1])).reshape(X_test.shape)\n
\n", + "system": "" + }, + { + "instruction": "Keras Tokenizer num_words doesn't seem to work", + "input": "", + "output": "

There is nothing wrong in what you are doing. word_index is computed the same way no matter how many most-frequent words you will use later (as you may see here). So when you call any transformative method, Tokenizer will use only the three most common words, while at the same time keeping the counter of all words - even though it's obvious that it will not use them later.
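The behaviour can be mimicked in plain Python (an illustrative sketch, not the actual Tokenizer source):

```python
from collections import Counter

texts = ["the cat sat", "the cat ran", "the dog sat here"]
counts = Counter(w for t in texts for w in t.split())

# word_index ranks ALL words by frequency, regardless of num_words
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}
print(word_index)

# only the transformative methods apply the num_words cut-off
num_words = 3  # keeps indices 1 .. num_words - 1, like Keras does
sequences = [[word_index[w] for w in t.split() if word_index[w] < num_words]
             for t in texts]
print(sequences)  # [[1, 2], [1, 2], [1]]
```

word_index contains every word, but the produced sequences only ever use the top num_words - 1 indices.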

\n", + "system": "" + }, + { + "instruction": "How to Implement the Conv1DTranspose in keras?", + "input": "", + "output": "

Use the Keras backend to fit the input tensor into a 2D transpose convolution. Do not always use the transpose operation, for it will consume a lot of time.

\n\n
import keras.backend as K\nfrom keras.layers import Conv2DTranspose, Lambda\n\n\ndef Conv1DTranspose(input_tensor, filters, kernel_size, strides=2, padding='same'):\n    \"\"\"\n        input_tensor: tensor, with the shape (batch_size, time_steps, dims)\n        filters: int, output dimension, i.e. the output tensor will have the shape of (batch_size, time_steps, filters)\n        kernel_size: int, size of the convolution kernel\n        strides: int, convolution step size\n        padding: 'same' | 'valid'\n    \"\"\"\n    x = Lambda(lambda x: K.expand_dims(x, axis=2))(input_tensor)\n    x = Conv2DTranspose(filters=filters, kernel_size=(kernel_size, 1), strides=(strides, 1), padding=padding)(x)\n    x = Lambda(lambda x: K.squeeze(x, axis=2))(x)\n    return x\n
\n", + "system": "" + }, + { + "instruction": "How to save Scikit-Learn-Keras Model into a Persistence File (pickle/hd5/json/yaml)", + "input": "", + "output": "

Edit 1 : Original answer about saving model

\n

With HDF5 :

\n
# saving model\njson_model = model_tt.model.to_json()\nopen('model_architecture.json', 'w').write(json_model)\n# saving weights\nmodel_tt.model.save_weights('model_weights.h5', overwrite=True)\n\n\n# loading model\nfrom keras.models import model_from_json\n\nmodel = model_from_json(open('model_architecture.json').read())\nmodel.load_weights('model_weights.h5')\n\n# dont forget to compile your model\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\n
\n

Edit 2 : full code example with iris dataset

\n
# Train model and make predictions\nimport numpy\nimport pandas\nfrom keras.models import Sequential, model_from_json\nfrom keras.layers import Dense\nfrom keras.utils import np_utils\nfrom sklearn import datasets\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder\n\n# fix random seed for reproducibility\nseed = 7\nnumpy.random.seed(seed)\n\n# load dataset\niris = datasets.load_iris()\nX, Y, labels = iris.data, iris.target, iris.target_names\nX = preprocessing.scale(X)\n\n# encode class values as integers\nencoder = LabelEncoder()\nencoder.fit(Y)\nencoded_Y = encoder.transform(Y)\n\n# convert integers to dummy variables (i.e. one hot encoded)\ny = np_utils.to_categorical(encoded_Y)\n\ndef build_model():\n    # create model\n    model = Sequential()\n    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))\n    model.add(Dense(3, init='normal', activation='sigmoid'))\n    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model\n\ndef save_model(model):\n    # saving model\n    json_model = model.to_json()\n    open('model_architecture.json', 'w').write(json_model)\n    # saving weights\n    model.save_weights('model_weights.h5', overwrite=True)\n\ndef load_model():\n    # loading model\n    model = model_from_json(open('model_architecture.json').read())\n    model.load_weights('model_weights.h5')\n    model.compile(loss='categorical_crossentropy', optimizer='adam')\n    return model\n\n\nX_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=seed)\n\n# build\nmodel = build_model()\nmodel.fit(X_train, Y_train, nb_epoch=200, batch_size=5, verbose=0)\n\n# save\nsave_model(model)\n\n# load\nmodel = load_model()\n\n# predictions\npredictions = model.predict_classes(X_test, verbose=0)\nprint(predictions)\n# reverse encoding\nfor pred in predictions:\n    print(labels[pred])\n
\n

Please note that I used Keras only, not the wrapper. It only adds some complexity to something simple. Also, the code is deliberately not refactored so you can have the whole picture.

\n

Also, you said you want to output 1 or 0. That is not possible in this dataset because you have 3 output dims and classes (Iris-setosa, Iris-versicolor, Iris-virginica). If you had only 2 classes, then your output dim and classes would be 0 or 1 using a sigmoid output function.

\n", + "system": "" + }, + { + "instruction": "What does initial_epoch in Keras mean?", + "input": "", + "output": "

Since in some of the optimizers some of their internal values (e.g. the learning rate) are set using the current epoch value, or you may even have (custom) callbacks that depend on the current epoch, the initial_epoch argument lets you specify the initial epoch to start from when training.

\n\n

As stated in the documentation, this is mostly useful when you have trained your model for some epochs, say 10, and then saved it, and now you want to load it and resume training for another 10 epochs without disrupting the state of epoch-dependent objects (e.g. the optimizer). So you would set initial_epoch=10 (i.e. we have trained the model for 10 epochs) and epochs=20 (not 10, since the total number of epochs to reach is 20) and then everything resumes as if you had initially trained the model for 20 epochs in one single training session.

\n\n
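Conceptually, initial_epoch just shifts where the training loop's epoch counter starts (an illustrative sketch, not Keras' actual loop):

```python
def epochs_to_run(initial_epoch=0, epochs=10):
    """Epoch indices a fit() call would iterate over."""
    return list(range(initial_epoch, epochs))

print(epochs_to_run())                             # fresh run: epochs 0..9
print(epochs_to_run(initial_epoch=10, epochs=20))  # resumed run: epochs 10..19
```

In both calls ten epochs are executed; the second just labels them 10 through 19 so that schedules and callbacks see the right epoch numbers.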

However, note that when using built-in optimizers of Keras you don't need to use initial_epoch, since they store and update their state internally (without considering the value of current epoch) and also when saving a model the state of the optimizer will be stored as well.

\n", + "system": "" + }, + { + "instruction": "Accessing validation data within a custom callback", + "input": "", + "output": "

You can iterate directly over self.validation_data to aggregate all the validation data at the end of each epoch. If you want to calculate precision, recall and F1 across the complete validation dataset:

\n\n
# Validation metrics callback: validation precision, recall and F1\n# Some of the code was adapted from https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2\nclass Metrics(callbacks.Callback):\n\n    def on_train_begin(self, logs={}):\n        self.val_f1s = []\n        self.val_recalls = []\n        self.val_precisions = []\n\n    def on_epoch_end(self, epoch, logs):\n        # 5.4.1 For each validation batch\n        for batch_index in range(0, len(self.validation_data)):\n            # 5.4.1.1 Get the batch target values\n            temp_targ = self.validation_data[batch_index][1]\n            # 5.4.1.2 Get the batch prediction values\n            temp_predict = (np.asarray(self.model.predict(\n                                self.validation_data[batch_index][0]))).round()\n            # 5.4.1.3 Append them to the corresponding output objects\n            if(batch_index == 0):\n                val_targ = temp_targ\n                val_predict = temp_predict\n            else:\n                val_targ = np.vstack((val_targ, temp_targ))\n                val_predict = np.vstack((val_predict, temp_predict))\n\n        val_f1 = round(f1_score(val_targ, val_predict), 4)\n        val_recall = round(recall_score(val_targ, val_predict), 4)\n        val_precis = round(precision_score(val_targ, val_predict), 4)\n\n        self.val_f1s.append(val_f1)\n        self.val_recalls.append(val_recall)\n        self.val_precisions.append(val_precis)\n\n        # Add custom metrics to the logs, so that we can use them with\n        # EarlyStop and csvLogger callbacks\n        logs[\"val_f1\"] = val_f1\n        logs[\"val_recall\"] = val_recall\n        logs[\"val_precis\"] = val_precis\n\n        print(\"\u2014 val_f1: {} \u2014 val_precis: {} \u2014 val_recall {}\".format(\n                 val_f1, val_precis, val_recall))\n        return\n\nvalid_metrics = Metrics()\n
\n\n

Then you can add valid_metrics to the callback argument:

\n\n
your_model.fit_generator(..., callbacks = [valid_metrics])\n
\n\n

Be sure to put it at the beginning of the callbacks in case you want other callbacks to use these measures.

\n", + "system": "" + }, + { + "instruction": "Defining model in keras (include_top = True)", + "input": "", + "output": "

Most of these models are a series of convolutional layers followed by one or a few dense (or fully connected) layers.

\n\n

Include_top lets you select if you want the final dense layers or not.

\n\n\n\n

About the weights:

\n\n\n\n

Because of this, removing the final dense layers allows you to define the input size (see in documentation). (And the output size will increase/decrease accordingly).

\n\n

But you lose the interpretation/classification layers. (You can add your own, depending on your task)

\n\n
\n\n

Extra info on Poolings and Flatten

\n\n

Global poolings:

\n\n

After the last convolutional layers, your outputs are still like images. They have shape (images, X, Y, channels), where X and Y are spatial dimensions of a 2D image.

\n\n

When your model has GlobalMaxPooling2D or GlobalAveragePooling2D, it will eliminate the spatial dimensions. With Max it will take only the highest value pixel for each channel. With Average it will take the mean value of each channel. The result will be just (images, channels), without spatial dimensions anymore.

\n\n\n\n

Flatten

\n\n

With flatten, the spatial dimensions will not be lost, but they will be transformed into features. From (images, X, Y, channels) to (images, X*Y*channels).

\n\n
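The shape arithmetic for the global poolings and Flatten, in NumPy (made-up feature-map sizes):

```python
import numpy as np

feature_maps = np.zeros((8, 7, 7, 512))   # (images, X, Y, channels)

# GlobalAveragePooling2D: average away the spatial axes
pooled = feature_maps.mean(axis=(1, 2))
print(pooled.shape)   # (8, 512) - spatial dimensions are gone

# Flatten: spatial positions become features instead
flat = feature_maps.reshape(feature_maps.shape[0], -1)
print(flat.shape)     # (8, 25088), i.e. (8, 7*7*512)
```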

This will require fixed input shapes, because X and Y must be defined, and if you add Dense layers after the flatten, the Dense layer will need a fixed number of features.

\n", + "system": "" + }, + { + "instruction": "Keras: model.evaluate vs model.predict accuracy difference in multi-class NLP task", + "input": "", + "output": "

I have found the problem. metrics=['accuracy'] chooses the accuracy metric automatically from the cost function. So using binary_crossentropy shows binary accuracy, not categorical accuracy. Using categorical_crossentropy automatically switches to categorical accuracy, and now it is the same as calculated manually using model1.predict(). Yu-Yang was right to point out the cost function and activation function for the multi-class problem.

\n\n
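The difference can be reproduced with plain NumPy (made-up predictions for a 3-class problem; thresholding at 0.5 is a simplification of Keras' rounding):

```python
import numpy as np

y_true = np.array([[1, 0, 0],
                   [0, 1, 0]])                 # one-hot targets
y_pred = np.array([[0.4, 0.3, 0.3],
                   [0.2, 0.5, 0.3]])

# categorical accuracy: does the argmax match?
cat_acc = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))

# binary accuracy: threshold every entry and compare element-wise
bin_acc = np.mean((y_pred > 0.5).astype(int) == y_true)

print(cat_acc)  # 1.0 - every sample's top class is correct
print(bin_acc)  # ~0.67 - the element-wise view tells a different story
```

Both samples are classified correctly under the categorical view, yet binary accuracy reports only ~67% because no individual probability clears the 0.5 threshold.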

P.S: One can get both categorical and binary accuracy by using metrics=['binary_accuracy', 'categorical_accuracy']

\n", + "system": "" + }, + { + "instruction": "Keras: "RuntimeError: Failed to import pydot." after installing graphviz and pydot", + "input": "", + "output": "

The error message is a bit misleading, as you can see here. The problem is that graphviz is not installed.

\n\n

But you mention that graphviz was installed using pip. This is also misleading, since that graphviz package is just a python wrapper, and the graphviz binaries have to be installed separately for the python wrapper to work.

\n", + "system": "" + }, + { + "instruction": "keras: what is the difference between model.predict and model.predict_proba", + "input": "", + "output": "

predict

\n\n
predict(self, x, batch_size=32, verbose=0)\n
\n\n

Generates output predictions for the input samples, processing the samples in a batched way.

\n\n

Arguments

\n\n
x: the input data, as a Numpy array.\nbatch_size: integer.\nverbose: verbosity mode, 0 or 1.\n
\n\n

Returns

\n\n
A Numpy array of predictions.\n
\n\n

predict_proba

\n\n
predict_proba(self, x, batch_size=32, verbose=1)\n
\n\n

Generates class probability predictions for the input samples batch by batch.

\n\n

Arguments

\n\n
x: input data, as a Numpy array or list of Numpy arrays (if the model has multiple inputs).\nbatch_size: integer.\nverbose: verbosity mode, 0 or 1.\n
\n\n

Returns

\n\n
A Numpy array of probability predictions.\n
\n\n

Edit: In recent versions of keras, predict and predict_proba are the same, i.e. both give probabilities. To get the class labels use predict_classes. The documentation is not updated. (adapted from Avijit Dasgupta's comment)

\n", + "system": "" + }, + { + "instruction": "I am having trouble with this error (-215:Assertion failed) !ssize.empty() in function 'resize' in OpenCV", + "input": "", + "output": "

I was supposed to add a try/except block so my code could bypass "ugly" images:

\n
try:\n    path=os.path.join(mypath, n)\n    img=cv2.imread(path, cv2.IMREAD_GRAYSCALE)\n    img=cv2.resize(img, (img_rows, img_cols))\n\nexcept Exception as e:\n    print(str(e))\n
\n

cv2.resize() was where it was supposed to catch the error since it couldn't resize a "broken" image.

\n", + "system": "" + }, + { + "instruction": "What is the difference between the terms accuracy and validation accuracy", + "input": "", + "output": "

When training a machine learning model, one of the main things that you want to avoid would be overfitting. This is when your model fits the training data well, but it isn't able to generalize and make accurate predictions for data it hasn't seen before.

\n

To find out if their model is overfitting, data scientists use a technique called cross-validation, where they split their data into two parts - the training set, and the validation set. The training set is used to train the model, while the validation set is only used to evaluate the model's performance.

\n
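A minimal hold-out split with scikit-learn (the sizes here are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # 50 samples, 2 features
y = np.arange(50)

# keep 20% of the data aside purely for validation
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)

print(len(X_train), len(X_val))  # 40 10
```

The model never trains on `X_val`, which is what makes `val_loss`/`val_acc` an honest estimate of performance on unseen data.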

Metrics on the training set let you see how your model is progressing in terms of its training, but it's metrics on the validation set that let you get a measure of the quality of your model - how well it's able to make new predictions based on data it hasn't seen before.

\n

With this in mind, loss and acc are measures of loss and accuracy on the training set, while val_loss and val_acc are measures of loss and accuracy on the validation set.

\n

At the moment your model has an accuracy of ~86% on the training set and ~84% on the validation set. This means that you can expect your model to perform with ~84% accuracy on new data.

\n

I notice that as your epochs go from 23 to 25, your acc metric increases, while your val_acc metric decreases. This means that your model is fitting the training set better, but is losing its ability to predict on new data, indicating that your model is starting to fit on noise and is beginning to overfit.

\n

So that is a quick explanation on validation metrics and how to interpret them.

\n", + "system": "" + }, + { + "instruction": "How to setup 1D-Convolution and LSTM in Keras", + "input": "", + "output": "

If you want to predict one value for each timestep, two slightly different solutions come to my mind:

\n\n

1) Remove the MaxPooling1D layer, add the padding='same' argument to the Conv1D layer and pass return_sequences=True to the LSTM so that the LSTM returns the output of each timestep:

\n\n
from keras.layers import Input, Dense, LSTM, MaxPooling1D, Conv1D\nfrom keras.models import Model\n\ninput_layer = Input(shape=(400, 16))\nconv1 = Conv1D(filters=32,\n               kernel_size=8,\n               strides=1,\n               activation='relu',\n               padding='same')(input_layer)\nlstm1 = LSTM(32, return_sequences=True)(conv1)\noutput_layer = Dense(1, activation='sigmoid')(lstm1)\nmodel = Model(inputs=input_layer, outputs=output_layer)\n\nmodel.summary()\n
\n\n

The model summary would be:

\n\n
Layer (type)                 Output Shape              Param #   \n=================================================================\ninput_4 (InputLayer)         (None, 400, 16)           0         \n_________________________________________________________________\nconv1d_4 (Conv1D)            (None, 400, 32)           4128      \n_________________________________________________________________\nlstm_4 (LSTM)                (None, 400, 32)           8320      \n_________________________________________________________________\ndense_4 (Dense)              (None, 400, 1)            33        \n=================================================================\nTotal params: 12,481\nTrainable params: 12,481\nNon-trainable params: 0\n_________________________________________________________________\n
\n\n

2) Just change the number of units in the Dense layer to 400 and reshape y to (n_samples, n_timesteps):

\n\n
from keras.layers import Input, Dense, LSTM, MaxPooling1D, Conv1D\nfrom keras.models import Model\n\ninput_layer = Input(shape=(400, 16))\nconv1 = Conv1D(filters=32,\n               kernel_size=8,\n               strides=1,\n               activation='relu')(input_layer)\npool1 = MaxPooling1D(pool_size=4)(conv1)\nlstm1 = LSTM(32)(pool1)\noutput_layer = Dense(400, activation='sigmoid')(lstm1)\nmodel = Model(inputs=input_layer, outputs=output_layer)\n\nmodel.summary()\n
\n\n

The model summary would be:

\n\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ninput_6 (InputLayer)         (None, 400, 16)           0         \n_________________________________________________________________\nconv1d_6 (Conv1D)            (None, 393, 32)           4128      \n_________________________________________________________________\nmax_pooling1d_5 (MaxPooling1 (None, 98, 32)            0         \n_________________________________________________________________\nlstm_6 (LSTM)                (None, 32)                8320      \n_________________________________________________________________\ndense_6 (Dense)              (None, 400)               13200     \n=================================================================\nTotal params: 25,648\nTrainable params: 25,648\nNon-trainable params: 0\n_________________________________________________________________\n
\n\n

Don't forget that in both cases you must use 'binary_crossentropy' (not 'categorical_crossentropy') as the loss function. I expect this solution to have a lower accuracy than solution #1, but you should experiment with both and tune the parameters, since the best choice depends entirely on the specific problem you are trying to solve and the nature of the data you have.

\n\n
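For solution #2 the targets must match the new per-sample output shape as well; a minimal NumPy sketch with hypothetical data:

```python
import numpy as np

# Hypothetical per-timestep labels: 8 samples, 400 timesteps, 1 binary label each
y = np.random.randint(0, 2, size=(8, 400, 1))

# Dense(400) with 'binary_crossentropy' expects targets of shape (n_samples, 400)
y_reshaped = y.reshape(y.shape[0], y.shape[1])
print(y_reshaped.shape)  # (8, 400)
```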
\n\n

Update:

\n\n

You asked for a convolution layer that only covers one timestep and k adjacent features. Yes, you can do it using a Conv2D layer:

\n\n
# first add a channel axis to your data\nX = np.expand_dims(X, axis=-1)   # now X has a shape of (n_samples, n_timesteps, n_feats, 1)\n\n# adjust input layer shape ...\nconv2 = Conv2D(n_filters, (1, k), ...)   # covers one timestep and k features\n# adjust other layers according to the output of convolution layer...\n</code></pre>
\n\n

Although I have no idea why you are doing this, to use the output of the convolution layer (which has shape <code>(?, n_timesteps, n_features, n_filters)</code>), one solution is to use an LSTM layer wrapped inside a <code>TimeDistributed</code> layer; another is to flatten the last two axes.

\n", + "system": "" + }, + { + "instruction": "How to use fit_generator with multiple inputs", + "input": "", + "output": "

Try this generator:

\n\n
def generator_two_img(X1, X2, y, batch_size):\n    genX1 = gen.flow(X1, y,  batch_size=batch_size, seed=1)\n    genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        yield [X1i[0], X2i[0]], X1i[1]\n
\n

Generator for 3 inputs:

\n
def generator_three_img(X1, X2, X3, y, batch_size):\n    genX1 = gen.flow(X1, y,  batch_size=batch_size, seed=1)\n    genX2 = gen.flow(X2, y, batch_size=batch_size, seed=1)\n    genX3 = gen.flow(X3, y, batch_size=batch_size, seed=1)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        X3i = genX3.next()\n        yield [X1i[0], X2i[0], X3i[0]], X1i[1]\n
\n

EDIT (added a generator that yields an image, a numpy array, and the target)

\n
#X1 is an image, y is the target, X2 is a numpy array - other data input        \ndef gen_flow_for_two_inputs(X1, X2, y):\n    genX1 = gen.flow(X1,y,  batch_size=batch_size,seed=666)\n    genX2 = gen.flow(X1,X2, batch_size=batch_size,seed=666)\n    while True:\n        X1i = genX1.next()\n        X2i = genX2.next()\n        #Assert arrays are equal - this was for peace of mind, but slows down training\n        #np.testing.assert_array_equal(X1i[0],X2i[0])\n        yield [X1i[0], X2i[1]], X1i[1]\n</code></pre>
\n", + "system": "" + }, + { + "instruction": "How to output per-class accuracy in Keras\uff1f", + "input": "", + "output": "

Precision & recall are more useful measures for multi-class classification (see definitions). Following the Keras MNIST CNN example (10-class classification), you can get the per-class measures using classification_report from sklearn.metrics:

\n\n\n\n
from sklearn.metrics import classification_report\nimport numpy as np\n\nY_test = np.argmax(y_test, axis=1) # Convert one-hot to index\ny_pred = model.predict_classes(x_test)\nprint(classification_report(Y_test, y_pred))\n
\n\n

Here is the result:

\n\n
         precision    recall  f1-score   support\n\n      0       0.99      1.00      1.00       980\n      1       0.99      0.99      0.99      1135\n      2       1.00      0.99      0.99      1032\n      3       0.99      0.99      0.99      1010\n      4       0.98      1.00      0.99       982\n      5       0.99      0.99      0.99       892\n      6       1.00      0.99      0.99       958\n      7       0.97      1.00      0.99      1028\n      8       0.99      0.99      0.99       974\n      9       0.99      0.98      0.99      1009\n\navg / total   0.99      0.99      0.99     10000\n
\n", + "system": "" + }, + { + "instruction": "How to train the network only on one output when there are multiple outputs?", + "input": "", + "output": "
\n

I only want to train the network on output y2.

\n
\n\n

Based on the Keras functional API guide, you can achieve that with

\n\n
model1 = Model(input=x, output=[y2,y3])   \nmodel1.compile(optimizer='sgd', loss=custom_loss_function,\n                  loss_weights=[1., 0.0])\n
\n\n
\n

What is the shape/structure of the y_pred and y_true argument in loss\n function when multiple outputs are used? Can I access them as above?\n Is it y_pred[0] or y_pred[:,0]

\n
\n\n

In Keras multi-output models, the loss function is applied to each output separately. In pseudo-code:

\n\n
loss = sum( [ loss_function( output_true, output_pred ) for ( output_true, output_pred ) in zip( outputs_data, outputs_model ) ] )\n
\n\n
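This pseudo-code can be emulated with plain NumPy to see how loss_weights=[1., 0.0] makes only the first output matter (made-up data, not Keras internals):

```python
import numpy as np

def mse(y_true, y_pred):
    # Per-output mean squared error
    return np.mean((y_true - y_pred) ** 2)

# Hypothetical targets/predictions for outputs y2 and y3
y2_true, y2_pred = np.array([1.0, 0.0]), np.array([0.9, 0.2])
y3_true, y3_pred = np.array([0.5, 0.5]), np.array([0.1, 0.8])
loss_weights = [1.0, 0.0]

total = sum(w * mse(t, p)
            for w, (t, p) in zip(loss_weights,
                                 [(y2_true, y2_pred), (y3_true, y3_pred)]))

# With weight 0.0 the second output's loss cannot influence the total
assert abs(total - mse(y2_true, y2_pred)) < 1e-12
```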

A loss function that operates jointly on multiple outputs does not seem to be available. One could probably achieve that by incorporating the loss computation as a layer of the network.

\n", + "system": "" + }, + { + "instruction": "Keras error : Expected to see 1 array", + "input": "", + "output": "

Your error comes from the fact that your <code>X</code> for some reason wasn't converted to a <code>numpy.array</code>. In that case <code>X</code> is treated as a list of rows, which is the reason behind your error message (Keras expected one array instead of a list with as many elements as there are rows). Convert them like this:

\n\n
X = numpy.array(X)\nY = numpy.array(Y)\n
\n\n

I would check the data loading process, because something might have gone wrong there.

\n\n

UPDATE:

\n\n

As was mentioned in a comment, <code>input_shape</code> needs to be changed to <code>input_dim</code>.

\n\n

UPDATE 2:

\n\n

In order to keep <code>input_shape</code>, one should change it to <code>input_shape=(200,)</code>.

\n", + "system": "" + }, + { + "instruction": "TensorFlow/Keras multi-threaded model fitting", + "input": "", + "output": "

TensorFlow graphs are not thread-safe (see https://www.tensorflow.org/api_docs/python/tf/Graph), and when you create a new TensorFlow session, it uses the default graph by default.

\n\n

You can get around this by creating a new session with a new graph in your parallelized function and constructing your keras model there.

\n\n

Here is some code that creates and fits a model on each available gpu in parallel:

\n\n
import concurrent.futures\nimport numpy as np\n\nimport keras.backend as K\nfrom keras.layers import Dense\nfrom keras.models import Sequential\n\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\n\ndef get_available_gpus():\n    local_device_protos = device_lib.list_local_devices()\n    return [x.name for x in local_device_protos if x.device_type == 'GPU']\n\nxdata = np.random.randn(100, 8)\nytrue = np.random.randint(0, 2, 100)\n\ndef fit(gpu):\n    with tf.Session(graph=tf.Graph()) as sess:\n        K.set_session(sess)\n        with tf.device(gpu):\n            model = Sequential()\n            model.add(Dense(12, input_dim=8, activation='relu'))\n            model.add(Dense(8, activation='relu'))\n            model.add(Dense(1, activation='sigmoid'))\n\n            model.compile(loss='binary_crossentropy', optimizer='adam')\n            model.fit(xdata, ytrue, verbose=0)\n\n            return model.evaluate(xdata, ytrue, verbose=0)\n\ngpus = get_available_gpus()\nwith concurrent.futures.ThreadPoolExecutor(len(gpus)) as executor:\n    results = [x for x in executor.map(fit, gpus)]\nprint('results: ', results)\n
\n", + "system": "" + }, + { + "instruction": "TypeError: Unable to convert function return value to a Python type! The signature was () -> handle", + "input": "", + "output": "

Running pip3 install numpy --upgrade solved this issue for me.

\n", + "system": "" + }, + { + "instruction": "Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?", + "input": "", + "output": "

Categorical Cross-Entropy loss or Softmax Loss is a Softmax activation plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for multi-class classification.

\n

What you want is multi-label classification, so you will use Binary Cross-Entropy Loss or Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That\u2019s why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class.

\n

Now for handling class imbalance, you can use weighted Sigmoid Cross-Entropy loss. So you will penalize for wrong prediction based on the number/ratio of positive examples.

\n", + "system": "" + }, + { + "instruction": "Save model every 10 epochs tensorflow.keras v2", + "input": "", + "output": "

Using tf.keras.callbacks.ModelCheckpoint use save_freq='epoch' and pass an extra argument period=10.

\n\n

Although this behaviour is not explained in the official docs, that is the way to do it (the docs note that you can pass <code>period</code>, but don't explain what it does).

\n", + "system": "" + }, + { + "instruction": "Can't import tensorflow.keras in VS Code", + "input": "", + "output": "

The imports that were causing the issue for me:

\n
from tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Dense\n
\n

The way I resolved it:

\n
from tensorflow import keras\nfrom keras.models import Model\nfrom keras.layers import Dense\n
\n", + "system": "" + }, + { + "instruction": "The added layer must be an instance of class Layer. Found: <tensorflow.python.keras.engine.input_layer.InputLayer>", + "input": "", + "output": "

This won't work because a tensorflow.keras layer is getting added to a keras Model.

\n\n
vgg_model = tensorflow.keras.applications.vgg16.VGG16()\nmodel = keras.Sequential()\nmodel.add(vgg_model.layers[0])\n
\n\n

Instantiate tensorflow.keras.Sequential(). This will work.

\n\n
model = tensorflow.keras.Sequential()\nmodel.add(vgg_model.layers[0])\n
\n", + "system": "" + }, + { + "instruction": "Keras Sequential model with multiple inputs", + "input": "", + "output": "

To solve this problem you have two options.

\n\n

1. Using a sequential model

\n\n

You can concatenate both arrays into one before feeding it to the network. Let's assume the two arrays each have a shape of <code>(Number_data_points, )</code>; the arrays can then be merged using the <code>numpy.stack</code> method.

\n\n
merged_array = np.stack([array_1, array_2], axis=1)\n\n
\n\n
model0 = keras.Sequential([\nkeras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True),\nkeras.layers.Dense(1, activation=keras.activations.relu, use_bias=True),\n])\n\nmodel0.fit(merged_array,output, batch_size=16, epochs=100)\n\n
\n\n
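A quick check of the merged shape (toy arrays with hypothetical values):

```python
import numpy as np

array_1 = np.array([1.0, 2.0, 3.0, 4.0])      # shape (4,)
array_2 = np.array([10.0, 20.0, 30.0, 40.0])  # shape (4,)

merged_array = np.stack([array_1, array_2], axis=1)
print(merged_array.shape)  # (4, 2): one row per data point, one column per input
print(merged_array[0])     # first sample holds one value from each array
```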

2. Using Functional API.

\n\n

This is the recommended way when there are multiple inputs to the model.

\n\n
input1 = keras.layers.Input(shape=(1, ))\ninput2 = keras.layers.Input(shape=(1,))\nmerged = keras.layers.Concatenate(axis=1)([input1, input2])\ndense1 = keras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True)(merged)\noutput = keras.layers.Dense(1, activation=keras.activations.relu, use_bias=True)(dense1)\nmodel10 = keras.models.Model(inputs=[input1, input2], outputs=output)\n</code></pre>
\n\n

Now you can fit the model using the two arrays, as you were trying to:

\n\n
model10.fit([array_1, array_2], output, batch_size=16, epochs=100)\n\n</code></pre>
\n", + "system": "" + }, + { + "instruction": "Tensor is not an element of this graph; deploying Keras model", + "input": "", + "output": "

Flask uses multiple threads. The problem you are running into occurs because the TensorFlow model is not loaded and used in the same thread. One workaround is to force TensorFlow to use the global default graph.

\n\n

Add this after you load your model

\n\n
global graph\ngraph = tf.get_default_graph() \n
\n\n

And inside your predict

\n\n
with graph.as_default():\n    y_hat = keras_model_loaded.predict(predict_request, batch_size=1, verbose=1)\n
\n", + "system": "" + }, + { + "instruction": "What function defines accuracy in Keras when the loss is mean squared error (MSE)?", + "input": "", + "output": "

There are at least two separate issues with your question.

\n

The first one should be clear by now from the comments by Dr. Snoopy and the other answer: accuracy is meaningless in a regression problem, such as yours; see also the comment by patyork in this Keras thread. For good or bad, the fact is that Keras will not "protect" you or any other user from putting not-meaningful requests in your code, i.e. you will not get any error, or even a warning, that you are attempting something that does not make sense, such as requesting the accuracy in a regression setting.

\n

Having clarified that, the other issue is:

\n

Since Keras does indeed return an "accuracy", even in a regression setting, what exactly is it and how is it calculated?

\n

To shed some light here, let's turn to a public dataset (since you do not provide any details about your data), namely the Boston house price dataset (saved locally as <code>housing.csv</code>), and run a simple experiment as follows:

\n\n
import numpy as np\nimport pandas\nimport keras\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\n# load dataset\ndataframe = pandas.read_csv("housing.csv", delim_whitespace=True, header=None)\ndataset = dataframe.values\n# split into input (X) and output (Y) variables\nX = dataset[:,0:13]\nY = dataset[:,13]\n\nmodel = Sequential()\nmodel.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(1, kernel_initializer='normal'))\n# Compile model asking for accuracy, too:\nmodel.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])\n\nmodel.fit(X, Y,\n     batch_size=5,\n     epochs=100,\n     verbose=1)\n
\n

As in your case, the model fitting history (not shown here) shows a decreasing loss, and an accuracy roughly increasing. Let's evaluate now the model performance in the same training set, using the appropriate Keras built-in function:

\n
score = model.evaluate(X, Y, verbose=0)\nscore\n# [16.863721372581754, 0.013833992168483997]\n
\n

The exact contents of the score array depend on what exactly we have requested during model compilation; in our case here, the first element is the loss (MSE), and the second one is the "accuracy".

\n

At this point, let us have a look at the definition of Keras binary_accuracy in the metrics.py file:

\n
def binary_accuracy(y_true, y_pred):\n    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)\n
\n

So, after Keras has generated the predictions y_pred, it first rounds them, and then checks to see how many of them are equal to the true labels y_true, before getting the mean.

\n

Let's replicate this operation using plain Python & Numpy code in our case, where the true labels are Y:

\n
y_pred = model.predict(X)\nl = len(Y)\nacc = sum([np.round(y_pred[i])==Y[i] for i in range(l)])/l\nacc\n# array([0.01383399])\n
\n

Well, bingo! This is actually the same value returned by score[1] above...

\n

To make a long story short: since you (erroneously) request metrics=['accuracy'] in your model compilation, Keras will do its best to satisfy you, and will return some "accuracy" indeed, calculated as shown above, despite this being completely meaningless in your setting.

\n
\n

There are quite a few settings where Keras, under the hood, performs rather meaningless operations without giving any hint or warning to the user; two of them I have happened to encounter are:

\n\n", + "system": "" + }, + { + "instruction": "Why should we normalize data for deep learning in Keras?", + "input": "", + "output": "

Normalization is a generic concept not limited only to deep learning or to Keras.

\n\n

Why to normalize?

\n\n

Let me take a simple logistic regression example, which will be easy to understand and to explain normalization with. \nAssume we are trying to predict whether a customer should be given a loan or not. Among the many available independent variables, let's just consider Age and Income. \nLet the equation be of the form:

\n\n
Y = weight_1 * (Age) + weight_2 * (Income) + some_constant\n
\n\n

Just for the sake of explanation, let Age usually be in the range [0, 120] and assume Income is in the range [10000, 100000]. The scales of Age and Income are very different. If you use them as is, the weights weight_1 and weight_2 may end up biased: weight_2 might give far more importance to Income as a feature than weight_1 gives to Age. To bring them to a common scale, we can normalize them. For example, we can bring all ages into the range [0, 1] and all incomes into the range [0, 1]. Now we can say that Age and Income are given equal importance as features.

\n\n
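The Age/Income example can be sketched with a simple min-max rescale in NumPy (made-up customer data):

```python
import numpy as np

# Made-up customer data; Age and Income live on very different scales
age = np.array([25.0, 40.0, 70.0, 18.0])
income = np.array([25000.0, 60000.0, 90000.0, 15000.0])

def min_max(x):
    # Rescale a feature into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

age_n, income_n = min_max(age), min_max(income)

# Both features now span [0, 1], so neither dominates by scale alone
assert age_n.min() == 0.0 and age_n.max() == 1.0
assert income_n.min() == 0.0 and income_n.max() == 1.0
```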

Does Normalization always increase the accuracy?

\n\n

Apparently not. It is not necessary that normalization always increases accuracy. It may or may not; you never really know until you implement it. It also depends on at which stage of your training you apply normalization, whether you apply it after every activation, etc.

\n\n

As the range of the feature values gets narrowed down to a particular range because of normalization, it's easier to perform computations over that smaller range, so the model usually trains a bit faster.

\n\n

Regarding the number of epochs, accuracy usually increases with the number of epochs, provided that your model doesn't start over-fitting.

\n\n
\n\n

A very good explanation for Normalization/Standardization and related terms is here.

\n", + "system": "" + }, + { + "instruction": "What is the difference between performing upsampling together with strided transpose convolution and transpose convolution with stride 1 only?", + "input": "", + "output": "

Here and here you can find a really nice explanation of how transposed convolutions work. To sum up both of these approaches:

\n\n
    \n
  1. In your first approach, you are first upsampling your feature map:

    \n\n
    [[1, 2], [3, 4]] -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]\n
    \n\n

    and then you apply a classical convolution (as Conv2DTranspose with stride=1 and padding='same' is equivalent to Conv2D).

  2. In your second approach, you are first un(max)pooling your feature map:

    \n\n
    [[1, 2], [3, 4]] -> [[1, 0, 2, 0], [0, 0, 0, 0], [3, 0, 4, 0], [0, 0, 0, 0]]\n
    \n\n

    and then apply a classical convolution with <code>filter_size</code>, <code>filters</code>, etc.

    \n\n


\n
\n\n
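The two operations above can be reproduced in NumPy to make the difference concrete (a small sketch, not Keras code):

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# 1) UpSampling2D-style nearest-neighbour upsampling: repeat rows and columns
upsampled = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# 2) Max-unpooling-style zero insertion: values land on a strided grid
unpooled = np.zeros((4, 4), dtype=x.dtype)
unpooled[::2, ::2] = x

print(upsampled)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
print(unpooled)
# [[1 0 2 0]
#  [0 0 0 0]
#  [3 0 4 0]
#  [0 0 0 0]]
```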

A fun fact is that, although these approaches are different, they share something in common. Transposed convolution is meant to approximate the gradient of convolution, so the first approach approximates the gradient of sum pooling whereas the second approximates the gradient of max pooling. This makes the first approach produce slightly smoother results.

\n\n

Other reasons why you might see the first approach are:

\n\n\n", + "system": "" + }, + { + "instruction": "How can I assign a class_weight in Keras in a simple way?", + "input": "", + "output": "

The class_weight parameter of the fit() function is a dictionary mapping classes to a weight value.

\n\n

Let's say you have 500 samples of class 0 and 1500 samples of class 1; then you feed in class_weight = {0: 3, 1: 1}. That gives class 0 three times the weight of class 1.

\n\n

train_generator.classes gives you the proper class names for your weighting.

\n\n

If you want to calculate this programmatically you can use scikit-learn's <code>sklearn.utils.compute_class_weight()</code>.

\n\n

The function looks at the distribution of labels and produces weights to equally penalize under or over-represented classes in the training set.

\n\n
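The 'balanced' heuristic that compute_class_weight uses can be reproduced by hand as weight = n_samples / (n_classes * count); for 500 vs 1500 samples it yields the same 3:1 ratio as the dictionary above (plain Python sketch with toy labels):

```python
from collections import Counter

labels = [0] * 500 + [1] * 1500           # imbalanced toy dataset
counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)

# 'balanced' heuristic: weight = n_samples / (n_classes * count)
class_weight = {c: n_samples / (n_classes * cnt) for c, cnt in counts.items()}
print(class_weight)  # {0: 2.0, 1: 0.666...}: the same 3:1 ratio as {0: 3, 1: 1}
```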

See also this useful thread here: https://github.com/fchollet/keras/issues/1875

\n\n

And this thread might also be of help: Is it possible to automatically infer the class_weight from flow_from_directory in Keras?

\n", + "system": "" + }, + { + "instruction": "How can I convert a trained Tensorflow model to Keras?", + "input": "", + "output": "

I think a Keras callback is also a solution.

\n\n

The ckpt file can be saved by TF with:

\n\n
saver = tf.train.Saver()\nsaver.save(sess, checkpoint_name)\n
\n\n

and to load checkpoint in Keras, you need a callback class as follow:

\n\n
class RestoreCkptCallback(keras.callbacks.Callback):\n    def __init__(self, pretrained_file):\n        self.pretrained_file = pretrained_file\n        self.sess = keras.backend.get_session()\n        self.saver = tf.train.Saver()\n    def on_train_begin(self, logs=None):\n        if self.pretrained_file:\n            self.saver.restore(self.sess, self.pretrained_file)\n            print('load weights: OK.')\n</code></pre>
\n\n

Then in your keras script:

\n\n
 model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n restore_ckpt_callback = RestoreCkptCallback(pretrained_file='./XXXX.ckpt') \n model.fit(x_train, y_train, batch_size=128, epochs=20, callbacks=[restore_ckpt_callback])\n</code></pre>
\n\n

That should work fine. \nIt is easy to implement, and I hope it helps.

\n", + "system": "" + }, + { + "instruction": "Make predictions using a tensorflow graph from a keras model", + "input": "", + "output": "

@frankyjuang linked me to here

\n\n

https://github.com/amir-abdi/keras_to_tensorflow

\n\n

and combining this with code from

\n\n

https://github.com/metaflow-ai/blog/blob/master/tf-freeze/load.py

\n\n

and

\n\n

https://github.com/tensorflow/tensorflow/issues/675

\n\n

I have found a solution to both predicting using a tf graph and creating the jacobian function:

\n\n
import tensorflow as tf\nimport numpy as np\n\n# Create function to convert saved keras model to tensorflow graph\ndef convert_to_pb(weight_file,input_fld='',output_fld=''):\n\n    import os\n    import os.path as osp\n    from tensorflow.python.framework import graph_util\n    from tensorflow.python.framework import graph_io\n    from keras.models import load_model\n    from keras import backend as K\n\n\n    # weight_file is a .h5 keras model file\n    output_node_names_of_input_network = [\"pred0\"] \n    output_node_names_of_final_network = 'output_node'\n\n    # change filename to a .pb tensorflow file\n    output_graph_name = weight_file[:-2]+'pb'\n    weight_file_path = osp.join(input_fld, weight_file)\n\n    net_model = load_model(weight_file_path)\n\n    num_output = len(output_node_names_of_input_network)\n    pred = [None]*num_output\n    pred_node_names = [None]*num_output\n\n    for i in range(num_output):\n        pred_node_names[i] = output_node_names_of_final_network+str(i)\n        pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])\n\n    sess = K.get_session()\n\n    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)\n    graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)\n    print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))\n\n    return output_fld+output_graph_name\n
\n\n

Call:

\n\n
tf_model_path = convert_to_pb('model_file.h5','/model_dir/','/model_dir/')\n
\n\n

Create function to load the tf model as a graph:

\n\n
def load_graph(frozen_graph_filename):\n    # We load the protobuf file from the disk and parse it to retrieve the \n    # unserialized graph_def\n    with tf.gfile.GFile(frozen_graph_filename, \"rb\") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n\n    # Then, we can use again a convenient built-in function to import a graph_def into the \n    # current default Graph\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(\n            graph_def, \n            input_map=None, \n            return_elements=None, \n            name=\"prefix\", \n            op_dict=None, \n            producer_op_list=None\n        )\n\n    input_name = graph.get_operations()[0].name+':0'\n    output_name = graph.get_operations()[-1].name+':0'\n\n    return graph, input_name, output_name\n
\n\n

Create a function to make model predictions using the tf graph

\n\n
def predict(model_path, input_data):\n    # load tf graph\n    tf_model,tf_input,tf_output = load_graph(model_path)\n\n    # Create tensors for model input and output\n    x = tf_model.get_tensor_by_name(tf_input)\n    y = tf_model.get_tensor_by_name(tf_output) \n\n    # Number of model outputs\n    num_outputs = y.shape.as_list()[0]\n    predictions = np.zeros((input_data.shape[0],num_outputs))\n    for i in range(input_data.shape[0]):        \n        with tf.Session(graph=tf_model) as sess:\n            y_out = sess.run(y, feed_dict={x: input_data[i:i+1]})\n            predictions[i] = y_out\n\n    return predictions\n
\n\n

Make predictions:

\n\n
tf_predictions = predict(tf_model_path,test_data)\n
\n\n

Jacobian function:

\n\n
def compute_jacobian(model_path,input_data):\n\n    tf_model,tf_input,tf_output = load_graph(model_path)    \n\n    x = tf_model.get_tensor_by_name(tf_input)\n    y = tf_model.get_tensor_by_name(tf_output)\n    y_list = tf.unstack(y)\n    num_outputs = y.shape.as_list()[0]\n    jacobian = np.zeros((num_outputs,input_data.shape[0],input_data.shape[1]))\n    for i in range(input_data.shape[0]):\n        with tf.Session(graph=tf_model) as sess:\n            y_out = sess.run([tf.gradients(y_, x)[0] for y_ in y_list], feed_dict={x: input_data[i:i+1]})\n            jac_temp = np.asarray(y_out)\n        jacobian[:,i:i+1,:]=jac_temp[:,:,:,0]\n    return jacobian\n
\n\n

Compute Jacobian Matrix:

\n\n
jacobians = compute_jacobian(tf_model_path,test_data)\n
\n", + "system": "" + }, + { + "instruction": "How to display custom images in TensorBoard using Keras?", + "input": "", + "output": "

So, the following solution works well for me:

\n\n
import tensorflow as tf\n\ndef make_image(tensor):\n    \"\"\"\n    Convert an numpy representation image to Image protobuf.\n    Copied from https://github.com/lanpa/tensorboard-pytorch/\n    \"\"\"\n    from PIL import Image\n    height, width, channel = tensor.shape\n    image = Image.fromarray(tensor)\n    import io\n    output = io.BytesIO()\n    image.save(output, format='PNG')\n    image_string = output.getvalue()\n    output.close()\n    return tf.Summary.Image(height=height,\n                         width=width,\n                         colorspace=channel,\n                         encoded_image_string=image_string)\n\nclass TensorBoardImage(keras.callbacks.Callback):\n    def __init__(self, tag):\n        super().__init__() \n        self.tag = tag\n\n    def on_epoch_end(self, epoch, logs={}):\n        # Load image\n        img = data.astronaut()\n        # Do something to the image\n        img = (255 * skimage.util.random_noise(img)).astype('uint8')\n\n        image = make_image(img)\n        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])\n        writer = tf.summary.FileWriter('./logs')\n        writer.add_summary(summary, epoch)\n        writer.close()\n\n        return\n\ntbi_callback = TensorBoardImage('Image Example')\n
\n\n

Just pass the callback to fit or fit_generator.

\n\n

Note that you can also run some operations using the model inside the callback. For example, you may run the model on some images to check its performance.

\n\n


\n", + "system": "" + }, + { + "instruction": "Theano with Keras on Raspberry Pi", + "input": "", + "output": "

It would have been useful if you had provided your Python version. If you are using Python 3.7, try reverting to Python 3.6, because Keras has not yet caught up with it and there are a lot of problems installing TensorFlow with Keras on Python 3.7. I am putting emphasis on the version here because I recently faced the same problem installing with conda, and I realised the issue was the Python version.

\n\n

I also had problems getting TensorFlow to work on the Pi, but a direct installation using pip on Ubuntu (rather than Miniconda) worked for me. The approach the TensorFlow team itself recommends is to build TensorFlow from source by following the instructions at this link:\nhttps://www.tensorflow.org/install/source_rpi

\n\n

So, if you can, downgrade Python to 3.6 or lower and install using pip, or build from source for Python 3.6 or 3.7.

\n", + "system": "" + }, + { + "instruction": "Data Augmentation Image Data Generator Keras Semantic Segmentation", + "input": "", + "output": "

Yes you can. Here's an example from the Keras docs: you zip together two generators seeded with the same seed and pass the result to fit_generator.\nhttps://keras.io/preprocessing/image/

\n\n
# we create two instances with the same arguments \ndata_gen_args = dict(featurewise_center=True,\n                     featurewise_std_normalization=True,\n                     rotation_range=90.,\n                     width_shift_range=0.1,\n                     height_shift_range=0.1,\n                     zoom_range=0.2) \nimage_datagen = ImageDataGenerator(**data_gen_args) \nmask_datagen = ImageDataGenerator(**data_gen_args)\n\n# Provide the same seed and keyword arguments to the fit and flow methods\nseed = 1\nimage_datagen.fit(images, augment=True, seed=seed) \nmask_datagen.fit(masks, augment=True, seed=seed)\n\nimage_generator = image_datagen.flow_from_directory(\n    'data/images',\n    class_mode=None,\n    seed=seed)\n\nmask_generator = mask_datagen.flow_from_directory(\n    'data/masks',\n    class_mode=None,\n    seed=seed)\n\n# combine generators into one which yields image and masks \ntrain_generator = zip(image_generator, mask_generator)\n\nmodel.fit_generator(\n    train_generator,\n    samples_per_epoch=2000,\n    nb_epoch=50)\n</code></pre>
\n", + "system": "" + }, + { + "instruction": "Keras, sparse matrix issue", + "input": "", + "output": "

Here is my solution.

\n\n
def batch_generator(X, y, batch_size):\n    # X can be a sparse matrix, y an array\n    number_of_batches = np.shape(y)[0] // batch_size\n    counter = 0\n    shuffle_index = np.arange(np.shape(y)[0])\n    np.random.shuffle(shuffle_index)\n    while 1:\n        index_batch = shuffle_index[batch_size*counter:batch_size*(counter+1)]\n        X_batch = X[index_batch, :].todense()\n        y_batch = y[index_batch]\n        counter += 1\n        yield (np.array(X_batch), y_batch)\n        if counter >= number_of_batches:\n            # reshuffle at the end of each epoch\n            np.random.shuffle(shuffle_index)\n            counter = 0\n</code></pre>
\n\n

In my case, X is a sparse matrix and y is an array.

\n", + "system": "" + }, + { + "instruction": "module 'keras.engine' has no attribute 'Layer'", + "input": "", + "output": "

For lines where you are using layers like <code>ProposalLayer(KE.Layer)</code>:

\n

Instead of using KE.Layer do

\n
import keras.layers as KL\n
\n

and replace all instances of KE by KL

\n", + "system": "" + }, + { + "instruction": "Should the custom loss function in Keras return a single loss value for the batch or an arrary of losses for every sample in the training batch?", + "input": "", + "output": "

Actually, as far as I know, the shape of the return value of the loss function is not important, i.e. it could be a scalar tensor or a tensor of one or multiple values per sample. The important thing is how it should be reduced to a scalar value so that it can be used in the optimization process or shown to the user. For that, you can check the reduction types in the Reduction documentation.

\n

Further, here is what the compile method documentation says about the loss argument, partially addressing this point:

\n
\n

loss: String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true,y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] ie. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

\n
\n

In addition, it's worth noting that most of the built-in loss functions in TF/Keras are usually reduced over the last dimension (i.e. axis=-1).

\n
\n
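To make the reduction types concrete, here is a small numpy sketch (an illustration of the semantics only, not Keras's actual code) of what the common `Reduction` modes do to a batch of per-sample losses:

```python
import numpy as np

# Hypothetical per-sample losses for a batch of 4 (illustrative values only)
per_sample = np.array([0.2, 0.4, 0.1, 0.3])

# Reduction.NONE keeps the per-sample values
none_reduced = per_sample

# Reduction.SUM adds them up into a scalar
sum_reduced = per_sample.sum()

# Reduction.SUM_OVER_BATCH_SIZE (the usual default) divides the sum by the batch size
mean_reduced = per_sample.sum() / per_sample.shape[0]

print(sum_reduced, mean_reduced)
```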

For those who doubt that a custom loss function which returns a scalar value would work: you can run the following snippet and you will see that the model trains and converges properly.

\n
import tensorflow as tf\nimport numpy as np\n\ndef custom_loss(y_true, y_pred):\n    return tf.reduce_sum(tf.square(y_true - y_pred))\n\ninp = tf.keras.layers.Input(shape=(3,))\nout = tf.keras.layers.Dense(3)(inp)\n\nmodel = tf.keras.Model(inp, out)\nmodel.compile(loss=custom_loss, optimizer=tf.keras.optimizers.Adam(lr=0.1))\n\nx = np.random.rand(1000, 3)\ny = x * 10 + 2.5\nmodel.fit(x, y, epochs=20)\n
\n", + "system": "" + }, + { + "instruction": "Why neural network predicts wrong on its own training data?", + "input": "", + "output": "

The OP postulates an interesting finding. Let me simplify the original question as follows.

\n\n

If the model is trained on a particular time series, why can't the model reconstruct previous time series data, which it was already trained on?

\n\n

Well, the answer is embedded in the training progress itself. Since EarlyStopping is used here to avoid overfitting, the best model is saved at epoch=5, where val_loss=0.0030 as mentioned by the OP. At that point, the training loss is equal to 0.0343, i.e. the training RMSE is 0.185. Since the dataset is scaled using MinMaxScaler, we need to undo the scaling of the RMSE to understand what's going on.

\n\n

The minimum and maximum values of the time sequence are found to be 2290 and 3380. Therefore, having 0.185 as the RMSE of training means that, even for the training set, the predicted values may differ from the ground truth values by approximately 0.185*(3380-2290), that is ~200 units on average.

\n\n
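The un-scaling arithmetic above can be checked directly (all the numbers come from the answer itself):

```python
import numpy as np

train_mse_scaled = 0.0343            # training loss (MSE) at the saved epoch
seq_min, seq_max = 2290, 3380        # min/max of the original time sequence

# MinMaxScaler maps [seq_min, seq_max] onto [0, 1], so errors scale by the range
rmse_scaled = np.sqrt(train_mse_scaled)
rmse_original_units = rmse_scaled * (seq_max - seq_min)
print(round(rmse_original_units))    # roughly 200 units
```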

This explains why there is a big difference when predicting the training data itself at a previous time step.

\n\n

What should I do to perfectly emulate training data?

\n\n

I asked myself this same question. The simple answer is: make the training loss approach 0, that is, overfit the model.

\n\n

After some training, I realized that a model with only 1 LSTM layer that has 32 cells is not complex enough to reconstruct the training data. Therefore, I have added another LSTM layer as follows.

\n\n
model = Sequential()\nmodel.add(LSTM(32, return_sequences=True, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2])))\n# model.add(Dropout(0.2))\n# model.add(BatchNormalization())\nmodel.add(LSTM(units = 64, return_sequences=False,))\nmodel.add(Dense(y_train.shape[1]))\nmodel.compile(optimizer = 'adam', loss = 'mse')\n
\n\n

And the model is trained for 1000 epochs without considering EarlyStopping.

\n\n
model.fit(x_train, y_train, batch_size = 64, epochs = 1000, shuffle = True, validation_data = (x_test, y_test))\n
\n\n

At the end of 1000th epoch we have a training loss of 0.00047 which is much lower than the training loss in your case. So we would expect the model to reconstruct the training data better. Following is the prediction plot for Apr 2-8.

\n\n

\"prediction\"

\n\n

A Final Note:

\n\n

Training on a particular dataset does not necessarily mean that the model should be able to perfectly reconstruct the training data. Especially when methods such as early stopping, regularization and dropout are introduced to avoid overfitting, the model tends to generalize rather than memorize the training data.

\n", + "system": "" + }, + { + "instruction": "confusion matrix error "Classification metrics can't handle a mix of multilabel-indicator and multiclass targets"", + "input": "", + "output": "

The confusion matrix needs both labels and predictions as single integers (class indices), not as one-hot encoded vectors; you have already done this for your predictions using model.predict_classes(), i.e.

\n\n\n\n
rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0)\nrounded_predictions[1]\n# 2\n
\n\n

your test_labels are still one-hot encoded:

\n\n
test_labels[1]\n# array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)\n
\n\n

So, you should convert them too to single-digit ones, as follows:

\n\n
import numpy as np\nrounded_labels=np.argmax(test_labels, axis=1)\nrounded_labels[1]\n# 2\n
\n\n

After which, the confusion matrix should come up OK:

\n\n
from sklearn.metrics import confusion_matrix\ncm = confusion_matrix(rounded_labels, rounded_predictions)\ncm\n# result:\narray([[ 971,    0,    0,    2,    1,    0,    2,    1,    3,    0],\n       [   0, 1121,    2,    1,    0,    1,    3,    0,    7,    0],\n       [   5,    4,  990,    7,    5,    3,    2,    7,    9,    0],\n       [   0,    0,    0,  992,    0,    2,    0,    7,    7,    2],\n       [   2,    0,    2,    0,  956,    0,    3,    3,    2,   14],\n       [   3,    0,    0,   10,    1,  872,    3,    0,    1,    2],\n       [   5,    3,    1,    1,    9,   10,  926,    0,    3,    0],\n       [   0,    7,   10,    1,    0,    2,    0,  997,    1,   10],\n       [   5,    0,    3,    7,    5,    7,    3,    4,  937,    3],\n       [   5,    5,    0,    9,   10,    3,    0,    8,    3,  966]])\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between "predict" and "predict_class" functions in keras?", + "input": "", + "output": "

predict returns the raw scores of the model (e.g. probabilities, or regression outputs), while predict_class returns the predicted class label. Although they seem similar, there are some differences:

\n

Imagine you are trying to predict if the picture is a dog or a cat (you have a classifier):

\n\n

Now, imagine you are trying to predict house prices (you have a regressor):

\n\n

TL;DR: use predict_class for classifiers (the outputs are labels) and use predict for regressions (the outputs are non-discrete)

\n
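In numpy terms (a sketch of the semantics, not Keras's exact implementation), predict_class for a softmax classifier is simply an argmax over the probability vector that predict returns:

```python
import numpy as np

# Hypothetical `predict` output for a batch of 3 images, 2 classes (dog=0, cat=1)
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.4, 0.6]])

# What `predict_class` would report: the index of the most probable class
classes = np.argmax(probs, axis=-1)
print(classes)  # [0 1 1]
```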

Hope it helps!

\n

For your second question, the answer is here

\n", + "system": "" + }, + { + "instruction": "How training and test data is split - Keras on Tensorflow", + "input": "", + "output": "
    \n
1. The Keras documentation says: "The validation data is selected from the last samples in the x and y data provided, before shuffling." This means that the shuffle occurs after the split. There is also a boolean parameter called shuffle, which is set to True by default; if you don't want your data to be shuffled you can set it to False.

    \n
  2. \n
3. Getting good results on your training data but bad (or not so good) results on your evaluation data usually means that your model is overfitting. Overfitting is when your model learns very specific scenarios and can't achieve good results on new data.

    \n
  4. \n
5. Evaluation is testing your model on new data that it has "never seen before". Usually you divide your data into training and test sets, but sometimes you might also want to create a third group of data: if you keep adjusting your model to obtain better and better results on your test data, that is in some way like cheating, because you are effectively telling your model what the evaluation data looks like, and this could cause overfitting.

    \n
  6. \n
\n

Also, if you want to split your data without using keras, I recommend you to use the sklearn train_test_split() function.

\n

it's easy to use and it looks like this:

\n
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)\n
\n", + "system": "" + }, + { + "instruction": "Keras AttributeError: 'list' object has no attribute 'ndim'", + "input": "", + "output": "

model.fit expects x and y to be numpy arrays. It seems you passed a list: Keras tried to get the shape of the input by reading its ndim attribute and failed.

\n\n

You can simply transform it using np.array:

\n\n
import numpy as np\n...\nmodel.fit(np.array(train_X),np.array(train_Y), epochs=20, batch_size=10)\n
\n", + "system": "" + }, + { + "instruction": "How can I get a Keras models' history after loading it from a file in Python?", + "input": "", + "output": "

Unfortunately it seems that Keras hasn't implemented the possibility of loading the history directly from a loaded model. Instead you have to set it up in advance. This is how I solved it using CSVLogger (it's actually very convenient to store the entire training history in a separate file: this way you can always come back later and plot whatever history you want, instead of depending on an in-memory variable that is easily lost):

\n

First we have to set up the logger before initiating the training.

\n
from keras.callbacks import CSVLogger\n\ncsv_logger = CSVLogger('training.log', separator=',', append=False)\nmodel.fit(X_train, Y_train, callbacks=[csv_logger])\n
\n

The entire log history will now be stored in the file 'training.log' (the same information you would get, by in your case, calling H.history). When the training is finished, the next step would simply be to load the data stored in this file. You can do that with pandas read_csv:

\n
import pandas as pd\nlog_data = pd.read_csv('training.log', sep=',', engine='python')\n
\n

From here on you can treat the data stored in log_data just as you would by loading it from K.history.

\n

More information in Keras callbacks docs.

\n", + "system": "" + }, + { + "instruction": "How can I get the number of trainable parameters of a model in Keras?", + "input": "", + "output": "
from keras import backend as K\n\ntrainable_count = int(\n    np.sum([K.count_params(p) for p in set(model.trainable_weights)]))\nnon_trainable_count = int(\n    np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]))\n\nprint('Total params: {:,}'.format(trainable_count + non_trainable_count))\nprint('Trainable params: {:,}'.format(trainable_count))\nprint('Non-trainable params: {:,}'.format(non_trainable_count))\n
\n\n

The above snippet can be found at the end of the layer_utils.print_summary() definition, which summary() calls.

\n\n
\n\n

Edit: more recent versions of Keras have a helper function count_params() for this purpose:

\n\n
from keras.utils.layer_utils import count_params\n\ntrainable_count = count_params(model.trainable_weights)\nnon_trainable_count = count_params(model.non_trainable_weights)\n
\n", + "system": "" + }, + { + "instruction": "Keras Masking for RNN with Varying Time Steps", + "input": "", + "output": "

The way you implemented masking should be correct. If you have data with the shape (samples, timesteps, features), and you want to mask timesteps lacking data with a zero mask of the same size as the features argument, then you add Masking(mask_value=0., input_shape=(timesteps, features)). See here: keras.io/layers/core/#masking

\n\n
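As a numpy sketch of what the mask computes (an illustration consistent with the documented behavior, not Keras's actual code): a timestep is masked when every one of its features equals mask_value:

```python
import numpy as np

mask_value = 0.0
# One sample, 4 timesteps, 3 features; the last two timesteps are zero padding
x = np.array([[[1.0, 2.0, 0.5],
               [0.3, 0.0, 1.1],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]]])

# A timestep is kept if ANY of its features differs from mask_value
mask = np.any(x != mask_value, axis=-1)
print(mask)  # [[ True  True False False]]
```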

Your model could potentially be too simple, and/or your number of epochs could be insufficient for the model to differentiate between all of your classes. Try this model:

\n\n
model = Sequential()\nmodel.add(Masking(mask_value=0., input_shape=(max_time, 24)))\nmodel.add(LSTM(256, input_dim=24))\nmodel.add(Dense(1024))\nmodel.add(Dense(2))\nmodel.add(Activation(activate))\nmodel.compile(loss=weibull_loglik_discrete, optimizer=RMSprop(lr=.01))\nmodel.fit(train_x, train_y, nb_epoch=100, batch_size=1000, verbose=2, validation_data=(test_x, test_y)) \n
\n\n

If that does not work, try doubling the epochs a few times (e.g. 200, 400) and see if that improves the results.

\n", + "system": "" + }, + { + "instruction": "Using pre-trained word2vec with LSTM for word generation", + "input": "", + "output": "\n

I've created a gist with a simple generator that builds on top of your initial idea: it's an LSTM network wired to the pre-trained word2vec embeddings, trained to predict the next word in a sentence. The data is the list of abstracts from arXiv website.

\n

I'll highlight the most important parts here.

\n

Gensim Word2Vec

\n

Your code is fine, except for the number of iterations to train it. The default iter=5 seems rather low. Besides, it's definitely not the bottleneck -- LSTM training takes much longer. iter=100 looks better.

\n
# gensim < 4.0 API: 'size' and 'iter' (renamed to 'vector_size' and 'epochs' in gensim >= 4.0)\nword_model = gensim.models.Word2Vec(sentences, size=100, min_count=1, \n                                    window=5, iter=100)\npretrained_weights = word_model.wv.syn0\nvocab_size, emdedding_size = pretrained_weights.shape\nprint('Result embedding shape:', pretrained_weights.shape)\nprint('Checking similar words:')\nfor word in ['model', 'network', 'train', 'learn']:\n  most_similar = ', '.join('%s (%.2f)' % (similar, dist) \n                           for similar, dist in word_model.most_similar(word)[:8])\n  print('  %s -> %s' % (word, most_similar))\n\ndef word2idx(word):\n  return word_model.wv.vocab[word].index\ndef idx2word(idx):\n  return word_model.wv.index2word[idx]\n
\n

The result embedding matrix is saved into pretrained_weights array which has a shape (vocab_size, emdedding_size).

\n

Keras model

\n

Your code is almost correct, except for the loss function. Since the model predicts the next word, it's a classification task, hence the loss should be categorical_crossentropy or sparse_categorical_crossentropy. I've chosen the latter for efficiency reasons: this way it avoids one-hot encoding, which is pretty expensive for a big vocabulary.

\n
model = Sequential()\nmodel.add(Embedding(input_dim=vocab_size, output_dim=emdedding_size, \n                    weights=[pretrained_weights]))\nmodel.add(LSTM(units=emdedding_size))\nmodel.add(Dense(units=vocab_size))\nmodel.add(Activation('softmax'))\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\n
\n

Note passing the pre-trained weights to weights.

\n

Data preparation

\n

In order to work with sparse_categorical_crossentropy loss, both sentences and labels must be word indices. Short sentences must be padded with zeros to the common length.

\n
train_x = np.zeros([len(sentences), max_sentence_len], dtype=np.int32)\ntrain_y = np.zeros([len(sentences)], dtype=np.int32)\nfor i, sentence in enumerate(sentences):\n  for t, word in enumerate(sentence[:-1]):\n    train_x[i, t] = word2idx(word)\n  train_y[i] = word2idx(sentence[-1])\n
\n

Sample generation

\n

This is pretty straight-forward: the model outputs the vector of probabilities, of which the next word is sampled and appended to the input. Note that the generated text would be better and more diverse if the next word is sampled, rather than picked as argmax. The temperature based random sampling I've used is described here.

\n
def sample(preds, temperature=1.0):\n  if temperature <= 0:\n    return np.argmax(preds)\n  preds = np.asarray(preds).astype('float64')\n  preds = np.log(preds) / temperature\n  exp_preds = np.exp(preds)\n  preds = exp_preds / np.sum(exp_preds)\n  probas = np.random.multinomial(1, preds, 1)\n  return np.argmax(probas)\n\ndef generate_next(text, num_generated=10):\n  word_idxs = [word2idx(word) for word in text.lower().split()]\n  for i in range(num_generated):\n    prediction = model.predict(x=np.array(word_idxs))\n    idx = sample(prediction[-1], temperature=0.7)\n    word_idxs.append(idx)\n  return ' '.join(idx2word(idx) for idx in word_idxs)\n
\n

Examples of generated text

\n
deep convolutional... -> deep convolutional arithmetic initialization step unbiased effectiveness\nsimple and effective... -> simple and effective family of variables preventing compute automatically\na nonconvex... -> a nonconvex technique compared layer converges so independent onehidden markov\na... -> a function parameterization necessary both both intuitions with technique valpola utilizes\n
\n

Doesn't make too much sense, but is able to produce sentences that look at least grammatically sound (sometimes).

\n

The link to the complete runnable script.

\n", + "system": "" + }, + { + "instruction": "How to convert predicted sequence back to text in keras?", + "input": "", + "output": "

You can directly use the inverse function, tokenizer.sequences_to_texts.

\n
    text = tokenizer.sequences_to_texts(<list_of_integer_equivalent_encodings>)\n
\n

I have tested the above and it works as expected.

\n
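If your model outputs one-hot (or softmax) vectors, collapse them to integer encodings first with an argmax over the last axis; a small numpy sketch (the tokenizer itself is not needed for this step):

```python
import numpy as np

# Hypothetical predictions: 1 sequence, 4 timesteps, vocabulary of size 5
one_hot_preds = np.eye(5)[[2, 4, 1, 3]][np.newaxis, ...]

# Collapse each timestep's vector to its integer index
sequences = np.argmax(one_hot_preds, axis=-1).tolist()
print(sequences)  # [[2, 4, 1, 3]]
# text = tokenizer.sequences_to_texts(sequences)  # now safe to call
```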

PS.: Take extra care to make the argument be the list of the integer encodings and not the One Hot ones.

\n", + "system": "" + }, + { + "instruction": "Resizing images in Keras ImageDataGenerator flow methods", + "input": "", + "output": "

flow_from_directory(directory) generates augmented images from a directory containing an arbitrary collection of images, so the target_size parameter is needed to bring all images to the same shape.

\n\n

flow(X, y), on the other hand, augments images that are already stored in X, which is just a numpy array and can easily be preprocessed/resized before being passed to flow, so there is no need for a target_size parameter. As for resizing, I prefer scipy.misc.imresize over PIL.Image.resize or cv2.resize, as it can operate directly on numpy image data (note that scipy.misc.imresize has been removed in recent SciPy versions).

\n\n
import numpy as np\nimport scipy.misc\n\nnew_shape = (28, 28, 3)\nX_train_new = np.empty(shape=(X_train.shape[0],) + new_shape)\nfor idx in range(X_train.shape[0]):  # `range`, not the Python 2 `xrange`\n    X_train_new[idx] = scipy.misc.imresize(X_train[idx], new_shape)\n
\n", + "system": "" + }, + { + "instruction": "HOW TO FIX IT? AttributeError: module 'keras.preprocessing.image' has no attribute 'load_img'", + "input": "", + "output": "

Replace:

\n
from keras.preprocessing import image\n
\n

for:

\n
import keras.utils as image\n
\n", + "system": "" + }, + { + "instruction": "Understanding `width_shift_range` and `height_shift_range` arguments in Keras's ImageDataGenerator class", + "input": "", + "output": "

These two arguments are used by the ImageDataGenerator class to preprocess images before they are fed into the network. If you want to make your model more robust, a small amount of data may not be enough; that is where data augmentation comes in handy, generating randomly transformed variants of your data.

\n

width_shift_range: it shifts the image to the left or right (a horizontal shift). If the value is a float <= 1, it is taken as a fraction of the total width. Suppose the image width is 100px: with width_shift_range = 1.0 the range is -100% to +100%, i.e. -100px to +100px, and the image is shifted randomly within this range. A randomly selected positive value shifts the image to the right, and a negative value shifts it to the left. We can also express this in pixels: setting width_shift_range = 100 has the same effect. In short, an integer value >= 1 is interpreted as a pixel range, while a float value <= 1 is interpreted as a fraction of the total width. The images below are for width_shift_range = 1.0.

\n

\"For

\n

height_shift_range: it works the same as width_shift_range but shifts vertically (up or down). The images below are for height_shift_range=0.2, fill_mode="constant".

\n

\"enter

\n

fill_mode: it sets the rule for filling in the pixels newly exposed by the shift.

\n
## fill_mode: One of {"constant", "nearest", "reflect" or "wrap"}. \n## Points outside the boundaries of the input are filled according to the given mode:\n## "constant": kkkkkkkk|abcd|kkkkkkkk (cval=k)\n## "nearest":  aaaaaaaa|abcd|dddddddd\n## "reflect":  abcddcba|abcd|dcbaabcd\n## "wrap":  abcdabcd|abcd|abcdabcd\n
\n
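A minimal numpy sketch (not ImageDataGenerator's actual implementation) of what a horizontal shift with "constant" fill does to a small grayscale image:

```python
import numpy as np

def shift_right(img, pixels, cval=0):
    """Shift a (H, W) image `pixels` columns to the right, filling with cval."""
    out = np.full_like(img, cval)
    if pixels < img.shape[1]:
        out[:, pixels:] = img[:, :img.shape[1] - pixels]
    return out

img = np.arange(12).reshape(3, 4)
shifted = shift_right(img, 2)
print(shifted)  # the two leftmost columns are filled with cval ("constant" mode)
```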

For more you can check this blog

\n", + "system": "" + }, + { + "instruction": "Keras - Validation Loss and Accuracy stuck at 0", + "input": "", + "output": "\n

Here is a demonstration:

\n
model.fit(X_train, y_train, validation_data=[X_train.to_numpy(), y_train.to_numpy()], \nepochs=10, batch_size=64)\n\nEpoch 1/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.7898 - accuracy: 0.6087 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 2/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6710 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 3/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6748 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 4/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6716 - accuracy: 0.6370 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 5/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6085 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 6/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6744 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 7/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6102 - accuracy: 0.6522 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 8/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.7032 - accuracy: 0.6109 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 9/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6283 - accuracy: 0.6717 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\nEpoch 10/10\n8/8 [==============================] - 0s 5ms/step - loss: 0.6120 - accuracy: 0.6652 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00\n
\n

So, there is definitely some issue with the TensorFlow implementation of fit.

\n

I dug up the source, and it seems the part responsible for validation_data:

\n
...\n...\n        # Run validation.\n        if validation_data and self._should_eval(epoch, validation_freq):\n          val_x, val_y, val_sample_weight = (\n              data_adapter.unpack_x_y_sample_weight(validation_data))\n          val_logs = self.evaluate(\n              x=val_x,\n              y=val_y,\n              sample_weight=val_sample_weight,\n              batch_size=validation_batch_size or batch_size,\n              steps=validation_steps,\n              callbacks=callbacks,\n              max_queue_size=max_queue_size,\n              workers=workers,\n              use_multiprocessing=use_multiprocessing,\n              return_dict=True)\n          val_logs = {'val_' + name: val for name, val in val_logs.items()}\n          epoch_logs.update(val_logs)\n
\n

internally calls model.evaluate. As we have already established that evaluate works fine, I realized the only culprit could be unpack_x_y_sample_weight.

\n

So, I looked into the implementation:

\n
def unpack_x_y_sample_weight(data):\n  """Unpacks user-provided data tuple."""\n  if not isinstance(data, tuple):\n    return (data, None, None)\n  elif len(data) == 1:\n    return (data[0], None, None)\n  elif len(data) == 2:\n    return (data[0], data[1], None)\n  elif len(data) == 3:\n    return (data[0], data[1], data[2])\n\n  raise ValueError("Data not understood.")\n\n
\n

It's crazy, but if you just pass a tuple instead of a list, everything works fine, due to the check inside unpack_x_y_sample_weight. (With a list, your labels go missing after this step and the data somehow gets fixed up inside evaluate, so you're validating with no reasonable labels; this seems like a bug, but the documentation clearly states to pass a tuple.)

\n
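You can verify the tuple-vs-list behavior by exercising that helper directly (reproduced here standalone from the snippet above):

```python
def unpack_x_y_sample_weight(data):
    """Same logic as the TF helper quoted above."""
    if not isinstance(data, tuple):
        return (data, None, None)
    elif len(data) == 1:
        return (data[0], None, None)
    elif len(data) == 2:
        return (data[0], data[1], None)
    elif len(data) == 3:
        return (data[0], data[1], data[2])
    raise ValueError("Data not understood.")

x_val, y_val = "features", "labels"

# A tuple is unpacked into (x, y, sample_weight) as intended
print(unpack_x_y_sample_weight((x_val, y_val)))

# A list falls through the isinstance check: the labels end up inside x
print(unpack_x_y_sample_weight([x_val, y_val]))
```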

The following code gives correct validation accuracy and loss:

\n
model.fit(X_train, y_train, validation_data=(X_train.to_numpy(), y_train.to_numpy()), \nepochs=10, batch_size=64)\n\nEpoch 1/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5832 - accuracy: 0.6696 - val_loss: 0.6892 - val_accuracy: 0.6674\nEpoch 2/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6385 - accuracy: 0.6804 - val_loss: 0.8984 - val_accuracy: 0.5565\nEpoch 3/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6822 - accuracy: 0.6391 - val_loss: 0.6556 - val_accuracy: 0.6739\nEpoch 4/10\n8/8 [==============================] - 0s 6ms/step - loss: 0.6276 - accuracy: 0.6609 - val_loss: 1.0691 - val_accuracy: 0.5630\nEpoch 5/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.7048 - accuracy: 0.6239 - val_loss: 0.6474 - val_accuracy: 0.6326\nEpoch 6/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.6545 - accuracy: 0.6500 - val_loss: 0.6659 - val_accuracy: 0.6043\nEpoch 7/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5796 - accuracy: 0.6913 - val_loss: 0.6891 - val_accuracy: 0.6435\nEpoch 8/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5915 - accuracy: 0.6891 - val_loss: 0.5307 - val_accuracy: 0.7152\nEpoch 9/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.5571 - accuracy: 0.7000 - val_loss: 0.5465 - val_accuracy: 0.6957\nEpoch 10/10\n8/8 [==============================] - 0s 7ms/step - loss: 0.7133 - accuracy: 0.6283 - val_loss: 0.7046 - val_accuracy: 0.6413\n
\n

So, as this seems to be a bug, I have just opened a relevant issue at Tensorflow Github repo:

\n

https://github.com/tensorflow/tensorflow/issues/39370

\n", + "system": "" + }, + { + "instruction": "Applying callbacks in a custom training loop in Tensorflow 2.0", + "input": "", + "output": "

I've had this problem myself: (1) I want to use a custom training loop; (2) I don't want to lose the bells and whistles Keras gives me in terms of callbacks; (3) I don't want to re-implement them all myself. Tensorflow has a design philosophy of allowing a developer to gradually opt-in to its more low-level APIs. As @HyeonPhilYoun notes in his comment below, the official documentation for tf.keras.callbacks.Callback gives an example of what we're looking for.

\n

The following has worked for me, but can be improved by reverse engineering tf.keras.Model.

\n

The trick is to use tf.keras.callbacks.CallbackList and then manually trigger its lifecycle events from within your custom training loop. This example uses tqdm to give attractive progress bars, but CallbackList has a progress_bar initialization argument that can let you use the defaults. training_model is a typical instance of tf.keras.Model.

\n
from tqdm.notebook import tqdm, trange\n\n# Populate with typical keras callbacks\n_callbacks = []\n\ncallbacks = tf.keras.callbacks.CallbackList(\n    _callbacks, add_history=True, model=training_model)\n\nlogs = {}\ncallbacks.on_train_begin(logs=logs)\n\n# Presentation\nepochs = trange(\n    max_epochs,\n    desc="Epoch",\n    unit="Epoch",\n    postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}")\nepochs.set_postfix(loss=0, accuracy=0)\n\n# Get a stable test set so epoch results are comparable\ntest_batches = batches(test_x, test_Y)\n\nfor epoch in epochs:\n    callbacks.on_epoch_begin(epoch, logs=logs)\n\n    # I like to formulate new batches each epoch\n    # if there are data augmentation methods in play\n    training_batches = batches(x, Y)\n\n    # Presentation\n    enumerated_batches = tqdm(\n        enumerate(training_batches),\n        desc="Batch",\n        unit="batch",\n        postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}",\n        position=1,\n        leave=False)\n\n    for (batch, (x, y)) in enumerated_batches:\n        training_model.reset_states()\n        \n        callbacks.on_batch_begin(batch, logs=logs)\n        callbacks.on_train_batch_begin(batch, logs=logs)\n        \n        logs = training_model.train_on_batch(x=x, y=Y, return_dict=True)\n\n        callbacks.on_train_batch_end(batch, logs=logs)\n        callbacks.on_batch_end(batch, logs=logs)\n\n        # Presentation\n        enumerated_batches.set_postfix(\n            loss=float(logs["loss"]),\n            accuracy=float(logs["accuracy"]))\n\n    for (batch, (x, y)) in enumerate(test_batches):\n        training_model.reset_states()\n\n        callbacks.on_batch_begin(batch, logs=logs)\n        callbacks.on_test_batch_begin(batch, logs=logs)\n\n        logs = training_model.test_on_batch(x=x, y=Y, return_dict=True)\n\n        callbacks.on_test_batch_end(batch, logs=logs)\n        callbacks.on_batch_end(batch, logs=logs)\n\n    # Presentation\n    epochs.set_postfix(\n    
    loss=float(logs["loss"]),\n        accuracy=float(logs["accuracy"]))\n\n    callbacks.on_epoch_end(epoch, logs=logs)\n\n    # NOTE: This is a decent place to check on your early stopping\n    # callback.\n    # Example: use training_model.stop_training to check for early stopping\n\n\ncallbacks.on_train_end(logs=logs)\n\n# Fetch the history object we normally get from keras.fit\nhistory_object = None\nfor cb in callbacks:\n    if isinstance(cb, tf.keras.callbacks.History):\n        history_object = cb\nassert history_object is not None\n
\n", + "system": "" + }, + { + "instruction": "What is meant by sequential model in Keras", + "input": "", + "output": "

There are two ways to build Keras models: sequential and functional.

\n\n

The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.

\n\n

Alternatively, the functional API allows you to create models that have a lot more flexibility, as you can easily define models where layers connect to more than just the previous and next layers. In fact, you can connect layers to (literally) any other layer. As a result, creating complex networks such as siamese networks and residual networks becomes possible.

\n\n

for more details visit : https://machinelearningmastery.com/keras-functional-api-deep-learning/

\n", + "system": "" + }, + { + "instruction": "Keras layer with int inputs cannot be built", + "input": "", + "output": "

The exception is thrown when building a model with model.build.

\n

The model.build function builds the model based on a given input shape.

\n

The error is raised because, when we try to build a model, Keras first calls the model with an x argument whose type depends on the input shape, in the following code:

\n
if (isinstance(input_shape, list) and\n    all(d is None or isinstance(d, int) for d in input_shape)):\n  input_shape = tuple(input_shape)\nif isinstance(input_shape, list):\n  x = [base_layer_utils.generate_placeholders_from_shape(shape)\n        for shape in input_shape]\nelif isinstance(input_shape, dict):\n  x = {\n      k: base_layer_utils.generate_placeholders_from_shape(shape)\n      for k, shape in input_shape.items()\n  }\nelse:\n  x = base_layer_utils.generate_placeholders_from_shape(input_shape)\n
\n

x is a TensorFlow placeholder here. So, when trying to call the model with x as an input, it raises a TypeError, the except block kicks in, and you get the error.

\n

I assume your input shape is 16x16. Instead of using self.build([(16, 16)]), call the model on a real tensor:

\n
inputs = tf.keras.Input(shape=(16,))\nself.call(inputs)\n\n
\n", + "system": "" + }, + { + "instruction": "Save and load model optimizer state", + "input": "", + "output": "

You can extract the important lines from the load_model and save_model functions.

\n\n

For saving optimizer states, in save_model:

\n\n\n\n
# Save optimizer weights.\nsymbolic_weights = getattr(model.optimizer, 'weights')\nif symbolic_weights:\n    optimizer_weights_group = f.create_group('optimizer_weights')\n    weight_values = K.batch_get_value(symbolic_weights)\n
\n\n

For loading optimizer states, in load_model:

\n\n
# Set optimizer weights.\nif 'optimizer_weights' in f:\n    # Build train function (to get weight updates).\n    if isinstance(model, Sequential):\n        model.model._make_train_function()\n    else:\n        model._make_train_function()\n\n    # ...\n\n    try:\n        model.optimizer.set_weights(optimizer_weight_values)\n
\n\n

Combining the lines above, here's an example:

\n\n
    \n
  1. First fit the model for 5 epochs.
  2. \n
\n\n\n\n
X, y = np.random.rand(100, 50), np.random.randint(2, size=100)\nx = Input((50,))\nout = Dense(1, activation='sigmoid')(x)\nmodel = Model(x, out)\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\nmodel.fit(X, y, epochs=5)\n\nEpoch 1/5\n100/100 [==============================] - 0s 4ms/step - loss: 0.7716\nEpoch 2/5\n100/100 [==============================] - 0s 64us/step - loss: 0.7678\nEpoch 3/5\n100/100 [==============================] - 0s 82us/step - loss: 0.7665\nEpoch 4/5\n100/100 [==============================] - 0s 56us/step - loss: 0.7647\nEpoch 5/5\n100/100 [==============================] - 0s 76us/step - loss: 0.7638\n
\n\n
    \n
  2. Now save the weights and optimizer states.
\n\n\n\n
model.save_weights('weights.h5')\nsymbolic_weights = getattr(model.optimizer, 'weights')\nweight_values = K.batch_get_value(symbolic_weights)\nwith open('optimizer.pkl', 'wb') as f:\n    pickle.dump(weight_values, f)\n
\n\n
    \n
  3. Rebuild the model in another python session, and load weights.
\n\n\n\n
x = Input((50,))\nout = Dense(1, activation='sigmoid')(x)\nmodel = Model(x, out)\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\n\nmodel.load_weights('weights.h5')\nmodel._make_train_function()\nwith open('optimizer.pkl', 'rb') as f:\n    weight_values = pickle.load(f)\nmodel.optimizer.set_weights(weight_values)\n
\n\n
    \n
  4. Continue model training.
\n\n\n\n
model.fit(X, y, epochs=5)\n\nEpoch 1/5\n100/100 [==============================] - 0s 674us/step - loss: 0.7629\nEpoch 2/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7617\nEpoch 3/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7611\nEpoch 4/5\n100/100 [==============================] - 0s 55us/step - loss: 0.7601\nEpoch 5/5\n100/100 [==============================] - 0s 49us/step - loss: 0.7594\n
\n", + "system": "" + }, + { + "instruction": "Keras: find out the number of layers", + "input": "", + "output": "

model.layers will give you the list of all layers. The number is consequently len(model.layers)

\n", + "system": "" + }, + { + "instruction": "Using multiple validation sets with keras", + "input": "", + "output": "

I ended up writing my own Callback based on the History callback to solve the problem. I'm not sure if this is the best approach but the following Callback records losses and metrics for the training and validation set like the History callback as well as losses and metrics for additional validation sets passed to the constructor.

\n
class AdditionalValidationSets(Callback):\n    def __init__(self, validation_sets, verbose=0, batch_size=None):\n        """\n        :param validation_sets:\n        a list of 3-tuples (validation_data, validation_targets, validation_set_name)\n        or 4-tuples (validation_data, validation_targets, sample_weights, validation_set_name)\n        :param verbose:\n        verbosity mode, 1 or 0\n        :param batch_size:\n        batch size to be used when evaluating on the additional datasets\n        """\n        super(AdditionalValidationSets, self).__init__()\n        self.validation_sets = validation_sets\n        for validation_set in self.validation_sets:\n            if len(validation_set) not in [3, 4]:\n                raise ValueError()\n        self.epoch = []\n        self.history = {}\n        self.verbose = verbose\n        self.batch_size = batch_size\n\n    def on_train_begin(self, logs=None):\n        self.epoch = []\n        self.history = {}\n\n    def on_epoch_end(self, epoch, logs=None):\n        logs = logs or {}\n        self.epoch.append(epoch)\n\n        # record the same values as History() as well\n        for k, v in logs.items():\n            self.history.setdefault(k, []).append(v)\n\n        # evaluate on the additional validation sets\n        for validation_set in self.validation_sets:\n            if len(validation_set) == 3:\n                validation_data, validation_targets, validation_set_name = validation_set\n                sample_weights = None\n            elif len(validation_set) == 4:\n                validation_data, validation_targets, sample_weights, validation_set_name = validation_set\n            else:\n                raise ValueError()\n\n            results = self.model.evaluate(x=validation_data,\n                                          y=validation_targets,\n                                          verbose=self.verbose,\n                                          sample_weight=sample_weights,\n            
                              batch_size=self.batch_size)\n\n            for metric, result in zip(self.model.metrics_names,results):\n                valuename = validation_set_name + '_' + metric\n                self.history.setdefault(valuename, []).append(result)\n
\n

which I am then using like this:

\n
history = AdditionalValidationSets([(validation_data2, validation_targets2, 'val2')])\nmodel.fit(train_data, train_targets,\n          epochs=epochs,\n          batch_size=batch_size,\n          validation_data=(validation_data1, validation_targets1),\n          callbacks=[history],\n          shuffle=True)\n
\n", + "system": "" + }, + { + "instruction": "Limit number of cores used in Keras", + "input": "", + "output": "

As @Yu-Yang suggested, I used these lines before each fit:

\n
from keras import backend as K\nK.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=32,\n                                                   inter_op_parallelism_threads=32)))\n
\n

Check the CPU usage (htop) :\n\"htop

\n", + "system": "" + }, + { + "instruction": "How to interpret Keras model.fit output?", + "input": "", + "output": "

ETA = Estimated Time of Arrival.

\n\n

80 is the size of your training set, 32/80 and 64/80 mean that your batch size is 32 and currently the first batch (or the second batch respectively) is being processed.

\n\n
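Those progress figures can be reproduced with a couple of lines of plain Python (80 samples and batch size 32, as above):

```python
import math

train_size, batch_size = 80, 32
n_batches = math.ceil(train_size / batch_size)  # 3 weight updates per epoch
progress = [f'{min((i + 1) * batch_size, train_size)}/{train_size}'
            for i in range(n_batches)]
print(n_batches, progress)  # -> 3 ['32/80', '64/80', '80/80']
```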

loss and acc refer to the current loss and accuracy of the training set.\nAt the end of each epoch your trained NN is evaluated against your validation set. This is what val_loss and val_acc refer to.

\n\n

The history object returned by model.fit() is a simple class with some fields, e.g. a reference to the model, a params dict and, most importantly, a history dict. It stores the values of loss and acc (or any other used metric) at the end of each epoch. For 2 epochs it will look like this:

\n\n
{\n    'val_loss': [16.11809539794922, 14.12947562917035],\n    'val_acc': [0.0, 0.0],\n    'loss': [14.890108108520508, 12.088571548461914],\n    'acc': [0.0, 0.25]\n}\n
\n\n

This comes in very handy if you want to visualize your training progress.

\n\n
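For example, a plain-Python sketch that picks the best epoch from the sample history dict above:

```python
# Sample history dict, as stored by model.fit() (values copied from above)
history = {
    'val_loss': [16.11809539794922, 14.12947562917035],
    'val_acc': [0.0, 0.0],
    'loss': [14.890108108520508, 12.088571548461914],
    'acc': [0.0, 0.25],
}

# Epoch (0-indexed) with the lowest validation loss
best_epoch = min(range(len(history['val_loss'])), key=lambda i: history['val_loss'][i])
print(best_epoch)  # -> 1
```

From here you could, for instance, hand the loss and val_loss lists to a plotting library to draw the training curves.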

Note: if your validation loss/accuracy starts increasing while your training loss/accuracy is still decreasing, this is an indicator of overfitting.

\n\n

Note 2: at the very end you should test your NN against some test set that is different from your training set and validation set and thus has never been touched during the training process.

\n", + "system": "" + }, + { + "instruction": "Where can I find the API documentation of the class Input?", + "input": "", + "output": "

That documentation is really hard to go through when you're not used to Keras.

\n

But there are two approaches for building keras models:

\n\n

The Input layer is not used with the Sequential model, only with Model.

\n

Probably, there is no clear documentation because the Input layer does absolutely nothing except defining the shape of the input data to your model. (In fact it creates a "tensor" that you can use as input to other layers).

\n

Imagine you are creating a model taking batches with MNIST data, which has 28x28 pixel images. Your input shape is then (28,28) (see *).

\n

When creating your model, you use Input just to define that:

\n
#inp will be a tensor with shape (?, 28, 28)\ninp = Input((28,28))\n
\n

The following layers will then use this input:

\n
x = SomeKerasLayer(blablabla)(inp)     \nx = SomeOtherLayer(blablabla)(x)    \noutput = TheLastLayer(balblabla)(x)\n
\n

And when you create the model, you define the path that the data will follow, which in this case is from the input to the output:

\n
model = Model(inp,output)\n
\n
\n

With the Model api, it's also possible to create ramifications, multiple inputs and multiple outputs, branches, etc.

\n

In case of having multiple inputs, you'd create several Input layers.

\n

See here for more advanced examples with actual layers: https://keras.io/getting-started/functional-api-guide/

\n
\n

* - This is not a rule. Depending on how you format your input data, this shape can change. There are models that prefer not to care about the 2D information and use a flattened image of shape (784,). Models that will use convolutional layers will often shape the input data as (28,28,1), an image with one channel. (Usually, images have 3 channels, RGB).

\n
\n

Arguments to the Input

\n

The code for the Input method is defined here (December, 22 - 2017)

\n

Possible arguments:

\n\n", + "system": "" + }, + { + "instruction": "Test score vs test accuracy when evaluating model using Keras", + "input": "", + "output": "

For reference, the two relevant parts of the code:

\n
model.compile(loss='binary_crossentropy',\n                  optimizer='adam',\n                  metrics=['accuracy'])\n\nscore, acc = model.evaluate(x_test, y_test,\n                                batch_size=batch_size)\nprint('Test score:', score)\nprint('Test accuracy:', acc)\n
\n

Score is the evaluation of the loss function for a given input.

\n

Training a network is finding parameters that minimize a loss function (or cost function).

\n

The cost function here is the binary_crossentropy.

\n

For a target T and a network output O, the binary crossentropy can be defined as: 1

\n
f(T,O) = -(T * log(O) + (1-T) * log(1-O))\n
\n
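As a sanity check, that formula is easy to evaluate with NumPy (a minimal sketch; the clipping epsilon is my own addition to avoid log(0)):

```python
import numpy as np

def binary_crossentropy(T, O, eps=1e-7):
    # f(T,O) = -(T * log(O) + (1-T) * log(1-O)), averaged over the batch
    O = np.clip(O, eps, 1 - eps)  # guard against log(0)
    return float(np.mean(-(T * np.log(O) + (1 - T) * np.log(1 - O))))

# one confident correct prediction and one confident correct rejection:
print(binary_crossentropy(np.array([1.0, 0.0]), np.array([0.9, 0.1])))  # ~0.1054
```

Averaging over a batch like this is what reproduces the single score that evaluate reports (see footnote 2 below).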

So the score you see is the evaluation of that.

\n

If you feed it a batch of inputs it will most likely return the mean loss. 2

\n

So yeah, if your model has lower loss (at test time), it should often have lower prediction error.

\n

1 See similar formula at BCELoss on PyTorch (Binary Cross Entropy = BCE)

\n

2 Note how the PyTorch BCE is reduced to a scalar using the default value of reduction ='mean' (i.e. average) by default

\n", + "system": "" + }, + { + "instruction": "facenet triplet loss with keras", + "input": "", + "output": "

What could have happened, other than the learning rate was simply too high, was that an unstable triplet selection strategy had been used, effectively. If, for example, you only use 'hard triplets' (triplets where the a-n distance is smaller than the a-p distance), your network weights might collapse all embeddings to a single point (making the loss always equal to margin (your _alpha), because all embedding distances are zero).

\n\n

This can be fixed by using other kinds of triplets as well (like 'semi-hard triplets' where a-p is smaller than a-n, but the distance between a-p and a-n is still smaller than margin). So maybe if you always checked for this... It is explained in more detail in this blog post: https://omoindrot.github.io/triplet-loss
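For illustration, here is a minimal NumPy sketch of the triplet loss and of the semi-hard condition described above (the margin of 0.2 and the helper names are mine, not from FaceNet):

```python
import numpy as np

def triplet_loss(a, p, n, margin=0.2):
    d_ap = np.sum((a - p) ** 2)  # squared anchor-positive distance
    d_an = np.sum((a - n) ** 2)  # squared anchor-negative distance
    return max(d_ap - d_an + margin, 0.0)

def is_semi_hard(a, p, n, margin=0.2):
    d_ap = np.sum((a - p) ** 2)
    d_an = np.sum((a - n) ** 2)
    # positive closer than negative, but negative still inside the margin
    return d_ap < d_an < d_ap + margin

# Collapsed embeddings: all distances are zero, so the loss sticks at the margin
z = np.zeros(128)
print(triplet_loss(z, z, z))  # -> 0.2, i.e. exactly the margin
```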

\n", + "system": "" + }, + { + "instruction": "How to calculate the number of parameters of an LSTM network?", + "input": "", + "output": "

No - the number of parameters of a LSTM layer in Keras equals to:

\n\n
params = 4 * ((size_of_input + 1) * size_of_output + size_of_output^2)\n
\n\n

The additional 1 comes from the bias terms. So n is the size of the input (increased by the bias term) and m is the size of the output of the LSTM layer.

\n\n

So finally :

\n\n
4 * (4097 * 256 + 256^2) = 4457472\n
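That arithmetic can be double-checked by putting the formula into a small function (4096 inputs plus 1 for the bias gives the 4097, and 256 is the output size):

```python
def lstm_param_count(size_of_input, size_of_output):
    # 4 gates, each with input weights (+ bias) and recurrent weights
    return 4 * ((size_of_input + 1) * size_of_output + size_of_output ** 2)

print(lstm_param_count(4096, 256))  # -> 4457472
```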
\n", + "system": "" + }, + { + "instruction": "Can't import plot_model from keras.utils?", + "input": "", + "output": "

Try to import in the below format

\n
from keras.utils.vis_utils import plot_model\n
\n

I had the same problem this week; with this import it works.

\n", + "system": "" + }, + { + "instruction": "'Dense' object has no attribute 'op'", + "input": "", + "output": "

You are missing (x) after your output layer. Try

\n
output = Dense(10 , activation = 'softmax')(x)\n
\n", + "system": "" + }, + { + "instruction": "Proper way to feed time-series data to stateful LSTM?", + "input": "", + "output": "

The answer is: it depends on the problem at hand. For your case of one-step prediction - yes, you can, but you don't have to. But whether you do or not will significantly impact learning.

\n\n
\n\n

Batch vs. sample mechanism (\"see AI\" = see \"additional info\" section)

\n\n

All models treat samples as independent examples; a batch of 32 samples is like feeding 1 sample at a time, 32 times (with differences - see AI). From model's perspective, data is split into the batch dimension, batch_shape[0], and the features dimensions, batch_shape[1:] - the two \"don't talk.\" The only relation between the two is via the gradient (see AI).

\n\n
\n\n

Overlap vs no-overlap batch

\n\n

Perhaps the best approach to understand it is information-based. I'll begin with timeseries binary classification, then tie it to prediction: suppose you have 10-minute EEG recordings, 240000 timesteps each. Task: seizure or non-seizure?

\n\n\n\n

Take 10 samples, shape (240000, 1). How to feed?

\n\n
    \n
  1. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[54000:108000] ...
  2. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[1:54001] ...
\n\n

Which of the two above do you take? If (2), your neural net will never confuse a seizure for a non-seizure for those 10 samples. But it'll also be clueless about any other sample. I.e., it will massively overfit, because the information it sees per iteration barely differs (1/54000 = 0.0019%) - so you're basically feeding it the same batch several times in a row. Now suppose (3):

\n\n
    \n
  3. (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[24000:81000] ...
\n\n

A lot more reasonable; now our windows have a 50% overlap, rather than 99.998%.

\n\n
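The slicing schemes above can be sketched with NumPy (window length 54000 is from the example; the ~50% overlap uses a stride of 27000):

```python
import numpy as np

def make_windows(sample, window, stride):
    # slice one long sequence into (possibly overlapping) windows
    starts = range(0, len(sample) - window + 1, stride)
    return np.stack([sample[s:s + window] for s in starts])

seq = np.arange(240000)
no_overlap   = make_windows(seq, window=54000, stride=54000)  # disjoint slices
half_overlap = make_windows(seq, window=54000, stride=27000)  # ~50% overlap
print(no_overlap.shape, half_overlap.shape)  # -> (4, 54000) (7, 54000)
```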
\n\n

Prediction: overlap bad?

\n\n

If you are doing a one-step prediction, the information landscape is now changed:

\n\n\n\n

This dramatically changes your loss function, and what is 'good practice' for minimizing it:

\n\n\n\n
\n\n

What should I do?

\n\n

First, make sure you understand this entire post, as nothing here's really \"optional.\" Then, here's the key about overlap vs no-overlap, per batch:

\n\n
    \n
  1. One sample shifted: model learns to better predict one step ahead for each starting step - meaning: (1) LSTM's robust against initial cell state; (2) LSTM predicts well for any step ahead given X steps behind
  2. Many samples, shifted in later batch: model less likely to 'memorize' train set and overfit
\n\n

Your goal: balance the two; 1's main edge over 2 is:

\n\n\n\n

Should I ever use (2) in prediction?

\n\n\n\n
\n\n

LSTM stateful: may actually be entirely useless for your problem.

\n\n

Stateful is used when LSTM can't process the entire sequence at once, so it's \"split up\" - or when different gradients are desired from backpropagation. With the former, the idea is that the LSTM considers the earlier part of the sequence in its assessment of the later part:

\n\n\n\n

In other words: do not overlap in stateful in separate batches. Same batch is OK, as again, independence - no \"state\" between the samples.

\n\n

When to use stateful: when LSTM benefits from considering previous batch in its assessment of the next. This can include one-step predictions, but only if you can't feed the entire seq at once:

\n\n\n\n
\n\n

When and how does LSTM \"pass states\" in stateful?

\n\n\n\n

Per above, you cannot do this:

\n\n
# sampleNM = sample N at timestep(s) M\nbatch1 = [sample10, sample20, sample30, sample40]\nbatch2 = [sample21, sample41, sample11, sample31]\n
\n\n

This implies 21 causally follows 10 - and will wreck training. Instead do:

\n\n
batch1 = [sample10, sample20, sample30, sample40]\nbatch2 = [sample11, sample21, sample31, sample41]\n
\n\n
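A NumPy sketch of that correct construction - each sequence keeps the same slot across batches, so the state carried from batch 1 at index i really belongs to sequence i (the shapes are toy-sized):

```python
import numpy as np

n_sequences, seq_len, window = 4, 20, 10
sequences = np.random.rand(n_sequences, seq_len, 1)  # 4 independent sequences

# batch k holds window k of EVERY sequence, sample order preserved
batch1 = sequences[:, :window]            # timesteps 0..9  of each sequence
batch2 = sequences[:, window:2 * window]  # timesteps 10..19 of each sequence

# sequence 2 stays at slot 2 in both batches:
assert np.allclose(batch1[2], sequences[2, :window])
assert np.allclose(batch2[2], sequences[2, window:2 * window])
print(batch1.shape, batch2.shape)  # -> (4, 10, 1) (4, 10, 1)
```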
\n\n

Batch vs. sample: additional info

\n\n

A \"batch\" is a set of samples - 1 or greater (assume always the latter for this answer). Three approaches to iterate over data: Batch Gradient Descent (entire dataset at once), Stochastic GD (one sample at a time), and Minibatch GD (in-between). (In practice, however, we call the last SGD also and only distinguish vs BGD - assume it so for this answer.) Differences:

\n\n\n\n
\n\n

BONUS DIAGRAMS:

\n\n

\n\n
\n\n

\n", + "system": "" + }, + { + "instruction": "Buffered data was truncated after reaching the output size limit", + "input": "", + "output": "

Even if RAM, GPU and disk on Colab are free, this error still occurs because Colab has a limited amount of memory for displaying the output of a cell. Assuming that limit is around 2 MB to 5 MB, when we run many epochs (148+) during training the output tends to fill it, and is then truncated because no memory is left to display the buffered epochs. However, the machine keeps running in the background and the output is still processed; it is simply not displayed once the buffer limit is reached. You will still get your desired output.

\n\n

One solution is not to use verbose=1 (use 0 instead).

\n", + "system": "" + }, + { + "instruction": "How to get word vectors from Keras Embedding Layer", + "input": "", + "output": "

You can get the word embeddings by using the get_weights() method of the embedding layer (i.e. essentially the weights of an embedding layer are the embedding vectors):

\n\n
# if you have access to the embedding layer explicitly\nembeddings = embedding_layer.get_weights()[0]\n\n# or access the embedding layer through the constructed model \n# first `0` refers to the position of embedding layer in the `model`\nembeddings = model.layers[0].get_weights()[0]\n\n# `embeddings` has a shape of (num_vocab, embedding_dim) \n\n# `word_to_index` is a mapping (i.e. dict) from words to their index, e.g. `love`: 69\nwords_embeddings = {w:embeddings[idx] for w, idx in word_to_index.items()}\n\n# now you can use it like this for example\nprint(words_embeddings['love'])  # possible output: [0.21, 0.56, ..., 0.65, 0.10]\n
\n", + "system": "" + }, + { + "instruction": "Keras: release memory after finish training process", + "input": "", + "output": "

Releasing RAM memory

\n\n

For releasing the RAM memory, just do del Variables as suggested by @nuric in the comment.

\n\n

Releasing GPU memory

\n\n

This is a little bit trickier than releasing the RAM memory. Some people will suggest the following code (assuming you are using Keras):

\n\n
from keras import backend as K\nK.clear_session()\n
\n\n

However, the above code doesn't work for all people. (Even when you try del Models, it is still not going to work)

\n\n

If the above method doesn't work for you, then try the following (You need to install the numba library first):

\n\n
from numba import cuda\ncuda.select_device(0)\ncuda.close()\n
\n\n

The reason behind it is: Tensorflow is just allocating memory to the GPU, while CUDA is responsible for managing the GPU memory.

\n\n

If CUDA somehow refuses to release the GPU memory after you have cleared all the graph with K.clear_session(), then you can use the cuda library to have a direct control on CUDA to clear up GPU memory.

\n", + "system": "" + }, + { + "instruction": "Convolution2D + LSTM versus ConvLSTM2D", + "input": "", + "output": "\n\n

They are not exactly the same, here is why:

\n\n

1. Use Convolution2D layers and LSTM layers

\n\n

As is well known, Convolution2D serves well for capturing image or spatial features, whilst LSTMs are used to detect correlations over time. However, by stacking these kinds of layers, the correlation between space and time features may not be captured properly.

\n\n

2. Use ConvLSTM2D

\n\n

To solve this, Xingjian Shi et al. proposed a network structure able to capture spatiotemporal correlations, namely ConvLSTM. In Keras, this is reflected in the ConvLSTM2D class, which computes convolutional operations in both the input and the recurrent transformations.

\n\n

Keras code

\n\n

To illustrate this, you can see the LSTM code here; if you go to the call method of LSTMCell, you'd only see:

\n\n
    x_i = K.dot(inputs_i, self.kernel_i)\n    x_f = K.dot(inputs_f, self.kernel_f)\n    x_c = K.dot(inputs_c, self.kernel_c)\n    x_o = K.dot(inputs_o, self.kernel_o)\n
\n\n

Instead, the ConvLSTM2DCell class calls:

\n\n
    x_i = self.input_conv(inputs_i, self.kernel_i, self.bias_i, padding=self.padding)\n    x_f = self.input_conv(inputs_f, self.kernel_f, self.bias_f, padding=self.padding)\n    x_c = self.input_conv(inputs_c, self.kernel_c, self.bias_c, padding=self.padding)\n    x_o = self.input_conv(inputs_o, self.kernel_o, self.bias_o, padding=self.padding)\n    h_i = self.recurrent_conv(h_tm1_i, self.recurrent_kernel_i)\n    h_f = self.recurrent_conv(h_tm1_f, self.recurrent_kernel_f)\n    h_c = self.recurrent_conv(h_tm1_c, self.recurrent_kernel_c)\n    h_o = self.recurrent_conv(h_tm1_o, self.recurrent_kernel_o)\n
\n\n

Where:

\n\n
def input_conv(self, x, w, b=None, padding='valid'):\n    conv_out = K.conv2d(x, w, strides=self.strides,\n                        padding=padding,\n                        data_format=self.data_format,\n                        dilation_rate=self.dilation_rate)\n    if b is not None:\n        conv_out = K.bias_add(conv_out, b,\n                              data_format=self.data_format)\n    return conv_out\n\ndef recurrent_conv(self, x, w):\n    conv_out = K.conv2d(x, w, strides=(1, 1),\n                        padding='same',\n                        data_format=self.data_format)\n    return conv_out\n
\n\n

In LSTM, the equivalent for h_x (recurrent transformations) would be:

\n\n
K.dot(h_tm1_x, self.recurrent_kernel_x)\n
\n\n

Instead of ConvLSTM2D's:

\n\n
self.recurrent_conv(h_tm1_x, self.recurrent_kernel_x)\n
\n\n

These kinds of transformations could not be computed with stacked Conv2D and LSTM layers.

\n", + "system": "" + }, + { + "instruction": "Backward propagation in Keras?", + "input": "", + "output": "

You simply don't. (Late edit: except when you are creating custom training loops, only for advanced uses)

\n

Keras does backpropagation automatically. There's absolutely nothing you need to do for that except for training the model with one of the fit methods.

\n

You just need to take care of a few things:

\n\n

This is all you need to have the automatic backpropagation working properly.

\n

If your layers don't have trainable weights, you don't need custom layers, create Lambda layers instead (only calculations, no trainable weights).

\n", + "system": "" + }, + { + "instruction": "float16 vs float32 for convolutional neural networks", + "input": "", + "output": "

Surprisingly, it's totally OK to use 16 bits, not just for fun but in production as well. For example, in this video Jeff Dean talks about 16-bit calculations at Google, around 52:00. A quote from the slides:

\n\n
\n

Neural net training very tolerant of reduced precision

\n
\n\n

Since GPU memory is the main bottleneck in ML computation, there has been a lot of research on precision reduction. E.g.

\n\n\n\n

Of course, I can imagine some networks may require high precision for training, but I would recommend at least to try 16 bits for training a big network and switch to 32 bits if it proves to work worse.
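A quick NumPy illustration of the trade-off: float16 has a machine epsilon of about 1e-3, so updates smaller than that are silently lost, which is exactly the kind of effect that can make some networks need 32 bits:

```python
import numpy as np

print(np.finfo(np.float16).eps)            # ~0.000977 (2**-10)
print(np.float16(1.0) + np.float16(1e-4))  # -> 1.0, the small update is lost
print(np.float32(1.0) + np.float32(1e-4))  # the update survives in float32
```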

\n", + "system": "" + }, + { + "instruction": "Keras: difference of InputLayer and Input", + "input": "", + "output": "\n\n

You can only call layers passing tensors to them.

\n\n

The idea is:

\n\n
outputTensor = SomeLayer(inputTensor)\n
\n\n

So, only Input can be passed because it's a tensor.

\n\n

Honestly, I have no idea about the reason for the existence of InputLayer. Maybe it's supposed to be used internally. I never used it, and it seems I'll never need it.

\n", + "system": "" + }, + { + "instruction": "Keras retrieve value of node before activation function", + "input": "", + "output": "

Since you're using get_value(), I'll assume that you're using Theano backend. To get the value of the node before the sigmoid activation, you can traverse the computation graph.

\n\n
\n

The graph can be traversed starting from outputs (the result of some computation) down to its inputs using the owner field.

\n
\n\n

In your case, what you want is the input x of the sigmoid activation op. The output of the sigmoid op is model.output. Putting these together, the variable x is model.output.owner.inputs[0].

\n\n

If you print out this value, you'll see Elemwise{add,no_inplace}.0, which is an element-wise addition op. It can be verified from the source code of Dense.call():

\n\n
def call(self, inputs):\n    output = K.dot(inputs, self.kernel)\n    if self.use_bias:\n        output = K.bias_add(output, self.bias)\n    if self.activation is not None:\n        output = self.activation(output)\n    return output\n
\n\n

The input to the activation function is the output of K.bias_add().

\n\n

With a small modification of your code, you can get the value of the node before activation:

\n\n
x = model.output.owner.inputs[0]\nfunc = K.function([model.input] + [K.learning_phase()], [x])\nprint func([test_input, 0.])\n
\n\n

For anyone using TensorFlow backend: use x = model.output.op.inputs[0] instead.

\n", + "system": "" + }, + { + "instruction": "Keras: how to get tensor dimensions inside custom loss?", + "input": "", + "output": "

Two things here:

\n\n
    \n
  1. If you want to get a tensor shape you should use the int_shape function from keras.backend.
  2. The first dimension is set to be the batch dimension, so int_shape(y_true)[0] will return the batch size. You should use int_shape(y_true)[1].
\n", + "system": "" + }, + { + "instruction": "What is the relation between validation_data and validation_split in Keras' fit function?", + "input": "", + "output": "

No, everything is correct. One potential reason behind this separation is that sometimes people have training and validation data separately (as in many academic datasets), and sometimes you have all the data and can split it any way you want.

\n", + "system": "" + }, + { + "instruction": "How to have parallel convolutional layers in keras?", + "input": "", + "output": "

Here is an example of designing a network of parallel convolution and sub sampling layers in keras version 2. I hope this resolves your problem.

\n\n
rows, cols = 100, 15\ndef create_convnet(img_path='network_image.png'):\n    input_shape = Input(shape=(rows, cols, 1))\n\n    tower_1 = Conv2D(20, (100, 5), padding='same', activation='relu')(input_shape)\n    tower_1 = MaxPooling2D((1, 11), strides=(1, 1), padding='same')(tower_1)\n\n    tower_2 = Conv2D(20, (100, 7), padding='same', activation='relu')(input_shape)\n    tower_2 = MaxPooling2D((1, 9), strides=(1, 1), padding='same')(tower_2)\n\n    tower_3 = Conv2D(20, (100, 10), padding='same', activation='relu')(input_shape)\n    tower_3 = MaxPooling2D((1, 6), strides=(1, 1), padding='same')(tower_3)\n\n    merged = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)\n    merged = Flatten()(merged)\n\n    out = Dense(200, activation='relu')(merged)\n    out = Dense(num_classes, activation='softmax')(out)\n\n    model = Model(input_shape, out)\n    plot_model(model, to_file=img_path)\n    return model\n
\n\n

The image of this network will look like \n\"enter

\n", + "system": "" + }, + { + "instruction": "Is Keras thread safe?", + "input": "", + "output": "

Yes, Keras is thread safe, if you pay a little attention to it.

\n\n

In fact, in reinforcement learning there is an algorithm called Asynchronous Advantage Actor-Critic (A3C), where each agent relies on the same neural network to tell it what it should do in a given state. In other words, each thread calls model.predict concurrently, as in your problem. An example Keras implementation of it is here.

\n\n

You should, however, pay extra attention to this line if you look into the code:\nmodel._make_predict_function() # have to initialize before threading

\n\n

This is never mentioned in the Keras docs, but it's necessary to make it work concurrently. In short, _make_predict_function is a function that compiles the predict function. In a multi-threaded setting, you have to manually call this function to compile predict in advance; otherwise the predict function will not be compiled until you run it the first time, which will be problematic when many threads call it at once. You can see a detailed explanation here.

\n\n
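To see why the pre-compilation matters, here is a toy pure-Python analogy (LazyModel is purely illustrative, not Keras code): a function that is built lazily on first use is a race hazard, so you build it once before spawning threads:

```python
import threading

class LazyModel:
    # toy stand-in: the real issue is Keras compiling predict on first call
    def __init__(self):
        self._predict_fn = None
        self._lock = threading.Lock()

    def _make_predict_function(self):
        with self._lock:              # idempotent, thread-safe compile step
            if self._predict_fn is None:
                self._predict_fn = lambda x: x * 2

    def predict(self, x):
        if self._predict_fn is None:  # many threads could race into this branch
            self._make_predict_function()
        return self._predict_fn(x)

model = LazyModel()
model._make_predict_function()  # initialize BEFORE threading, as advised above

results = []
threads = [threading.Thread(target=lambda: results.append(model.predict(21)))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # -> [42, 42, 42, 42]
```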

I have not met any other issues with multi threading in Keras till now.

\n", + "system": "" + }, + { + "instruction": "How to pass a parameter to Scikit-Learn Keras model function", + "input": "", + "output": "

You can add an input_dim keyword argument to the KerasClassifier constructor:

\n\n
model = KerasClassifier(build_fn=create_model, input_dim=5, nb_epoch=150, batch_size=10, verbose=0)\n
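For context on why this works: KerasClassifier stores any extra constructor kwargs in sk_params and forwards the ones that match build_fn's signature when it builds the model. A rough pure-Python imitation of that mechanism (TinyWrapper and create_model are illustrative only, not the real scikit-learn wrapper):

```python
import inspect

def create_model(input_dim=10):
    # stand-in for a build_fn; a real one would return a compiled Keras model
    return {'input_dim': input_dim}

class TinyWrapper:
    # minimal imitation of how the scikit-learn wrapper forwards kwargs
    def __init__(self, build_fn, **sk_params):
        self.build_fn = build_fn
        self.sk_params = sk_params

    def build(self):
        accepted = inspect.signature(self.build_fn).parameters
        kwargs = {k: v for k, v in self.sk_params.items() if k in accepted}
        return self.build_fn(**kwargs)

clf = TinyWrapper(create_model, input_dim=5, nb_epoch=150)  # nb_epoch not in build_fn
print(clf.build())  # -> {'input_dim': 5}
```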
\n", + "system": "" + }, + { + "instruction": "input dimensions to a one dimensional convolutional network in keras", + "input": "", + "output": "

The reason why it looks like this is that the Keras designers intended the 1-dimensional convolutional framework to be interpreted as a framework for dealing with sequences. To fully understand the difference - try to imagine that you have a sequence of multiple feature vectors. Then your output will be at least two dimensional - where the first dimension is connected with time and the other dimensions are connected with features. The 1-dimensional convolutional framework was designed to, in some way, emphasize this time dimension and try to find recurring patterns in the data - rather than performing a classical multidimensional convolutional transformation.

\n\n

In your case you must simply reshape your data to have shape (dataset_size, 101, 1) - because you have only one feature. This can easily be done using the numpy.reshape function. To understand what the new shape means - you must understand that you are doing the convolution over time - so you change the temporal structure of your data - which leads to a new time-connected structure. In order to get your data into a format suitable for dense / static layers, use the keras.layers.Flatten layer - the same as in the classic convolutional case.

\n\n

UPDATE: As I mentioned before - the first dimension of the input is connected with time. So the difference between (1, 101) and (101, 1) is that in the first case you have one time step with 101 features and in the second - 101 timesteps with 1 feature. The problem which you mentioned after your first change has its origin in applying pooling with size 2 to such an input. Having only one timestep - you cannot pool any value over a time window of size 2 - simply because there are not enough timesteps to do that.
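The reshape itself is a one-liner with NumPy (the dataset size of 500 is just for illustration):

```python
import numpy as np

data = np.random.rand(500, 101)        # 500 samples, 101 values each
data_3d = data.reshape(500, 101, 1)    # (dataset_size, timesteps, features)

# the two shapes discussed above, for a single sample:
one_step   = data[0].reshape(1, 101)   # 1 timestep with 101 features
many_steps = data[0].reshape(101, 1)   # 101 timesteps with 1 feature
print(data_3d.shape, one_step.shape, many_steps.shape)
```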

\n", + "system": "" + }, + { + "instruction": "What is the parameter "max_q_size" used for in "model.fit_generator"?", + "input": "", + "output": "

This simply defines the maximum size of the internal training queue, which is used to \"precache\" your samples from the generator. It is used during the generation of the queues:

\n\n
def generator_queue(generator, max_q_size=10,\n                    wait_time=0.05, nb_worker=1):\n    '''Builds a threading queue out of a data generator.\n    Used in `fit_generator`, `evaluate_generator`, `predict_generator`.\n    '''\n    q = queue.Queue()\n    _stop = threading.Event()\n\n    def data_generator_task():\n        while not _stop.is_set():\n            try:\n                if q.qsize() < max_q_size:\n                    try:\n                        generator_output = next(generator)\n                    except ValueError:\n                        continue\n                    q.put(generator_output)\n                else:\n                    time.sleep(wait_time)\n            except Exception:\n                _stop.set()\n                raise\n\n    generator_threads = [threading.Thread(target=data_generator_task)\n                         for _ in range(nb_worker)]\n\n    for thread in generator_threads:\n        thread.daemon = True\n        thread.start()\n\n    return q, _stop\n
\n\n

In other words you have a thread filling the queue up to given, maximum capacity directly from your generator, while (for example) training routine consumes its elements (and sometimes waits for the completion)

\n\n
 while samples_seen < samples_per_epoch:\n     generator_output = None\n     while not _stop.is_set():\n         if not data_gen_queue.empty():\n             generator_output = data_gen_queue.get()\n             break\n         else:\n             time.sleep(wait_time)\n
\n\n

And why a default of 10? No particular reason - like most defaults, it simply makes sense, but you could use different values too.

\n\n

A construction like this suggests that the authors had expensive data generators in mind, which might take time to execute. For example, consider downloading data over a network in the generator call - then it makes sense to precache the next few batches, and download the following ones in parallel, for the sake of efficiency and robustness to network errors etc.

\n", + "system": "" + }, + { + "instruction": "How can I stop Keras from printing after calling model.predict", + "input": "", + "output": "

As mentioned by Gerry P, to prevent Keras from printing the output of model.predict(), set the verbose argument to 0 as follows:

\n
agent.model.predict(np.array([0,0,0,0]).reshape(1,4),verbose = 0)\n
\n

Reference: Keras documentation.

\n", + "system": "" + }, + { + "instruction": "ValueError: Unknown layer: Functional", + "input": "", + "output": "

The solution to this error is simple: the cause is that you trained the model on one set of versions, e.g. TensorFlow '2.3.0' and Keras '2.4.3' (on Colab or locally), and you are now loading the saved model (.h5) with different versions of Keras and TensorFlow. That mismatch produces the error. The solution is either to re-train the model with the upgraded versions, or to downgrade your TensorFlow and Keras to the same versions the model was trained on.
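A quick way to diagnose the mismatch is to print the installed versions in both environments and compare them (a small stdlib-only sketch; the package names checked are just the two relevant ones here):

```python
from importlib import metadata

def package_versions(packages=('tensorflow', 'keras')):
    # Returns {name: version-or-None}; compare the result between the
    # environment that trained the model and the one loading it.
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(package_versions())
```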

\n", + "system": "" + }, + { + "instruction": "Saving Keras models with Custom Layers", + "input": "", + "output": "

Correction number 1 is to pass custom_objects while loading the saved model, i.e., replace the code

\n\n
new_model = tf.keras.models.load_model('model.h5') \n
\n\n

with

\n\n
new_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\n
\n\n

Since we are using a custom layer to build the model, we should pass it via custom_objects when loading the model.

\n\n

Correction number 2 is to add **kwargs to the __init__ function of the custom layer, like

\n\n
def __init__(self, k, name=None, **kwargs):\n        super(CustomLayer, self).__init__(name=name)\n        self.k = k\n        super(CustomLayer, self).__init__(**kwargs)\n
\n\n

Complete working code is shown below:

\n\n
import tensorflow as tf\n\nclass CustomLayer(tf.keras.layers.Layer):\n    def __init__(self, k, name=None, **kwargs):\n        super(CustomLayer, self).__init__(name=name)\n        self.k = k\n        super(CustomLayer, self).__init__(**kwargs)\n\n\n    def get_config(self):\n        config = super(CustomLayer, self).get_config()\n        config.update({\"k\": self.k})\n        return config\n\n    def call(self, input):\n        return tf.multiply(input, 2)\n\nmodel = tf.keras.models.Sequential([\n    tf.keras.Input(name='input_layer', shape=(10,)),\n    CustomLayer(10, name='custom_layer'),\n    tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer')\n])\ntf.keras.models.save_model(model, 'model.h5')\nnew_model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\n\nprint(new_model.summary())\n
\n\n

Output of the above code is shown below:

\n\n
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ncustom_layer_1 (CustomLayer) (None, 10)                0         \n_________________________________________________________________\noutput_layer (Dense)         (None, 1)                 11        \n=================================================================\nTotal params: 11\nTrainable params: 11\nNon-trainable params: 0\n
\n\n

Hope this helps. Happy Learning!

\n", + "system": "" + }, + { + "instruction": "tensorflow:Can save best model only with val_acc available, skipping", + "input": "", + "output": "

I know how frustrating these things can be sometimes... but TensorFlow requires that you explicitly write out the name of the metric you want to monitor

\n

You will need to actually say 'val_accuracy'

\n
metric = 'val_accuracy'\nModelCheckpoint(filepath=r"C:\\Users\\reda.elhail\\Desktop\\checkpoints\\{}".format(Name), monitor=metric,\n                    verbose=2, save_best_only=True, mode='max')]\n
\n

Hope this helps =)

\n

*** As later noted by BlueTurtle (please give their answer a thumbs up; it is likely still below this one), you also need to use the full metric name consistently across model.compile, ModelCheckpoint, and EarlyStopping.

\n", + "system": "" + }, + { + "instruction": "Custom loss function in Keras based on the input data", + "input": "", + "output": "

I have come across 2 solutions to the question you asked.

\n
    \n
  1. You can pass your input (scalar only) as an argument to the custom loss wrapper function.
  2. \n
\n
    def custom_loss(i):\n\n        def loss(y_true, y_pred):\n            return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...\n        return loss\n\n    def baseline_model():\n        # create model\n        i = Input(shape=(5,))\n        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)\n        o = Dense(1, kernel_initializer='normal', activation='linear')(x)\n        model = Model(i, o)\n        model.compile(loss=custom_loss(i), optimizer=Adam(lr=0.0005))\n        return model\n
\n

This solution is also mentioned in the accepted answer here

\n
    \n
  1. You can pad your label with extra data columns from input and write a custom loss. This is helpful if you just want one/few feature column(s) from your input.
  2. \n
\n
    def custom_loss(data, y_pred):\n\n        y_true = data[:, 0]\n        i = data[:, 1]\n        return K.mean(K.square(y_pred - y_true), axis=-1) + something with i...\n\n\n    def baseline_model():\n        # create model\n        i = Input(shape=(5,))\n        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)\n        o = Dense(1, kernel_initializer='normal', activation='linear')(x)\n        model = Model(i, o)\n        model.compile(loss=custom_loss, optimizer=Adam(lr=0.0005))\n        return model\n\n\n    model.fit(X, np.append(Y_true, X[:, 0], axis =1), batch_size = batch_size, epochs=90, shuffle=True, verbose=1)\n
\n

This solution can be found also here in this thread.

\n

I have only used the 2nd method when I had to use input feature columns in the loss. The first method can only be used with scalar arguments, as mentioned in the comments.

\n", + "system": "" + }, + { + "instruction": "Read only mode in keras", + "input": "", + "output": "

I had a similar issue and solved it this way:

\n\n

store the graph\\architecture in JSON format and weights in h5 format

\n\n
import json\n\n# lets assume `model` is main model \nmodel_json = model.to_json()\nwith open(\"model_in_json.json\", \"w\") as json_file:\n    json.dump(model_json, json_file)\n\nmodel.save_weights(\"model_weights.h5\")\n
\n\n

then you need to load the model first to recreate the graph\\architecture, and load the weights into the model

\n\n
from keras.models import load_model\nfrom keras.models import model_from_json\nimport json\n\nwith open('model_in_json.json','r') as f:\n    model_json = json.load(f)\n\nmodel = model_from_json(model_json)\nmodel.load_weights('model_weights.h5')\n
\n", + "system": "" + }, + { + "instruction": "How to iterate through tensors in custom loss function?", + "input": "", + "output": "

As usual, don't loop. Looping has severe performance drawbacks and also invites bugs. Use only backend functions unless it is totally unavoidable (it usually isn't).

\n\n
\n\n

Solution for example 3:

\n\n

So, there is a very weird thing there...

\n\n
\n

Do you really want to simply ignore half of your model's predictions? (Example 3)

\n
\n\n

Assuming this is true, just duplicate your tensor in the last dimension, flatten and discard half of it. You have the exact effect you want.

\n\n
def custom_loss(true, pred):\n    n = K.shape(pred)[0:1]\n\n    pred = K.concatenate([pred]*2, axis=-1) #duplicate in the last axis\n    pred = K.flatten(pred)                  #flatten \n    pred = K.slice(pred,                    #take only half (= n samples)\n                   K.constant([0], dtype=\"int32\"), \n                   n) \n\n    return K.abs(true - pred)\n
\n\n

Solution for your loss function:

\n\n

If you have sorted times from greater to lower, just do a cumulative sum.

\n\n
\n

Warning: If you have one time per sample, you cannot train with mini-batches!!!
\n batch_size = len(labels)

\n
\n\n

It makes sense to have time in an additional dimension (many times per sample), as is done in recurrent and 1D conv networks. Anyway, considering your example as expressed, the shape of yTime is (samples_equal_times,):

\n\n
def neg_log_likelihood(yTrue,yPred):\n    yStatus = yTrue[:,0]\n    yTime = yTrue[:,1]    \n    n = K.shape(yTrue)[0]    \n\n\n    #sort the times and everything else from greater to lower:\n    #obs, you can have the data sorted already and avoid doing it here for performance\n\n    #important, yTime will be sorted in the last dimension, make sure its (None,) in this case\n    # or that it's (None, time_length) in the case of many times per sample\n    sortedTime, sortedIndices = tf.math.top_k(yTime, n, True)    \n    sortedStatus = K.gather(yStatus, sortedIndices)\n    sortedPreds = K.gather(yPred, sortedIndices)\n\n    #do the calculations\n    exp = K.exp(sortedPreds)\n    sums = K.cumsum(exp)  #this will have the sum for j >= i in the loop\n    logsums = K.log(sums)\n\n    return K.sum(sortedStatus * sortedPreds - logsums)\n
\n", + "system": "" + }, + { + "instruction": "How to pickle Keras model?", + "input": "", + "output": "

As of now, Keras models are pickle-able. But we still recommend using model.save() to save the model to disk.

\n", + "system": "" + }, + { + "instruction": "How to fix AttributeError: module 'numpy' has no attribute 'square'", + "input": "", + "output": "

I removed numpy.py then updated my numpy and it worked!
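The underlying cause is usually that a file named numpy.py in your working directory shadows the real package. A self-contained demonstration (a temporary directory stands in for your project folder):

```python
import pathlib
import sys
import tempfile

# Create a directory containing an empty file named "numpy.py".
shadow_dir = tempfile.mkdtemp()
pathlib.Path(shadow_dir, 'numpy.py').write_text('')

# Putting that directory first on sys.path mimics running a script
# from a folder that contains your own numpy.py.
sys.path.insert(0, shadow_dir)
for name in [m for m in sys.modules if m == 'numpy' or m.startswith('numpy.')]:
    del sys.modules[name]  # force a fresh import

import numpy  # imports the empty shadow file, not the real package

shadowed_has_square = hasattr(numpy, 'square')
print(shadowed_has_square)  # False: hence the AttributeError

# Clean up so later imports find the real numpy again
sys.path.remove(shadow_dir)
del sys.modules['numpy']
```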

\n", + "system": "" + }, + { + "instruction": "Cross Validation in Keras", + "input": "", + "output": "
\n

If my goal is to fine-tune the network for the entire dataset

\n
\n\n

It is not clear what you mean by \"fine-tune\", or even what exactly is your purpose for performing cross-validation (CV); in general, CV serves one of the following purposes:

\n\n\n\n

Since you don't define any search grid for hyperparameter selection in your code, it would seem that you are using CV in order to get the expected performance of your model (error, accuracy etc).

\n\n

Anyway, for whatever reason you are using CV, the first snippet is the correct one; your second snippet

\n\n
model = None\nmodel = create_model()\nfor train, test in kFold.split(X, Y):\n    train_evaluate(model, X[train], Y[train], X[test], Y[test])\n
\n\n

will train your model sequentially over the different partitions (i.e. train on partition #1, then continue training on partition #2 etc), which essentially is just training on your whole data set, and it is certainly not cross-validation...

\n\n

That said, a final step after the CV which is often only implied (and frequently missed by beginners) is that, after you are satisfied with your chosen hyperparameters and/or model performance as given by your CV procedure, you go back and train again your model, this time with the entire available data.
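To see why the first snippet is correct, it helps to look at what the fold indices actually are. Here is a numpy-only sketch (kfold_indices is a hypothetical stand-in for sklearn's KFold.split), with the model re-created inside the loop as in the first snippet:

```python
import numpy as np

def kfold_indices(n_samples, k):
    # Yields (train, test) index arrays; each sample serves exactly
    # once as test data across the k folds.
    indices = np.arange(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

all_test = []
for train, test in kfold_indices(20, 5):
    # model = create_model()  # a FRESH model per fold - not reused!
    # train_evaluate(model, X[train], Y[train], X[test], Y[test])
    all_test.append(test)

covered = np.sort(np.concatenate(all_test))
print(covered.tolist() == list(range(20)))  # True: every sample tested once
```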

\n", + "system": "" + }, + { + "instruction": "Difference between Conv2D and Convolution2D in Keras", + "input": "", + "output": "

From the keras source code, they're the same:
\n(The source code changes from time to time and the line number in the link above might eventually be wrong)

\n\n
# Aliases\n\nConvolution1D = Conv1D\nConvolution2D = Conv2D\nConvolution3D = Conv3D\nSeparableConvolution2D = SeparableConv2D\nConvolution2DTranspose = Conv2DTranspose\nDeconvolution2D = Deconv2D = Conv2DTranspose\nDeconvolution3D = Deconv3D = Conv3DTranspose\n
\n", + "system": "" + }, + { + "instruction": "Preload whole dataset on gpu for training Keras model", + "input": "", + "output": "

You don't have to load all the data at once. You can ingest it piece by piece using the Dataset class.

\n

TensorFlow can take care of loading more data while your GPU is crunching numbers. You can follow the steps below.

\n
    \n
  1. Convert your dataset into a TFRecord dataset and save it to your disk.
  2. \n
  3. Load this dataset using the TFRecordDataset class
  4. \n
  5. Ingest it into your Keras model.
  6. \n
\n

You can check the example listed here.
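A minimal sketch of steps 1 and 2 (the toy data and the feature names x/y are placeholders for illustration):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Step 1: serialize toy (x, y) pairs into a TFRecord file on disk.
path = os.path.join(tempfile.mkdtemp(), 'data.tfrecord')
x = np.random.rand(100, 4).astype(np.float32)
y = np.random.randint(0, 2, size=100).astype(np.int64)

with tf.io.TFRecordWriter(path) as writer:
    for xi, yi in zip(x, y):
        example = tf.train.Example(features=tf.train.Features(feature={
            'x': tf.train.Feature(float_list=tf.train.FloatList(value=xi)),
            'y': tf.train.Feature(int64_list=tf.train.Int64List(value=[yi])),
        }))
        writer.write(example.SerializeToString())

# Step 2: load it back as a tf.data pipeline that prefetches batches
# while the GPU is busy with the previous step.
def parse(record):
    spec = {'x': tf.io.FixedLenFeature([4], tf.float32),
            'y': tf.io.FixedLenFeature([], tf.int64)}
    parsed = tf.io.parse_single_example(record, spec)
    return parsed['x'], parsed['y']

dataset = tf.data.TFRecordDataset(path).map(parse).batch(32).prefetch(1)

# Step 3: a dataset like this can be passed directly to model.fit(dataset).
```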

\n

Hope this is helpful.

\n", + "system": "" + }, + { + "instruction": "Keras custom loss function: Accessing current input pattern", + "input": "", + "output": "

You can wrap the loss function as an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to a loss function).

\n\n
def custom_loss_wrapper(input_tensor):\n    def custom_loss(y_true, y_pred):\n        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)\n    return custom_loss\n\ninput_tensor = Input(shape=(10,))\nhidden = Dense(100, activation='relu')(input_tensor)\nout = Dense(1, activation='sigmoid')(hidden)\nmodel = Model(input_tensor, out)\nmodel.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')\n
\n\n

You can verify that input_tensor and the loss value (mostly, the K.mean(input_tensor) part) will change as different X is passed to the model.

\n\n
X = np.random.rand(1000, 10)\ny = np.random.randint(2, size=1000)\nmodel.test_on_batch(X, y)  # => 1.1974642\n\nX *= 1000\nmodel.test_on_batch(X, y)  # => 511.15466\n
\n", + "system": "" + }, + { + "instruction": "How to use predict_generator with ImageDataGenerator?", + "input": "", + "output": "

You can change the value of batch_size in flow_from_directory from the default value (batch_size=32) to batch_size=1. Then set the steps argument of predict_generator to the total number of your test images. Something like this:

\n\n
test_datagen = ImageDataGenerator(rescale=1./255)\n\ntest_generator = test_datagen.flow_from_directory(\n        test_dir,\n        target_size=(200, 200),\n        color_mode=\"rgb\",\n        shuffle = False,\n        class_mode='categorical',\n        batch_size=1)\n\nfilenames = test_generator.filenames\nnb_samples = len(filenames)\n\npredict = model.predict_generator(test_generator,steps = nb_samples)\n
\n", + "system": "" + }, + { + "instruction": "How to use ModelCheckpoint with custom metrics in Keras?", + "input": "", + "output": "

Yes, it is possible.

\n\n

Define the custom metrics as described in the documentation:

\n\n
import keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n    return K.mean(y_pred)\n\nmodel.compile(optimizer='rmsprop',\n              loss='binary_crossentropy',\n              metrics=['accuracy', mean_pred])\n
\n\n

To check all available metrics:

\n\n
print(model.metrics_names)\n> ['loss', 'acc', 'mean_pred']\n
\n\n

Pass the metric name to ModelCheckpoint through monitor. If you want the metric calculated on the validation data, use the val_ prefix.

\n\n
ModelCheckpoint(weights.{epoch:02d}-{val_mean_pred:.2f}.hdf5,\n                monitor='val_mean_pred',\n                save_best_only=True,\n                save_weights_only=True,\n                mode='max',\n                period=1)\n
\n\n

Don't use mode='auto' for custom metrics. Understand why here.

\n\n
\n\n

Why am I answering my own question? Check this.

\n", + "system": "" + }, + { + "instruction": "Keras RNN with LSTM cells for predicting multiple output time series based on multiple intput time series", + "input": "", + "output": "

Initial note. If the time series were short (for example T = 30), we wouldn't need a stateful LSTM, and a classic LSTM would work well.\nIn the OP's question, the time series length is T = 3000, so learning can be very slow with a classic LSTM. Learning is improved by cutting the time series into pieces and using a stateful LSTM.

\n\n

Stateful mode with N=batch_size.\nStateful models are tricky with Keras, because you need to be careful about how you cut the time series and select the batch size. In the OP's question, the sample size is N=100. Since we can accept training the model with a batch of one hundred (it is not a large number), we will select batch_size=100.

\n\n

Selecting batch_size=N simplifies the training because you do not need to reset states inside epochs (so no need to write a callback on_batch_begin).

\n\n

There remains the question of how to cut the time series. Cutting is a little technical, so I wrote a wrapper function that works in all cases.

\n\n
def stateful_cut(arr, batch_size, T_after_cut):\n    if len(arr.shape) != 3:\n        # N: Independent sample size,\n        # T: Time length,\n        # m: Dimension\n        print(\"ERROR: please format arr as a (N, T, m) array.\")\n\n    N = arr.shape[0]\n    T = arr.shape[1]\n\n    # We need T_after_cut * nb_cuts = T\n    nb_cuts = int(T / T_after_cut)\n    if nb_cuts * T_after_cut != T:\n        print(\"ERROR: T_after_cut must divide T\")\n\n    # We need batch_size * nb_reset = N\n    # If nb_reset = 1, we only reset after the whole epoch, so no need to reset\n    nb_reset = int(N / batch_size)\n    if nb_reset * batch_size != N:\n        print(\"ERROR: batch_size must divide N\")\n\n    # Cutting (technical)\n    cut1 = np.split(arr, nb_reset, axis=0)\n    cut2 = [np.split(x, nb_cuts, axis=1) for x in cut1]\n    cut3 = [np.concatenate(x) for x in cut2]\n    cut4 = np.concatenate(cut3)\n    return(cut4)\n
\n\n

From now on, it becomes easy to train the model. Since the OP's example is very simple, we do not need additional preprocessing or regularization. I describe how to proceed step by step (for the impatient, the whole self-contained code is available at the very end of this post).

\n\n

First we load the data and reshape it with the wrapper function.

\n\n
import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, TimeDistributed\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n##\n# Data\n##\nN = X_train.shape[0] # size of samples\nT = X_train.shape[1] # length of each time series\nbatch_size = N # number of time series considered together: batch_size | N\nT_after_cut = 100 # length of each cut part of the time series: T_after_cut | T\ndim_in = X_train.shape[2] # dimension of input time series\ndim_out = y_train.shape[2] # dimension of output time series\n\ninputs, outputs, inputs_test, outputs_test = \\\n  [stateful_cut(arr, batch_size, T_after_cut) for arr in \\\n  [X_train, y_train, X_test, y_test]]\n
\n\n

Then we compile a model with 4 inputs, 3 outputs, and 1 hidden layer containing 10 nodes.

\n\n
##\n# Model\n##\nnb_units = 10\n\nmodel = Sequential()\nmodel.add(LSTM(batch_input_shape=(batch_size, None, dim_in),\n               return_sequences=True, units=nb_units, stateful=True))\nmodel.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel.compile(loss = 'mse', optimizer = 'rmsprop')\n
\n\n

We train the model without resetting states. We can do it only because we have selected batch_size = N.

\n\n
##\n# Training\n##\nepochs = 100\n\nnb_reset = int(N / batch_size)\nif nb_reset > 1:\n    print(\"ERROR: We need to reset states when batch_size < N\")\n\n# When nb_reset = 1, we do not need to reinitialize states\nhistory = model.fit(inputs, outputs, epochs = epochs, \n                    batch_size = batch_size, shuffle=False,\n                    validation_data=(inputs_test, outputs_test))\n
\n\n

We get evolution of training/test loss as follows:

\n\n

\"training

\n\n

Now, we define a 'mime model' which is stateless but contains our stateful weights. [Why like this? Predicting with a stateful model through model.predict requires a complete batch in Keras, but we may not have a complete batch to predict...]

\n\n
## Mime model which is stateless but containing stateful weights\nmodel_stateless = Sequential()\nmodel_stateless.add(LSTM(input_shape=(None, dim_in),\n               return_sequences=True, units=nb_units))\nmodel_stateless.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel_stateless.compile(loss = 'mse', optimizer = 'rmsprop')\nmodel_stateless.set_weights(model.get_weights())\n
\n\n

Finally, we can show our incredible predictions on our long time series y1, y2 and y3 (blue for true output ; orange for predicted outputs):

\n\n

For y1:\n\"Prediction

\n\n

For y2:\n\"Prediction

\n\n

For y3:\n\"Prediction

\n\n

Conclusion: It works almost perfectly, except for the first 2-3 dates, where the series is unpredictable by definition. We do not observe any burst when going from one batch to the next.

\n\n

Going further: when N is large, we would like to select batch_size | N with batch_size < N. I have written the full code in https://github.com/ahstat/deep-learning/blob/master/rnn/4_lagging_and_stateful.py (Parts C and D). This github path also shows the efficiency of a classic LSTM for short time series (Part A), and its inefficiency for long time series (Part B). I've written a blog post detailing how to use Keras for time series predictions here: https://ahstat.github.io/RNN-Keras-time-series/ .

\n\n

Complete self-contained code

\n\n
################\n# Code from OP #\n################\nimport numpy as np\ndef random_sample(len_timeseries=3000):\n    Nchoice = 600\n    x1 = np.cos(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x2 = np.cos(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x3 = np.sin(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    x4 = np.sin(np.arange(0,len_timeseries)/float(1.0 + np.random.choice(Nchoice)))\n    y1 = np.random.random(len_timeseries)\n    y2 = np.random.random(len_timeseries)\n    y3 = np.random.random(len_timeseries)\n    for t in range(3,len_timeseries):\n        ## the output time series depend on input as follows: \n        y1[t] = x1[t-2] \n        y2[t] = x2[t-1]*x3[t-2]\n        y3[t] = x4[t-3]\n    y = np.array([y1,y2,y3]).T\n    X = np.array([x1,x2,x3,x4]).T\n    return y, X\ndef generate_data(Nsequence = 1000):\n    X_train = []\n    y_train = []\n    for isequence in range(Nsequence):\n        y, X = random_sample()\n        X_train.append(X)\n        y_train.append(y)\n    return np.array(X_train),np.array(y_train)\n\nNsequence = 100\nprop = 0.5\nNtrain = int(Nsequence*prop)\nX, y = generate_data(Nsequence)\nX_train = X[:Ntrain,:,:]\nX_test  = X[Ntrain:,:,:]\ny_train = y[:Ntrain,:,:]\ny_test  = y[Ntrain:,:,:] \n\n#X.shape = (N sequence, length of time series, N input features)\n#y.shape = (N sequence, length of time series, N targets)\nprint(X.shape, y.shape)\n# (100, 3000, 4) (100, 3000, 3)\n\n####################\n# Cutting function #\n####################\ndef stateful_cut(arr, batch_size, T_after_cut):\n    if len(arr.shape) != 3:\n        # N: Independent sample size,\n        # T: Time length,\n        # m: Dimension\n        print(\"ERROR: please format arr as a (N, T, m) array.\")\n\n    N = arr.shape[0]\n    T = arr.shape[1]\n\n    # We need T_after_cut * nb_cuts = T\n    nb_cuts = int(T / T_after_cut)\n    if nb_cuts * T_after_cut != T:\n        print(\"ERROR: 
T_after_cut must divide T\")\n\n    # We need batch_size * nb_reset = N\n    # If nb_reset = 1, we only reset after the whole epoch, so no need to reset\n    nb_reset = int(N / batch_size)\n    if nb_reset * batch_size != N:\n        print(\"ERROR: batch_size must divide N\")\n\n    # Cutting (technical)\n    cut1 = np.split(arr, nb_reset, axis=0)\n    cut2 = [np.split(x, nb_cuts, axis=1) for x in cut1]\n    cut3 = [np.concatenate(x) for x in cut2]\n    cut4 = np.concatenate(cut3)\n    return(cut4)\n\n#############\n# Main code #\n#############\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, TimeDistributed\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n##\n# Data\n##\nN = X_train.shape[0] # size of samples\nT = X_train.shape[1] # length of each time series\nbatch_size = N # number of time series considered together: batch_size | N\nT_after_cut = 100 # length of each cut part of the time series: T_after_cut | T\ndim_in = X_train.shape[2] # dimension of input time series\ndim_out = y_train.shape[2] # dimension of output time series\n\ninputs, outputs, inputs_test, outputs_test = \\\n  [stateful_cut(arr, batch_size, T_after_cut) for arr in \\\n  [X_train, y_train, X_test, y_test]]\n\n##\n# Model\n##\nnb_units = 10\n\nmodel = Sequential()\nmodel.add(LSTM(batch_input_shape=(batch_size, None, dim_in),\n               return_sequences=True, units=nb_units, stateful=True))\nmodel.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel.compile(loss = 'mse', optimizer = 'rmsprop')\n\n##\n# Training\n##\nepochs = 100\n\nnb_reset = int(N / batch_size)\nif nb_reset > 1:\n    print(\"ERROR: We need to reset states when batch_size < N\")\n\n# When nb_reset = 1, we do not need to reinitialize states\nhistory = model.fit(inputs, outputs, epochs = epochs, \n                    batch_size = batch_size, shuffle=False,\n                    validation_data=(inputs_test, outputs_test))\n\ndef plotting(history):\n    
plt.plot(history.history['loss'], color = \"red\")\n    plt.plot(history.history['val_loss'], color = \"blue\")\n    red_patch = mpatches.Patch(color='red', label='Training')\n    blue_patch = mpatches.Patch(color='blue', label='Test')\n    plt.legend(handles=[red_patch, blue_patch])\n    plt.xlabel('Epochs')\n    plt.ylabel('MSE loss')\n    plt.show()\n\nplt.figure(figsize=(10,8))\nplotting(history) # Evolution of training/test loss\n\n##\n# Visual checking for a time series\n##\n## Mime model which is stateless but containing stateful weights\nmodel_stateless = Sequential()\nmodel_stateless.add(LSTM(input_shape=(None, dim_in),\n               return_sequences=True, units=nb_units))\nmodel_stateless.add(TimeDistributed(Dense(activation='linear', units=dim_out)))\nmodel_stateless.compile(loss = 'mse', optimizer = 'rmsprop')\nmodel_stateless.set_weights(model.get_weights())\n\n## Prediction of a new set\ni = 0 # time series selected (between 0 and N-1)\nx = X_train[i]\ny = y_train[i]\ny_hat = model_stateless.predict(np.array([x]))[0]\n\nfor dim in range(3): # dim = 0 for y1 ; dim = 1 for y2 ; dim = 2 for y3.\n    plt.figure(figsize=(10,8))\n    plt.plot(range(T), y[:,dim])\n    plt.plot(range(T), y_hat[:,dim])\n    plt.show()\n\n## Conclusion: works almost perfectly.\n
\n", + "system": "" + }, + { + "instruction": "Using Keras ImageDataGenerator in a regression model", + "input": "", + "output": "

With Keras 2.2.4 you can use flow_from_dataframe, which solves what you want to do, allowing you to flow images from a directory for regression problems. You should store all your images in a folder and load a dataframe containing the image IDs in one column and the regression scores (labels) in another, then set class_mode='other' in flow_from_dataframe.

\n

Here you can find an example where the images are in image_dir, the dataframe with the image IDs and the regression scores is loaded with pandas from the "train file"

\n
train_label_df = pd.read_csv(train_file, delimiter=' ', header=None, names=['id', 'score'])\n\ntrain_datagen = ImageDataGenerator(rescale = 1./255, horizontal_flip = True,\n                                   fill_mode = "nearest", zoom_range = 0.2,\n                                   width_shift_range = 0.2, height_shift_range=0.2,\n                                   rotation_range=30) \n\ntrain_generator = train_datagen.flow_from_dataframe(dataframe=train_label_df, directory=image_dir, \n                                              x_col="id", y_col="score", has_ext=True, \n                                              class_mode="other", target_size=(img_width, img_height), \n                                              batch_size=bs)\n
\n", + "system": "" + }, + { + "instruction": "Policy Gradients in Keras", + "input": "", + "output": "

TL;DR

\n\n
    \n
  1. Learn how to implement custom loss functions and gradients using Keras.backend. You will need it for more advanced algorithms and it's actually much easier once you get the hang of it
  2. \n
  3. One CartPole example of using keras.backend could be https://gist.github.com/kkweon/c8d1caabaf7b43317bc8825c226045d2 (though its backend used Tensorflow but it should be very similar if not the same)
  4. \n
\n\n

Problem

\n\n

When playing,

\n\n

the agent needs a policy, which is basically a function that maps a state to a probability distribution over actions. The agent then chooses an action according to its policy.

\n\n

i.e, policy = f(state)

\n\n

When training,

\n\n

Policy Gradient does not have a loss function. Instead, it tries to maximize the expected return of rewards. And, we need to compute the gradients of log(action_prob) * advantage

\n\n
    \n
  1. advantage is a function of rewards. \n\n
  2. \n
  3. action_prob is a function of states and action_taken. For example, we need to know which action we took so that we can update parameters to increase/decrease a probability for the action we took.\n\n
  4. \n
\n\n
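The quantity to maximize can be sketched in plain numpy (the batch values below are made up for illustration; in practice logits come from the policy network and advantages from the rewards):

```python
import numpy as np

def softmax(z):
    # Stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical batch: raw policy-network outputs, the actions that
# were actually taken, and their (pre-computed) advantages.
logits = np.array([[2.0, 0.5], [0.1, 1.2]])
actions = np.array([0, 1])
advantages = np.array([1.5, -0.3])

probs = softmax(logits)                          # pi(a|s) for each action
taken = probs[np.arange(len(actions)), actions]  # prob of the chosen action

# Policy gradient ascends the gradient of this quantity
# (equivalently, descends -log(prob) * advantage as a 'loss').
objective = np.sum(np.log(taken) * advantages)
```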

I'm assuming something like this

\n\n\n\n

Summary

\n\n

We need two functions

\n\n\n\n

You already know it's not easy to implement like typical classification problems where you can just model.compile(...) -> model.fit(X, y)

\n\n

However,

\n\n\n\n

Plus, there are many many kinds of policy gradients.

\n\n\n\n

References

\n\n\n", + "system": "" + }, + { + "instruction": "How to create a Image Dataset just like MNIST dataset?", + "input": "", + "output": "

You can either write a function that loads all your images and stacks them into a numpy array (if everything fits in RAM), or use Keras' ImageDataGenerator (https://keras.io/preprocessing/image/), which includes the function flow_from_directory. You can find an example here: https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d.
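For the first option, a load-everything-into-RAM loader can be sketched like this (the folder layout and the .npy file format are assumptions for illustration; with real PNG/JPEG images you would decode each file with e.g. PIL instead of np.load):

```python
import numpy as np
from pathlib import Path

def load_dataset(root):
    # Hypothetical layout: root/<class_name>/<sample>.npy,
    # one file per image, one sub-directory per class.
    images, labels = [], []
    classes = sorted(p.name for p in Path(root).iterdir() if p.is_dir())
    for label, cls in enumerate(classes):
        for f in sorted((Path(root) / cls).glob('*.npy')):
            images.append(np.load(f))
            labels.append(label)
    # Stack into MNIST-style arrays: (n_images, H, W) and (n_images,)
    return np.stack(images), np.array(labels)
```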

\n", + "system": "" + }, + { + "instruction": "Cannot import name 'tf_utils' when using importing keras", + "input": "", + "output": "

Seems like it was a problem with Keras 2.3.0; I installed Keras 2.1.5 using pip and it works fine.

\n", + "system": "" + }, + { + "instruction": "How to set parameters in keras to be non-trainable?", + "input": "", + "output": "

You can simply assign a boolean value to the layer property trainable.

\n\n
model.layers[n].trainable = False\n
\n\n

You can check which layers are trainable:

\n\n
for l in model.layers:\n    print(l.name, l.trainable)\n
\n\n

You can also pass it at layer construction:

\n\n
frozen_layer = Dense(32, trainable=False)\n
\n\n

From Keras documentation:

\n\n
\n

To \"freeze\" a layer means to exclude it from training, i.e. its\n weights will never be updated. This is useful in the context of\n fine-tuning a model, or using fixed embeddings for a text input.
\n You can pass a trainable argument (boolean) to a layer constructor to\n set a layer to be non-trainable.\n Additionally, you can set the trainable property of a layer to True or\n False after instantiation. For this to take effect, you will need to\n call compile() on your model after modifying the trainable property.

\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [8, 28, 28]", + "input": "", + "output": "

The input layer of the model you created needs a 4-dimensional tensor to work with, but the x_train tensor you are passing to it has only 3 dimensions

\n

This means that you have to reshape your training set with .reshape(n_images, 28, 28, 1). This adds an extra (channel) dimension without changing the data, and your model is ready to run.

\n

You need to reshape your x_train tensor to 4 dimensions before training your model.\nFor example:

\n
x_train = x_train.reshape(-1, 28, 28, 1)\n
\n

for more info on keras inputs Check this answer

\n", + "system": "" + }, + { + "instruction": "What does `training=True` mean when calling a TensorFlow Keras model?", + "input": "", + "output": "

Some neural network layers behave differently during training and inference, for example Dropout and BatchNormalization layers. For example, Dropout randomly drops units only during training, and BatchNormalization normalizes with batch statistics during training but with moving averages during inference.

\n\n\n\n

The training argument lets the layer know which of the two \"paths\" it should take. If you set this incorrectly, your network might not behave as expected.

\n", + "system": "" + }, + { + "instruction": "Difference between model(x) and model.predict(x) in Keras?", + "input": "", + "output": "

Keras with the TensorFlow backend used underlying TensorFlow objects, but mostly provided high-level outputs that could be understood outside the TensorFlow environment (for example, it could output numpy arrays or Python lists).
Today, given a model in TensorFlow 2.0 (built using the Keras library),

\n\n
out_np = model.predict(x)\n
\n\n

provides a numpy array which can, as an example, be printed with print(out_np).\n
On the other hand,

\n\n
out_tf = model(x)\n
\n\n

results in a TensorFlow tensor, which can be converted to a numpy array with .numpy().\n
The two results are equivalent; for example, the following is True:

\n\n
out_np.max() == out_tf.numpy().max()\n
\n\n

The format may be different, but the meaning of model(x) and model.predict(x) is the same:
given an input x, it is the value of the output nodes of a network characterized by its structure, weights and biases.

\n", + "system": "" + }, + { + "instruction": "Keras Conv2D: filters vs kernel_size", + "input": "", + "output": "

Each convolution layer consists of several convolution channels (a.k.a. depth or filters). In practice, this is a number such as 64, 128, 256, 512, etc., and it is equal to the number of channels in the output of the convolutional layer. kernel_size, on the other hand, is the size of these convolution filters. In practice, it takes values such as 3x3, 1x1 or 5x5, which can be abbreviated to 3, 1 or 5 as the filters are mostly square.

\n\n

Edit

\n\n

Following quote should make it more clear.

\n\n

Discussion on vlfeat

\n\n

Suppose X is an input with size W x H x D x N (where N is the size of the batch) to a convolutional layer containing filter F (with size FW x FH x FD x K) in a network.

\n\n

The number of feature channels D is the third dimension of the input X here (for example, this is typically 3 at the first input to the network if the input consists of colour images).\nThe number of filters K is the fourth dimension of F.\nThe two concepts are closely linked because if the number of filters in a layer is K, it produces an output with K feature channels. So the input to the next layer will have K feature channels.

\n\n

The FW x FH above is filter size you are looking for.

\n\n

Added

\n\n

You should be familiar with filters. You can consider each filter to be responsible for extracting some type of feature from a raw image. The CNNs try to learn such filters i.e. the filters parametrized in CNNs are learned during training of CNNs. You apply each filter in a Conv2D to each input channel and combine these to get output channels. So, the number of filters and the number of output channels are the same.

\n", + "system": "" + }, + { + "instruction": "How to get accuracy of model using keras?", + "input": "", + "output": "

You probably didn't add \"acc\" as a metric when compiling the model.

\n\n
model.compile(optimizer=..., loss=..., metrics=['accuracy',...])\n
\n\n

You can get the metrics and loss from any data without training again with:

\n\n
model.evaluate(X, Y)\n
\n", + "system": "" + }, + { + "instruction": "How to match cv2.imread to the keras image.img_load output", + "input": "", + "output": "

OpenCV reads images in BGR format whereas in keras, it is represented in RGB. To get the OpenCV version to correspond to the order we expect (RGB), simply reverse the channels:

\n\n
test_image = cv2.imread('trick.png')\ntest_image = cv2.resize(test_image, (64, 64))\ntest_image = test_image[...,::-1] # Added\n
\n\n

The last line reverses the channels to be in RGB order. You can then feed this into your keras model.

\n\n

Another point I'd like to add is that cv2.imread usually reads in images in uint8 precision. Examining the output of your keras loaded image, you can see that the data is in floating point precision so you may also want to convert to a floating-point representation, such as float32:

\n\n
import numpy as np\n# ...\n# ...\ntest_image = test_image[...,::-1].astype(np.float32)\n
\n\n

As a final point, depending on how you trained your model it's usually customary to normalize the image pixel values to a [0,1] range. If you did this with your keras model, make sure you divide your values by 255 in your image read in through OpenCV:

\n\n
import numpy as np\n# ...\n# ...\ntest_image = (test_image[...,::-1].astype(np.float32)) / 255.0\n
\n", + "system": "" + }, + { + "instruction": "Keras: How is Accuracy Calculated for Multi-Label Classification?", + "input": "", + "output": "

For multi-label classification, I think it is correct to use sigmoid as the activation and binary_crossentropy as the loss.

\n\n

If the output is sparse multi-label, meaning a few positive labels and mostly negative labels, the Keras accuracy metric will be inflated by the correctly predicted negative labels. If I remember correctly, Keras does not choose the label with the highest probability. Instead, for binary classification, the threshold is 50%. So the prediction would be [0, 0, 0, 0, 0, 1]. And if the actual labels were [0, 0, 0, 0, 0, 0], the accuracy would be 5/6. You can test this hypothesis by creating a model that always predicts negative labels and looking at the accuracy.

\n\n

If that's indeed the case, you may try a different metric such as top_k_categorical_accuracy.

\n\n

Another remote possibility I can think of is your training data. Are the labels y somehow \"leaked\" into x? Just a wild guess.

\n", + "system": "" + }, + { + "instruction": "classification metrics can't handle a mix of continuous-multioutput and multi-label-indicator targets", + "input": "", + "output": "
y_pred = (y_pred > 0.5) \n
\n\n

Outputs a boolean matrix. The problem is that it has the same shape as it had before, but when you evaluate accuracy you need a vector of labels.

\n\n

To fix this, take np.argmax(y_pred, axis=1) instead, which outputs the correct labels.

\n", + "system": "" + }, + { + "instruction": "Keras, append to logs from callback", + "input": "", + "output": "

You can insert your additional metrics into the dictionary logs.

\n\n\n\n
from keras.callbacks import Callback\n\nclass ComputeMetrics(Callback):\n    def on_epoch_end(self, epoch, logs):\n        logs['val_metric'] = epoch ** 2  # replace it with your metrics\n        if (epoch + 1) % 10 == 0:\n            logs['test_metric'] = epoch ** 3  # same\n        else:\n            logs['test_metric'] = np.nan\n
\n\n

Just remember to place this callback before CSVLogger in your fit call, since callbacks that appear later in the list receive the modified version of logs. For example,

\n\n
model = Sequential([Dense(1, input_shape=(10,))])\nmodel.compile(loss='mse', optimizer='adam')\nmodel.fit(np.random.rand(100, 10),\n          np.random.rand(100),\n          epochs=30,\n          validation_data=(np.random.rand(100, 10), np.random.rand(100)),\n          callbacks=[ComputeMetrics(), CSVLogger('1.log')])\n
\n\n

Now if you take a look at the output log file, you'll see two additional columns test_metric and val_metric:

\n\n
epoch,loss,test_metric,val_loss,val_metric\n0,0.547923130989,nan,0.370979120433,0\n1,0.525437340736,nan,0.35585285902,1\n2,0.501358469725,nan,0.341958616376,4\n3,0.479624577463,nan,0.329370084703,9\n4,0.460121934414,nan,0.317930338383,16\n5,0.440655426979,nan,0.307486981452,25\n6,0.422990380526,nan,0.298160370588,36\n7,0.406809270382,nan,0.289906248748,49\n8,0.3912438941,nan,0.282540213466,64\n9,0.377326357365,729,0.276457450986,81\n10,0.364721306562,nan,0.271435074806,100\n11,0.353612961769,nan,0.266939682364,121\n12,0.343238875866,nan,0.263228923082,144\n13,0.333940329552,nan,0.260326927304,169\n14,0.325931007862,nan,0.25773427248,196\n15,0.317790198028,nan,0.255648627281,225\n16,0.310636150837,nan,0.25411529541,256\n17,0.304091459513,nan,0.252928718328,289\n18,0.298703012466,nan,0.252127869725,324\n19,0.292693507671,6859,0.251701972485,361\n20,0.287824733257,nan,0.251610517502,400\n21,0.283586999774,nan,0.251790778637,441\n22,0.27927801609,nan,0.252100949883,484\n23,0.276239238977,nan,0.252632959485,529\n24,0.273072380424,nan,0.253150621653,576\n25,0.270296501517,nan,0.253555388451,625\n26,0.268056542277,nan,0.254015884399,676\n27,0.266158599854,nan,0.254496408701,729\n28,0.264166412354,nan,0.254723013639,784\n29,0.262506003976,24389,0.255338237286,841\n
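The ordering rule can be sketched without Keras: callbacks run in list order and share one logs dict per epoch, so a logger placed after the metric-computing callback sees the added keys (FakeCSVLogger below is a hypothetical stand-in for CSVLogger).

```python
class ComputeMetrics:
    def on_epoch_end(self, epoch, logs):
        logs['val_metric'] = epoch ** 2   # mutate the shared dict

class FakeCSVLogger:
    def __init__(self):
        self.rows = []
    def on_epoch_end(self, epoch, logs):
        self.rows.append(dict(logs))      # sees whatever callbacks ran before it

logger = FakeCSVLogger()
callbacks = [ComputeMetrics(), logger]    # order matters
for epoch in range(3):
    logs = {'loss': 0.5 / (epoch + 1)}
    for cb in callbacks:
        cb.on_epoch_end(epoch, logs)

print(logger.rows[2]['val_metric'])  # prints: 4
```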
\n", + "system": "" + }, + { + "instruction": "Using keras tokenizer for new words not in training set", + "input": "", + "output": "

The Keras Tokenizer has an oov_token parameter. Just select your token, and unknown words will be mapped to it.

\n\n
tokenizer_a = Tokenizer(oov_token=1)\ntokenizer_b = Tokenizer()\ntokenizer_a.fit_on_texts([\"Hello world\"])\ntokenizer_b.fit_on_texts([\"Hello world\"])\n
\n\n

Outputs

\n\n
In [26]: tokenizer_a.texts_to_sequences([\"Hello cruel world\"])\nOut[26]: [[2, 1, 3]]\n\nIn [27]: tokenizer_b.texts_to_sequences([\"Hello cruel world\"])\nOut[27]: [[1, 2]]\n
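The behaviour can be sketched with a toy tokenizer in plain Python (hypothetical, not the Keras implementation): with an OOV index reserved, unknown words map to it; without one, they are silently dropped.

```python
def texts_to_sequences(text, word_index, oov_index=None):
    seq = []
    for word in text.lower().split():
        if word in word_index:
            seq.append(word_index[word])
        elif oov_index is not None:
            seq.append(oov_index)  # unknown word -> OOV token
        # without an OOV index, unknown words are silently skipped
    return seq

word_index = {'hello': 2, 'world': 3}  # index 1 reserved for OOV
print(texts_to_sequences('Hello cruel world', word_index, oov_index=1))  # prints: [2, 1, 3]
print(texts_to_sequences('Hello cruel world', word_index))               # prints: [2, 3]
```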
\n", + "system": "" + }, + { + "instruction": "Predicting a multiple forward time step of a time series using LSTM", + "input": "", + "output": "

While sharing the same concerns about having too little data, here is how you can do it.

\n\n

First, it's a good idea to keep your values between -1 and +1, so I'd normalize them first.

\n\n

For the LSTM model, you must make sure you're using return_sequences=True.
\nThere is nothing \"wrong\" with your model, but it may need more or fewer layers or units to achieve what you desire. (There is no clear answer to this, though.)

\n\n

Training the model to predict the next step:

\n\n

All you need is to pass Y as a shifted X:

\n\n
entireData = arrayWithShape((samples,52,1))\nX = entireData[:,:-1,:]\ny = entireData[:,1:,:]\n
\n\n

Train the model using these.

\n\n

Predicting the future:

\n\n

Now, for predicting the future, since we need to use predicted elements as input for more predicted elements, we are going to use a loop and make the model stateful=True.

\n\n

Create a model equal to the previous one, with these changes:

\n\n\n\n

Copy the weights of the previously trained model:

\n\n
newModel.set_weights(oldModel.get_weights())\n
\n\n

Predict only one sample at a time and never forget to call model.reset_states() before starting any sequence.

\n\n

First predict with the sequence you already know (this will make sure the model prepares its states properly for predicting the future)

\n\n
model.reset_states()\npredictions = model.predict(entireData)\n
\n\n

Because of the way we trained, the last step in predictions will be the first future element:

\n\n
futureElement = predictions[:,-1:,:]\n\nfutureElements = []\nfutureElements.append(futureElement)\n
\n\n

Now we make a loop where this element is the input. (Because of stateful, the model will understand it's a new input step of the previous sequence instead of a new sequence)

\n\n
for i in range(howManyPredictions):\n    futureElement = model.predict(futureElement)\n    futureElements.append(futureElement)\n
\n\n
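The feedback loop itself can be sketched in plain Python with a stand-in model (fake_model below is a hypothetical placeholder for model.predict on the stateful model; it just adds 1 each step):

```python
def fake_model(x):
    # Stand-in for model.predict on a stateful model:
    # given the last element, return the predicted next one.
    return x + 1

future_element = 14             # last known/predicted step
future_elements = [future_element]

for _ in range(3):              # howManyPredictions
    future_element = fake_model(future_element)  # feed the prediction back in
    future_elements.append(future_element)

print(future_elements)  # prints: [14, 15, 16, 17]
```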
\n\n

This link contains a complete example predicting the future of two features: https://github.com/danmoller/TestRepo/blob/master/TestBookLSTM.ipynb

\n", + "system": "" + }, + { + "instruction": "Tree-LSTM in Keras", + "input": "", + "output": "

You can implement a tree-LSTM in Keras using the Subclassing API. This will allow you to define your own custom layers and models by subclassing the tf.keras.layers.Layer and tf.keras.Model classes, respectively.

\n

To implement a tree-LSTM in the Subclassing API, you will need to define a custom layer that takes a tree-structured input and applies the LSTM operation to each node in the tree. Here is some pseudocode that outlines the steps you can follow:

\n
class TreeLSTMLayer(tf.keras.layers.Layer):\n  def __init__(self, units, **kwargs):\n    super(TreeLSTMLayer, self).__init__(**kwargs)\n    self.units = units\n\n  def build(self, input_shape):\n    # Define the weight matrices and biases for the LSTM operation\n    # (e.g., self.W_i, self.W_f, self.W_o, self.W_c, self.b_i, etc.)\n    # based on the number of units in the layer\n    # (e.g., input_dim = units, output_dim = units)\n    # and the input shape of the tree (i.e., input_shape[0])\n\n  def call(self, inputs):\n    # Unpack the inputs into the tree structure and the initial states\n    # (e.g., tree, h_0, c_0 = inputs)\n\n    # Initialize a list to store the output states for each node in the tree\n    output_states = []\n\n    # Recursively traverse the tree and apply the LSTM operation\n    # at each node, updating the hidden and cell states as you go\n    # (e.g., h_t, c_t = lstm(x_t, h_t-1, c_t-1))\n    def traverse_tree(node, h_t, c_t):\n      # Apply the LSTM operation to the current node\n      # (e.g., i_t, f_t, o_t, g_t = lstm(x_t, h_t, c_t))\n      # Update the hidden and cell states\n      # (e.g., c_t = f_t * c_t + i_t * g_t, h_t = o_t * tf.tanh(c_t))\n      output_states.append((h_t, c_t))\n      # Recursively traverse the children of the current node\n      for child in node.children:\n        traverse_tree(child, h_t, c_t)\n\n    # Start the recursive traversal at the root of the tree\n    traverse_tree(tree.root, h_0, c_0)\n\n    # Return the output states for each node in the tree\n    return output_states\n
\n

Once you have defined your custom TreeLSTMLayer, you can use it to build a tree-LSTM model by subclassing the tf.keras.Model class and using the TreeLSTMLayer as one of the layers in your model.

\n", + "system": "" + }, + { + "instruction": "Need To Compile Keras Model Before `model.evaluate()`", + "input": "", + "output": "

Because evaluate will calculate the loss function and the metrics.

\n

You don't have any of them until you compile the model. They're parameters to the compile method:

\n
model.compile(optimizer=..., loss=..., metrics=...) \n
\n

On the other hand, predict doesn't evaluate any metric or loss, it just passes the input data through the model and gets its output.

\n

You need the "loss" for training too, so you can't train without compiling. And you can compile a model as many times as you want, and even change the parameters.

\n
\n

The outputs and the loss function:

\n

The model's outputs depend on it being defined with weights. That is automatic and you can predict from any model, even without any training. Every model in Keras is already born with weights (either initialized by you or randomly initialized)

\n

You input something, the model calculates the output. At the end of everything, this is all that matters. A good model has proper weights and outputs things correctly.

\n

But before getting to that end, your model needs to be trained.

\n

Now, the loss function takes the current output and compares it with the expected/true result. It's a function supposed to be minimized. The less the loss, the closer your results are to the expected. This is the function from which the derivatives will be taken so the backpropagation algorithm can update the weights.

\n

The loss function is not useful for the final purpose of the model, but it's necessary for training. That's probably why you can have models without loss functions (and consequently, there is no way to evaluate them).

\n", + "system": "" + }, + { + "instruction": "Explain with example: how embedding layers in keras works", + "input": "", + "output": "

The Embedding layer creates embedding vectors out of the input words (I myself still don't understand the math), similarly to what word2vec or pre-calculated GloVe would do.

\n

Before I get to your code, let's make a short example.

\n
texts = ['This is a text', 'This is not a text']\n
\n

First we turn these sentences into vectors of integers, where each word is replaced by the number assigned to it in the dictionary, and the order of the vector preserves the sequence of the words.

\n
from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences \nfrom keras.utils import to_categorical\n\nmax_review_length = 6  # maximum length of the sentence\nembedding_vector_length = 3\ntop_words = 10\n\n# num_words is the number of unique words in the sequence, if there's more top count words are taken\ntokenizer = Tokenizer(top_words)\ntokenizer.fit_on_texts(texts)\nsequences = tokenizer.texts_to_sequences(texts)\nword_index = tokenizer.word_index\ninput_dim = len(word_index) + 1\nprint('Found %s unique tokens.' % len(word_index))\n\n# max_review_length is the maximum length of the input text so that we can create vector [... 0,0,1,3,50] where 1,3,50 are individual words\ndata = pad_sequences(sequences, max_review_length)\n\nprint('Shape of data tensor:', data.shape)\nprint(data)\n\n[Out:] \n'This is a text' --> [0 0 1 2 3 4]\n'This is not a text' --> [0 1 2 5 3 4]\n
\n

Now you can input these into the embedding layer.

\n
from keras.models import Sequential\nfrom keras.layers import Embedding\n\nmodel = Sequential()\nmodel.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, mask_zero=True))\nmodel.compile(optimizer='adam', loss='categorical_crossentropy')\noutput_array = model.predict(data)\n
\n

output_array contains array of size (2, 6, 3): 2 input reviews or sentences in my case, 6 is the maximum number of words in each review (max_review_length) and 3 is embedding_vector_length.\nE.g.

\n
array([[[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827]],\n\n   [[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [-0.01736645, -0.03719328,  0.02757809],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827]]], dtype=float32)\n
\n

In your case you have a list of 5000 words, which can create reviews of maximum 500 words (longer ones will be trimmed) and turn each of these 500 words into a vector of size 32.

\n

You can get mapping between the word indexes and embedding vectors by running:

\n
model.layers[0].get_weights()\n
\n

In the case below top_words was 10, so we have mapping of 10 words and you can see that mapping for 0, 1, 2, 3, 4 and 5 is equal to output_array above.

\n
[array([[-0.01494285, -0.007915  ,  0.01764857],\n    [-0.03019481, -0.02910612,  0.03518577],\n    [-0.0046863 ,  0.04763055, -0.02629668],\n    [ 0.02297204,  0.02146662,  0.03114786],\n    [ 0.01634104,  0.02296363, -0.02348827],\n    [-0.01736645, -0.03719328,  0.02757809],\n    [ 0.0100757 , -0.03956784,  0.03794377],\n    [-0.02672029, -0.00879055, -0.039394  ],\n    [-0.00949502, -0.02805768, -0.04179233],\n    [ 0.0180716 ,  0.03622523,  0.02232374]], dtype=float32)]\n
\n

As mentioned in: https://stats.stackexchange.com/questions/270546/how-does-keras-embedding-layer-work these vectors are initiated as random and optimized by the network optimizers just like any other parameter of the network.

\n", + "system": "" + }, + { + "instruction": "Can someone explain to me the difference between activation and recurrent activation arguments passed in initialising keras lstm layer?", + "input": "", + "output": "

In the code

\n\n

Lines starting from 1932:

\n\n
i = self.recurrent_activation(z0)\nf = self.recurrent_activation(z1)\nc = f * c_tm1 + i * self.activation(z2)\no = self.recurrent_activation(z3)\nh = o * self.activation(c)\n
\n\n

recurrent_activation is used to activate the input/forget/output gates.

\n\n

activation is for the cell state and hidden state.

\n", + "system": "" + }, + { + "instruction": "Keras: use Tensorboard with train_on_batch()", + "input": "", + "output": "

A possible way to create the TensorBoard callback, and drive it manually:

\n\n
# This example shows how to use keras TensorBoard callback\n# with model.train_on_batch\n\nimport tensorflow.keras as keras\n\n# Setup the model\nmodel = keras.models.Sequential()\nmodel.add(...) # Add your layers\nmodel.compile(...) # Compile as usual\n\nbatch_size=256\n\n# Create the TensorBoard callback,\n# which we will drive manually\ntensorboard = keras.callbacks.TensorBoard(\n  log_dir='/tmp/my_tf_logs',\n  histogram_freq=0,\n  batch_size=batch_size,\n  write_graph=True,\n  write_grads=True\n)\ntensorboard.set_model(model)\n\n# Transform train_on_batch return value\n# to dict expected by on_batch_end callback\ndef named_logs(model, logs):\n  result = {}\n  for l in zip(model.metrics_names, logs):\n    result[l[0]] = l[1]\n  return result\n\n# Run training batches, notify tensorboard at the end of each epoch\nfor batch_id in range(1000):\n  x_train,y_train = create_training_data(batch_size)\n  logs = model.train_on_batch(x_train, y_train)\n  tensorboard.on_epoch_end(batch_id, named_logs(model, logs))\n\ntensorboard.on_train_end(None)\n
\n", + "system": "" + }, + { + "instruction": "How can I use the Keras OCR example?", + "input": "", + "output": "

Well, I will try to answer everything you asked here:

\n\n

As commented in the OCR code, Keras doesn't support losses with multiple parameters, so it calculates the NN loss in a Lambda layer. What does this mean in this case?

\n\n

The neural network may look confusing because it is using 4 inputs ([input_data, labels, input_length, label_length]) and loss_out as output. Besides input_data, everything else is information used only for calculating the loss, it means it is only used for training. We desire something like in line 468 of the original code:

\n\n
Model(inputs=input_data, outputs=y_pred).summary()\n
\n\n

which means \"I have an image as input, please tell me what is written here\". So how to achieve it?

\n\n

1) Keep the original training code as it is, do the training normally;

\n\n

2) After training, save this model Model(inputs=input_data, outputs=y_pred)in a .h5 file to be loaded wherever you want;

\n\n

3) Do the prediction: if you take a look at the code, the input image is inverted and translated, so you can use this code to make it easy:

\n\n
from scipy.misc import imread, imresize\n#use width and height from your neural network here.\n\ndef load_for_nn(img_file):\n    image = imread(img_file, flatten=True)\n    image = imresize(image,(height, width))\n    image = image.T\n\n    images = np.ones((1,width,height)) #change 1 to any number of images you want to predict, here I just want to predict one\n    images[0] = image\n    images = images[:,:,:,np.newaxis]\n    images /= 255\n\n    return images\n
\n\n

With the image loaded, let's do the prediction:

\n\n
def predict_image(image_path): #insert the path of your image \n    image = load_for_nn(image_path) #load from the snippet code\n    raw_word = model.predict(image) #do the prediction with the neural network\n    final_word = decode_output(raw_word)[0] #the output of our neural network is only numbers. Use decode_output from image_ocr.py to get the desirable string.\n    return final_word\n
\n\n

This should be enough. From my experience, the images used in the training are not good enough to make good predictions; if necessary, I will later release code using other datasets that improved my results.

\n\n

Answering related questions:

\n\n\n\n

It is a technique used to improve sequence classification. The original paper proves it improves results on discovering what is said in audio. In this case it is a sequence of characters. The explanation is a bit tricky, but you can find a good one here.

\n\n\n\n

I am not sure but you could take a look at Attention mechanism in neural networks. I don't have any good link now but I know it could be the case.

\n\n\n\n

OpenCV implements Maximally Stable Extremal Regions (known as MSER). I really like the results of this algorithm; it is fast and was good enough for me when I needed it.

\n\n

As I said before, I will release code soon. I will edit the question with the repository when I do, but I believe the information here is enough to get the example running.

\n", + "system": "" + }, + { + "instruction": "R keras package Error: Python module tensorflow.contrib.keras.python.keras was not found", + "input": "", + "output": "

I had a similar problem. Restart RStudio, load the keras and tensorflow libraries, and type use_condaenv(\"r-tensorflow\"). That fixed it for me.

\n", + "system": "" + }, + { + "instruction": "How to make Keras use Tensorflow backend in Anaconda?", + "input": "", + "output": "

This happens because the keras conda-forge package puts a file in ${CONDA_PREFIX}/etc/conda/activate.d/keras_activate.sh, which sets the environment variable KERAS_BACKEND

\n\n
(root) [root@starlabs ~]# cat $CONDA_PREFIX/etc/conda/activate.d/keras_activate.sh\n#!/bin/bash\nif [ \"$(uname)\" == \"Darwin\" ]\nthen\n    # for Mac OSX\n    export KERAS_BACKEND=tensorflow\nelif [ \"$(uname)\" == \"Linux\" ]\nthen\n    # for Linux\n    export KERAS_BACKEND=theano\nfi\n
\n\n

As you can see from the file, in Linux, it sets the value to 'theano' and according to the official docs:

\n\n
\n

the environment variable KERAS_BACKEND will override what is\n defined in your config file

\n
\n\n

To work around this, you can either edit this file and change 'theano' to 'tensorflow' (which would probably get overwritten on reinstall or on changing environments) or, do the following:

\n\n
export KERAS_BACKEND=tensorflow\npython /path/to/python/program.py\n
\n", + "system": "" + }, + { + "instruction": "Running Keras model for prediction in multiple threads", + "input": "", + "output": "

Multi-threading in Python doesn't necessarily make better use of your resources, since Python uses a global interpreter lock (GIL) and only one native thread can run at a time.

\n

In Python, you should usually use multiprocessing to utilize your resources, but since we're talking about Keras models, I'm not sure even that is the right thing to do.\nLoading several models in several processes has its own overhead, and you could simply increase the batch size, as others have already pointed out.

\n

Or, if you have a heavy pre-processing stage, you could preprocess your data in one process and predict in another (although I doubt that would be necessary either).

\n", + "system": "" + }, + { + "instruction": "How to manually specify class labels in keras flow_from_directory?", + "input": "", + "output": "

You could simply use flow_from_directory and extend it to the multiclass case in the following manner:

\n\n
def multiclass_flow_from_directory(flow_from_directory_gen, multiclasses_getter):\n    for x, y in flow_from_directory_gen:\n        yield x, multiclasses_getter(x, y)\n
\n\n

Here multiclasses_getter assigns a multiclass vector / your multiclass representation to your images. Note that x and y are not single examples but batches of examples, so this should be included in your multiclasses_getter design.

\n", + "system": "" + }, + { + "instruction": "How to map a function with additional parameter using the new Dataset api in TF1.3?", + "input": "", + "output": "

Here is an example using a lambda expression to wrap the function to which we want to pass an argument:

\n\n
import tensorflow as tf\ndef fun(x, arg):\n    return x * arg\n\nmy_arg = tf.constant(2, dtype=tf.int64)\nds = tf.data.Dataset.range(5)\nds = ds.map(lambda x: fun(x, my_arg))\n
\n\n

In the above, the signature of the function provided to map must match the contents of our dataset. So we have to write our lambda expression to match that. Here it is simple, as there is only one element contained in the dataset, the x that contains elements in the range from 0 to 4.

\n\n

If necessary, you can pass in an arbitrary number of external arguments from outside the dataset: ds = ds.map(lambda x: my_other_fun(x, arg1, arg2, arg3)), and so on.

\n\n

To verify that the above works, we can observe that the mapping indeed multiplies each dataset element by two:

\n\n
iterator = ds.make_initializable_iterator()\nnext_x = iterator.get_next()\nwith tf.Session() as sess:\n    sess.run(iterator.initializer)\n\n    while True:\n      try:\n        print(sess.run(next_x))\n      except tf.errors.OutOfRangeError:\n        break\n
\n\n

The output:

\n\n
0\n2\n4\n6\n8\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: Can't understand ctc_beam_search_decoder() output sequence", + "input": "", + "output": "

As indicated in tf.nn.ctc_beam_search_decoder documentation, the shape of the output is not [batch_size, max_sequence_len]. Instead, it is

\n\n
[batch_size, max_decoded_length[j]]\n
\n\n

(with j=0 in your case).

\n\n

Based on the beginning of section 2 of this paper (which is cited in the github repository), max_decoded_length[0] is bounded from above by max_sequence_len, but they are not necessarily equal. The relevant citation is:

\n\n
\n

Let S be a set of training examples drawn from a fixed distribution\n D_{XxZ}. The input space X = (R^m) is the set of all sequences of m\n dimensional real valued vectors. The target space Z = L* is the set of\n all sequences over the (finite) alphabet L of labels. In general, we\n refer to elements of L* as label sequences or labellings. Each example\n in S consists of a pair of sequences (x, z). The target sequence z =\n (z1, z2, ..., zU) is at most as long as the input sequence x = (x1,\n x2, ..., xT ), i.e. U<=T. Since the input and target sequences are\n not generally the same length, there is no a priori way of aligning\n them.

\n
\n\n

In fact, max_decoded_length[0] depends on the specific matrix softmax_outputs. In particular, two such matrices with exactly the same dimensions can result in different max_decoded_length[0].

\n\n

For example, if you replace the row

\n\n
softmax_outputs = np.array([[[0.1, 0.1, 0.8], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1]],\n                                [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]],\n                                [[0.1, 0.7, 0.2], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]],\n                                [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]]])\n
\n\n

with the rows

\n\n
np.random.seed(7)\nr=np.random.randint(0,100,size=(4,5,3))\nsoftmax_outputs=r/np.sum(r,2).reshape(4,5,1)\n
\n\n

you'll get the output

\n\n
[[1 0 1]\n [1 0 1]\n [1 0 0]\n [1 0 0]]\n
\n\n

(in the above examples, softmax_outputs consists of logits and it is exactly of the same dimensions as the matrix you provided).

\n\n

On the other hand, changing the seed to np.random.seed(50) gives the output

\n\n
[[1 0]\n [1 0]\n [1 0]\n [0 1]]\n
\n\n

P.S.

\n\n

Regarding the last part of your question:

\n\n
\n

In this case I would expect the output to be similar to:

\n\n
[[2, 0, 0, 0, 0],\n [2, 2, 2, 2, 2],\n [1, 2, 2, 2, 2],\n [2, 2, 2, 2, 2]]\n
\n
\n\n

Note that, based on the documentation, num_classes actually represents num_labels + 1. Specifically:

\n\n
\n

The inputs Tensor's innermost dimension size, num_classes, represents\n num_labels + 1 classes, where num_labels is the number of true labels,\n and the largest value (num_classes - 1) is reserved for the blank\n label.

\n \n

For example, for a vocabulary containing 3 labels [a, b, c],\n num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank:\n 3}.

\n
\n\n

So the true labels in your case are 0 and 1, and 2 is reserved for the blank label. The blank label represents the situation of observing no label (section 3.1 here):

\n\n
\n

A CTC network has a softmax output layer (Bridle, 1990) with one more\n unit than there are labels in L. The activations of the first |L|\n units are interpreted as the probabilities of observing the\n corresponding labels at particular times. The activation of the extra\n unit is the probability of observing a \u2018blank\u2019, or no label. Together,\n these outputs define the probabilities of all possible ways of\n aligning all possible label sequences with the input sequence.

\n
\n", + "system": "" + }, + { + "instruction": "How does TensorFlow's MultiRnnCell work?", + "input": "", + "output": "

Study this blog post as well as the provided implementation. It describes in detail how to use MultiRNNCell to stack multiple RNN cells.

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "ImportError: No module named datasets", + "input": "", + "output": "
pip install datasets\n
\n

I solved it this way.

\n", + "system": "" + }, + { + "instruction": "numpy random choice in Tensorflow", + "input": "", + "output": "

No, but you can achieve the same result using tf.multinomial:

\n\n
elems = tf.convert_to_tensor([1,2,3,5])\nsamples = tf.multinomial(tf.log([[1, 0, 0.3, 0.6]]), 1) # note log-prob\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 1\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 5\n
\n\n
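What tf.multinomial does can be sketched in plain Python: exponentiate the unnormalized log-probabilities and sample an index proportionally (sample_from_logits is a hypothetical helper for illustration, not a TensorFlow function):

```python
import math
import random

def sample_from_logits(logits):
    # interpret inputs as unnormalized log-probabilities, like tf.multinomial
    probs = [math.exp(l) for l in logits]   # math.exp(-inf) == 0.0
    total = sum(probs)
    r = random.uniform(0, total)
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

elems = [1, 2, 3, 5]
weights = [1, 0, 0.3, 0.6]
logs = [math.log(w) if w > 0 else float("-inf") for w in weights]
# zero-weight elements (here: 2) are never drawn
print(elems[sample_from_logits(logs)])
```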

The [0][0] indexing is needed because multinomial expects a row of unnormalized log-probabilities for each element of the batch, and the result also has an extra dimension for the number of samples.

\n", + "system": "" + }, + { + "instruction": "Installing TensorFlow on Windows (Python 3.6.x)", + "input": "", + "output": "

Update 15.11.2017

\n\n

It seems that by now it is working as one would expect. Running the following commands with the pip and Python versions listed below should work.

\n\n
\n\n

Installing with Python 3.6.x

\n\n
\n\n

Version

\n\n
\n

Python: 3.6.3
\n pip: 9.0.1

\n
\n\n
\n\n

Installation Commands

\n\n

The following commands are based on the official installation guide here.

\n\n

using cmd

\n\n
C:> pip3 install --upgrade tensorflow // cpu\nC:> pip3 install --upgrade tensorflow-gpu // gpu\n
\n\n

using Anaconda

\n\n
C:> conda create -n tensorflow python=3.5 \nC:> activate tensorflow\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow\n(tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu \n
\n\n

Additional Information

\nA list of common installation problems can be found here.

\n\n

You can find an example console output of a successful tensorflow cpu installation here.

\n\n
\n\n

Old response:

\n\n

Okay, to conclude: use version 3.5.2!
\nNeither 3.5.1 nor 3.6.x seems to work at the moment.

\n\n

Versions:

\n\n
\n

Python: 3.5.2, pip: 8.1.1 (Python 3.5)

\n
\n\n

Commands:

\n\n
// cpu\nC:> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl\n\n// gpu\nC:> pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.0rc0-cp35-cp35m-win_amd64.whl\n
\n\n

\n", + "system": "" + }, + { + "instruction": "Install Cuda without root", + "input": "", + "output": "

Update: The installation UI for CUDA 10.1 changed. The following works:

\n\n\n\n
\n\n

Thank you very much for the hints in the question! I just want to complete them with an approach that worked for me (also inspired by this gist), which hopefully helps in situations where a valid driver is already installed and a more recent CUDA still needs to be installed on Linux without root permissions.

\n\n

TL;DR: Here are the steps to install CUDA9+CUDNN7 on Debian and to install a pre-compiled version of TensorFlow 1.4 on Python 2.7 to test that everything works. Everything is done without root privileges and via the terminal. It should also work for other CUDA, CUDNN, TensorFlow and Python versions on other Linux systems.

\n\n
\n\n

INSTALLATION

\n\n
    \n
  1. Go to NVIDIA's official release web for CUDA (as for Nov. 2017, CUDA9 is out): https://developer.nvidia.com/cuda-downloads.

  2. \n
3. Under your Linux distro, select the runfile (local) option. Note that the sudo indication present in the installation instructions is deceiving, since it is possible to run this installer without root permissions. On a server, one easy way is to copy the <LINK> of the Download button and, in any location of your home directory, run wget <LINK>. It will download the <INSTALLER> file.

  4. \n
  5. Run chmod +x <INSTALLER> to make it executable, and execute it ./<INSTALLER>.

  6. \n
  7. accept the EULA, say no to driver installation, and enter a <CUDA> location under your home directory to install the toolkit and a <CUDASAMPLES> for the samples.

  8. \n
  9. Not asked here but recommended: Download a compatible CUDNN file from the official web (you need to sign in). In my case, I downloaded the cudnn-9.0-linux-x64-v7.tgz, compatible with CUDA9 into the <CUDNN> folder. Uncompress it: tar -xzvf ....

  10. \n
  11. Optional: compile the samples. cd <CUDASAMPLES> && make. There are some very nice examples there and a very good starting point to write some CUDA scripts of yourself.

  12. \n
  13. (If you did 5.): Copy the required files from <CUDNN> into <CUDA>, and grant reading permission to user (not sure if needed):

  14. \n
\n\n
cp -P <CUDNN>/cuda/include/cudnn.h <CUDA>/include/\ncp -P <CUDNN>/cuda/lib64/libcudnn* <CUDA>/lib64\nchmod a+r <CUDA>/include/cudnn.h <CUDA>/lib64/libcudnn*\n
\n\n
    \n
1. Add the library to your environment. This is typically done by adding the following two lines to your ~/.bashrc file (in this example, the <CUDA> directory was ~/cuda9/):
  2. \n
\n\n
export PATH=<CUDA>/bin:$PATH\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<CUDA>/lib64/\n
\n\n
\n\n

FOR QUICK TESTING OR TENSORFLOW USERS

\n\n

The quickest way to get a TensorFlow compatible with CUDA9 and CUDNN7 (and a very quick way to test this) is to download a precompiled wheel file and install it with pip install <WHEEL>. Most of the versions you need can be found in mind's repo (thanks a lot guys). A minimal test that confirms that CUDNN is also working involves the use of tf.nn.conv2d:

\n\n
import tensorflow as tf\nx = tf.nn.conv2d(tf.ones([1,1,10,1]), tf.ones([1,5,1,1]), strides=[1, 1, 1, 1], padding='SAME')\nwith tf.Session() as sess:\n    sess.run(x) # this should output a tensor of shape (1,1,10,1) with [3,4,5,5,5,5,5,5,4,3]\n
\n\n

In my case, the wheel I installed required Intel's MKL library, as explained here. Again, from the terminal and without root privileges, these are the steps I followed to install the library and make TensorFlow find it (reference):

\n\n
    \n
  1. git clone https://github.com/01org/mkl-dnn.git
  2. \n
  3. cd mkl-dnn/scripts && ./prepare_mkl.sh && cd ..
  4. \n
  5. mkdir -p build && cd build
  6. \n
  7. cmake -D CMAKE_INSTALL_PREFIX:PATH=<TARGET_DIR_IN_HOME> ..
  8. \n
  9. make # this takes a while\n\n
      \n
    1. make doc # do this optionally if you have doxygen
    2. \n
  10. \n
  11. make test # also takes a while
  12. \n
  13. make install # installs into <TARGET_DIR_IN_HOME>
  14. \n
  15. add the following to your ~/.bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TARGET_DIR_IN_HOME>/lib
  16. \n
\n\n
\n\n

Hope this helps!
\nAndres

\n", + "system": "" + }, + { + "instruction": "TensorFlow: Remember LSTM state for next batch (stateful LSTM)", + "input": "", + "output": "

I found out it was easiest to save the whole state for all layers in a placeholder.

\n\n
init_state = np.zeros((num_layers, 2, batch_size, state_size))\n\n...\n\nstate_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])\n
\n\n

Then unpack it and create a tuple of LSTMStateTuples before using the native TensorFlow RNN API.

\n\n
l = tf.unpack(state_placeholder, axis=0)\nrnn_tuple_state = tuple(\n[tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])\n for idx in range(num_layers)]\n)\n
\n\n

Then pass it into the RNN API:

\n\n
cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)\ncell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True)\noutputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state)\n
\n\n
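What feeding the final state back in buys you is that processing a long sequence in chunks matches processing it in one go. Stripped of TensorFlow, the pattern looks like this (run_chunk is a hypothetical stand-in for the sess.run call that returns the outputs and the new state; here the "state" is just a running sum):

```python
def run_chunk(chunk, state):
    # stand-in for sess.run([outputs, state], feed_dict={...}):
    # carry an accumulator across chunks instead of an LSTM state
    for x in chunk:
        state = state + x
    return state, state  # (outputs, new state)

full = list(range(10))
state = 0
for chunk in (full[:5], full[5:]):   # two consecutive batches
    _, state = run_chunk(chunk, state)
print(state == sum(full))  # True: same result as one pass over the whole sequence
```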

The state variable returned by sess.run can then be fed in via the placeholder for the next batch.

\n", + "system": "" + }, + { + "instruction": "How does one initialize a variable with tf.get_variable and a numpy value in TensorFlow?", + "input": "", + "output": "

The following works, if you convert the constant NumPy array into a constant Tensor:

\n\n
init = tf.constant(np.random.rand(1, 2))\ntf.get_variable('var_name', initializer=init)\n
\n\n

The documentation for get_variable is a little lacking indeed. Just for your reference, the initializer argument has to be either a TensorFlow Tensor object (which can be constructed by calling tf.constant on a numpy value in your case), or a 'callable' that takes two arguments, shape and dtype, the shape and data type of the value that it's supposed to return. Again, in your case, you can write the following in case you wanted to use the 'callable' mechanism:

\n\n
init = lambda shape, dtype: np.random.rand(*shape)\ntf.get_variable('var_name', initializer=init, shape=[1, 2])\n
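Independent of TensorFlow, the 'callable' contract is easy to sanity-check on its own: it is just a function of (shape, dtype) that returns an array of that shape (a sketch, assuming numpy is available):

```python
import numpy as np

# the 'callable' initializer contract: f(shape, dtype) -> array of that shape
init = lambda shape, dtype=None: np.random.rand(*shape)

print(init([1, 2], np.float32).shape)  # (1, 2)
```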
\n", + "system": "" + }, + { + "instruction": "Training on imbalanced data using TensorFlow", + "input": "", + "output": "

(1) It's OK to use your strategy. I'm working with imbalanced data as well; I try down-sampling and up-sampling methods first to make the training set evenly distributed, or use an ensemble method to train each classifier on an evenly distributed subset.

\n\n

(2) I haven't seen any method that directly maximises the AUROC. My thought is that AUROC is based on the true positive and false positive rates, which don't tell you how well the model works on each instance. Thus, it may not necessarily maximise the capability to separate the classes.

\n\n

(3) Regarding weighting the cost by the ratio of class instances, it is similar to Loss function for class imbalanced binary classifier in Tensor flow\nand its answer.

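For (3), the idea of weighting the cost by the class ratio can be sketched in plain Python (weighted_log_loss is a hypothetical illustration of the idea, not a TensorFlow function):

```python
import math

def weighted_log_loss(y_true, p_pred, pos_weight):
    # binary cross-entropy where positive examples are scaled by pos_weight
    losses = [
        -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
        for y, p in zip(y_true, p_pred)
    ]
    return sum(losses) / len(losses)

y = [1, 0, 0, 0]             # imbalanced batch: 1 positive, 3 negatives
p = [0.6, 0.1, 0.1, 0.1]     # predicted probabilities of the positive class
# up-weight positives by the negative:positive ratio, here 3:1
print(weighted_log_loss(y, p, pos_weight=3.0) > weighted_log_loss(y, p, pos_weight=1.0))  # True
```

With pos_weight set to the negative/positive ratio, a missed positive costs as much as several missed negatives, which counteracts the imbalance.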
\n", + "system": "" + }, + { + "instruction": "Could not load dynamic library 'libnvinfer.so.7'", + "input": "", + "output": "

For me, setting a symbolic link from libnvinfer version 7 to version 8 worked:

\n
# the following path will be different for you - depending on your install method\n$ cd env/lib/python3.10/site-packages/tensorrt\n\n# create symbolic links\n$ ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.7\n$ ln -s libnvinfer.so.8 libnvinfer.so.7\n\n# add tensorrt to library path\n$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/env/lib/python3.10/site-packages/tensorrt/\n
\n", + "system": "" + }, + { + "instruction": "NotImplementedError: Cannot convert a symbolic Tensor (lstm_2/strided_slice:0) to a numpy array. T", + "input": "", + "output": "

I solved it by downgrading numpy to 1.18.5:

\n
pip install -U numpy==1.18.5\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow 2.1.0: has no attribute 'random_normal'", + "input": "", + "output": "

It was moved to tf.random.normal (along with all the other tf.random_* functions)

\n", + "system": "" + }, + { + "instruction": "Cannot run tflite model on GPU (Jetson Nano) using Python", + "input": "", + "output": "

TFLite doesn't support Nvidia GPUs as per this link

\n", + "system": "" + }, + { + "instruction": "How to handle non-determinism when training on a GPU?", + "input": "", + "output": "

TL;DR

\n\n

That, but much longer

\n

When you see neural network operations as mathematical operations, you would expect everything to be deterministic. Convolutions, activations, cross-entropy \u2013 all of these are mathematical equations and should be deterministic. Even pseudo-random operations such as shuffling, drop-out, noise and the like are entirely determined by a seed.

\n

When you see those operations from their computational implementation, on the other hand, you see them as massively parallelized computations, which can be a source of randomness unless you are very careful.

\n

The heart of the problem is that, when you run operations on several parallel threads, you typically do not know which thread will finish first. It does not matter when threads operate on their own data, so, for example, applying an activation function to a tensor should be deterministic. But when those threads need to synchronize, such as when you compute a sum, then the result may depend on the order of the summation, and in turn, on the order in which the threads finished.

\n
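The order-dependence of a parallel sum is not hypothetical: floating-point addition is not associative, so the same numbers reduced in a different order can give a different result:

```python
a = [1e16, 1.0, -1e16]
b = [1e16, -1e16, 1.0]   # same numbers, different reduction order
print(sum(a))  # 0.0  (the 1.0 is absorbed into 1e16 before the cancellation)
print(sum(b))  # 1.0
```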

From there, you have broadly speaking two options:

\n\n

Which route does CuDNN take? Well, mostly the deterministic one. In recent releases, deterministic operations are the norm rather than the exception. But it used to offer many non-deterministic operations, and more importantly, it used to not offer some operations such as reduction, which people needed to implement themselves in CUDA with a variable degree of consideration for determinism.

\n

Some libraries such as theano were ahead of this topic, exposing early on a deterministic flag that the user could turn on or off \u2013 but as you can see from its description, it is far from offering any guarantee.

\n
\n

If more, sometimes we will select some implementations that are more deterministic, but slower. In particular, on the GPU, we will avoid using AtomicAdd. Sometimes we will still use non-deterministic implementation, e.g. when we do not have a GPU implementation that is deterministic. Also, see the dnn.conv.algo* flags to cover more cases.

\n
\n

In TensorFlow, the realization of the need for determinism has been rather late, but it's slowly getting there \u2013 helped by the advance of CuDNN on that front also. For a long time, reductions have been non-deterministic, but now they seem to be deterministic. The fact that CuDNN introduced deterministic reductions in version 6.0 may have helped of course.

\n

It seems that currently, the main obstacle for TensorFlow towards determinism is the backward pass of the convolution. It is indeed one of the few operations for which CuDNN proposes a non-deterministic algorithm, labeled CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0. This algorithm is still in the list of possible choices for the backward filter in TensorFlow. And since the choice of the filter seems to be based on performance, it could indeed be picked if it is more efficient. (I am not so familiar with TensorFlow's C++ code so take this with a grain of salt.)

\n

Is this important?

\n

If you are debugging an issue, determinism is not merely important: it is mandatory. You need to reproduce the steps that led to a problem. This is currently a real issue with toolkits like TensorFlow. To mitigate this problem, your only option is to debug live, adding checks and breakpoints at the correct locations \u2013 not great.

\n

Deployment is another aspect of things, where it is often desirable to have a deterministic behavior, in part for human acceptance. While nobody would reasonably expect a medical diagnosis algorithm to never fail, it would be awkward that a computer could give the same patient a different diagnosis depending on the run. (Although doctors themselves are not immune to this kind of variability.)

\n

Those reasons are rightful motivations to fix non-determinism in neural networks.

\n

For all other aspects, I would say that we need to accept, if not embrace, the non-deterministic nature of neural net training. For all purposes, training is stochastic. We use stochastic gradient descent, shuffle data, use random initialization and dropout \u2013 and more importantly, training data is itself but a random sample of data. From that standpoint, the fact that computers can only generate pseudo-random numbers with a seed is an artifact. When you train, your loss is a value that also comes with a confidence interval due to this stochastic nature. Comparing those values to optimize hyper-parameters while ignoring those confidence intervals does not make much sense \u2013 therefore it is vain, in my opinion, to spend too much effort fixing non-determinism in that, and many other, cases.

\n", + "system": "" + }, + { + "instruction": "TensorFlow : failed call to cuInit: CUDA_ERROR_NO_DEVICE", + "input": "", + "output": "

The issue was solved on GitHub. This error message will be shown if you set an invalid value for the CUDA_VISIBLE_DEVICES environment variable, e.g. when you only have a single GPU (which has ID 0) and set CUDA_VISIBLE_DEVICES=1 or CUDA_VISIBLE_DEVICES=2.

\n", + "system": "" + }, + { + "instruction": "How to Suppress Tensorflow warning displayed in result", + "input": "", + "output": "

After searching for hours, I found the answer on Stack Overflow itself, where it was provided for a different issue. That solution worked here as well.

\n

Here is the solution for TF 1.x:

\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n

For TF 2.x:

\n
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n
\n
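The messages coming from TensorFlow's C++ backend are controlled separately, via the TF_CPP_MIN_LOG_LEVEL environment variable; it must be set before importing TensorFlow (0 shows everything, 1 hides INFO, 2 also hides WARNING, 3 also hides ERROR):

```python
import os

# must be set before 'import tensorflow' for it to take effect
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])  # 2
```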

Source:\nIs there a way to suppress the messages TensorFlow prints?

\n", + "system": "" + }, + { + "instruction": "How to improve accuracy of Tensorflow camera demo on iOS for retrained graph", + "input": "", + "output": "

Since you are not using the YOLO detector, the MAINTAIN_ASPECT flag is set to false. Hence the image in the Android app is not cropped, but scaled. However, in the code snippet provided I don't see the actual initialisation of the flag. Confirm that the value of the flag is actually false in your app.

\n\n

I know this isn't a complete solution but hope this helps you in debugging the issue.

\n", + "system": "" + }, + { + "instruction": "Very low GPU usage during training in Tensorflow", + "input": "", + "output": "

MNIST-size networks are tiny and it's hard to achieve high GPU (or CPU) efficiency for them; I think 30% is not unusual for your application. You will get higher computational efficiency with a larger batch size, meaning you can process more examples per second, but you will also get lower statistical efficiency, meaning you need to process more examples in total to reach the target accuracy. So it's a trade-off. For tiny character models like yours, the statistical efficiency drops off very quickly after a batch size of about 100, so it's probably not worth trying to grow the batch size for training. For inference, you should use the largest batch size you can.

\n", + "system": "" + }, + { + "instruction": "TensorFlow Variables and Constants", + "input": "", + "output": "

In TensorFlow, the difference between constants and variables is that when you declare a constant, its value can't be changed in the future (and the initialization must be with a value, not with an operation).

\n\n

Nevertheless, when you declare a Variable, you can change its value in the future with tf.assign() method (and the initialization can be achieved with a value or operation).

\n\n

The function tf.global_variables_initializer() initialises all variables in your code with the value passed as a parameter, but it works in async mode, so it doesn't work properly when dependencies exist between variables.

\n\n

Your first code (#1) works properly because there are no dependencies in the variable initialization and the constant is constructed with a value.

\n\n

The second code (#2) doesn't work because of the async behavior of tf.global_variables_initializer(). You can fix it using tf.variables_initializer() as follows:

\n\n
x = tf.Variable(35, name='x')\nmodel_x = tf.variables_initializer([x])\n\ny = tf.Variable(x + 5, name='y')\nmodel_y = tf.variables_initializer([y])\n\n\nwith tf.Session() as session:\n   session.run(model_x)\n   session.run(model_y)\n   print(session.run(y))\n
\n\n

The third code (#3) doesn't work properly because you are trying to initialize a constant with an operation, which isn't possible. To solve it, an appropriate strategy is (#1).

\n\n

Regarding your last question: you need to run (a) session.run(model) when there are variables in your computation graph, and (b) print(session.run(y)) to evaluate and print y.

\n", + "system": "" + }, + { + "instruction": "Python / Tensorflow - Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304", + "input": "", + "output": "

Let's come to your original error:

\n\n
\n

Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304

\n
\n\n

This is because you adapted your code from code written for an original input image size of 24*24. The tensor shape after two convolution and two max-pooling layers is [-1, 6, 6, 64]. However, as your input image shape is 150*150, the intermediate shape becomes [-1, 38, 38, 64].

\n\n
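The intermediate shape is easy to verify by hand: with SAME padding, each stride-2 max-pooling layer maps a spatial size n to ceil(n/2). This also explains both numbers in the error message, since 38 * 38 * 64 = 92416, while the original 24*24 code gives 6 * 6 * 64 = 2304:

```python
import math

def same_pool(n, stride=2):
    # output length of a SAME-padded, stride-2 pooling layer
    return math.ceil(n / stride)

n = 150
for _ in range(2):       # two max-pooling layers
    n = same_pool(n)     # 150 -> 75 -> 38
print(n, n * n * 64)     # 38 92416
```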

Try changing w3 to:

\n\n
\n

w3 = tf.Variable(tf.random_normal([38*38*64, 1024]))

\n
\n\n

You should always keep an eye on your tensor shape flow.

\n", + "system": "" + }, + { + "instruction": "Convert Tensorflow model to Caffe model", + "input": "", + "output": "

I've had the same problem and found a solution. The code can be found here (https://github.com/lFatality/tensorflow2caffe) and I've also documented the code in some Youtube videos.

\n\n
\n\n

Part 1 covers the creation of the VGG-19 architecture in Caffe and tflearn (a higher-level API for TensorFlow; with some changes to the code, native TensorFlow should also work).

\n\n
\n\n

Part 2 describes the export of the weights and biases from the TensorFlow model into a numpy file. In tflearn you can get the weights of a layer like this:

\n\n
#get parameters of a certain layer\nconv2d_vars = tflearn.variables.get_layer_variables_by_name(layer_name)\n#get weights out of the parameters\nweights = model.get_weights(conv2d_vars[0])\n#get biases out of the parameters\nbiases = model.get_weights(conv2d_vars[1])\n
\n\n

For a convolutional layer, the layer_name is Conv_2D. Fully-connected layers are called FullyConnected. If you use more than one layer of a certain type, an increasing integer with a preceding underscore is used (e.g. the 2nd conv layer is called Conv_2D_1). I've found these names in the graph in TensorBoard. If you name the layers in your architecture definition, these layer_names might change to the names you defined.

\n\n

In native TensorFlow the export will need different code but the format of the parameters should be the same so subsequent steps should still be applicable.

\n\n
\n\n

Part 3 covers the actual conversion. What's critical is the conversion of the weights when you create the caffemodel (the biases can be carried over without change). TensorFlow and Caffe use different formats when saving a filter. While TensorFlow uses [height, width, depth, number of filters] (TensorFlow docs, at the bottom), Caffe uses [number of filters, depth, height, width] (Caffe docs, chapter 'Blob storage and communication'). To convert between the formats you can use the transpose function (for example: weights_of_first_conv_layer.transpose((3,2,0,1)). The 3,2,0,1 sequence can be obtained by enumerating the TensorFlow format (origin) and then switching it to the Caffe format (target format) while keeping the numbers at their specific variable.).
\nIf you want to connect a tensor output to a fully-connected layer, things get a little tricky. If you use VGG-19 with an input size of 112x112, it looks like this:

\n\n
fc1_weights = data_file[16][0].reshape((4,4,512,4096))\nfc1_weights = fc1_weights.transpose((3,2,0,1))\nfc1_weights = fc1_weights.reshape((4096,8192))\n
\n\n

What you get from TensorFlow if you export the parameters at the connection between tensor and fully-connected layer is an array with the shape [entries in the tensor, units in the fc-layer] (here: [8192, 4096]). You have to find out what the shape of your output tensor is and then reshape the array so that it fits the TensorFlow format (see above, number of filters being the number of units in the fc-layer). After that you use the transpose-conversion you've used previously and then reshape the array again, but the other way around. While TensorFlow saves fc-layer weights as [number of inputs, number of outputs], Caffe does it the other way around.
\nIf you connect two fc-layers to each other, you don't have to do the complex process previously described but you will have to account for the different fc-layer format by transposing again (fc_layer_weights.transpose((1,0)))

\n\n
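The two layout conversions described above are easy to sanity-check on dummy arrays (a sketch with made-up shapes, assuming numpy):

```python
import numpy as np

# TensorFlow filter layout: [height, width, depth, number of filters]
tf_filter = np.zeros((3, 3, 16, 64))
# Caffe layout: [number of filters, depth, height, width]
caffe_filter = tf_filter.transpose((3, 2, 0, 1))
print(caffe_filter.shape)  # (64, 16, 3, 3)

# fc-layer weights: TensorFlow [inputs, outputs] vs Caffe [outputs, inputs]
fc = np.zeros((8192, 4096))
print(fc.transpose((1, 0)).shape)  # (4096, 8192)
```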

You can then set the parameters of the network using

\n\n
net.params['layer_name_in_prototxt'][0].data[...] = weights\nnet.params['layer_name_in_prototxt'][1].data[...] = biases\n
\n\n

This was a quick overview. If you want all the code, it's in my github repository. I hope it helps. :)

\n\n
\n\n

Cheers,
\nFatality

\n", + "system": "" + }, + { + "instruction": "How to train TensorFlow network using a generator to produce inputs?", + "input": "", + "output": "

Suppose you have a function that generates data:

\n\n\n\n
 def generator(data): \n    ...\n    yield (X, y)\n
\n\n
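Such a generator is plain Python and can be tested on its own before wiring it into a session (a sketch with toy data; batch_generator is a hypothetical name):

```python
def batch_generator(X, y, batch_size):
    # yield successive (X_batch, y_batch) pairs, dropping the remainder
    for i in range(0, len(X) - batch_size + 1, batch_size):
        yield X[i:i + batch_size], y[i:i + batch_size]

X = list(range(10))
y = [x * 2 for x in X]
for X_batch, y_batch in batch_generator(X, y, batch_size=4):
    print(X_batch, y_batch)
# [0, 1, 2, 3] [0, 2, 4, 6]
# [4, 5, 6, 7] [8, 10, 12, 14]
```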

Now you need another function that describes your model architecture. It could be any function that processes X and predicts y as output (say, a neural network).

\n\n

Suppose your function accepts X and y as inputs, computes a prediction for y from X in some way, and returns the loss (e.g. cross-entropy, or MSE in the case of regression) between y and the predicted y:

\n\n\n\n
 def neural_network(X, y): \n    # computation of prediction for y using X\n    ...\n    return loss(y, y_pred)\n
\n\n

To make your model work, you need to define placeholders for both X and y and then run a session:

\n\n\n\n
 X = tf.placeholder(tf.float32, shape=(batch_size, x_dim))\n y = tf.placeholder(tf.float32, shape=(batch_size, y_dim))\n
\n\n

Placeholders are something like \"free variables\" which you need to specify when running the session by feed_dict:

\n\n\n\n
 with tf.Session() as sess:\n     # variables need to be initialized before any sess.run() calls\n     tf.global_variables_initializer().run()\n\n     for X_batch, y_batch in generator(data):\n         feed_dict = {X: X_batch, y: y_batch} \n         _, loss_value, ... = sess.run([train_op, loss, ...], feed_dict)\n         # train_op here stands for optimization operation you have defined\n         # and loss for loss function (return value of neural_network function)\n
\n\n

Hope you find it useful. However, bear in mind that this is not a fully working implementation but rather pseudocode, since you specified almost no details.

\n", + "system": "" + }, + { + "instruction": "How do I find out the version of TensorFlow on my computer?", + "input": "", + "output": "
import tensorflow as tf\ntf.__version__\n
\n", + "system": "" + }, + { + "instruction": "How to create an optimizer in Tensorflow", + "input": "", + "output": "

The simplest example of an optimizer is probably the gradient descent optimizer. It shows how one creates an instance of the basic optimizer class. The optimizer base class documentation explains what the methods do.

\n\n

The python side of the optimizers adds new nodes to the graph that compute and apply the gradients being back-propagated. It supplies the parameters that get passed to the ops and does some of the high-level management of the optimizer. Then, you need the actual \"Apply\" op.

\n\n

Ops have both a python and a C++ component. Writing a training op is the same (but specialized) as the general process of adding an Op to TensorFlow.

\n\n

For an example set of training ops that compute and apply gradients, see\npython/training/training_ops.py - this is the Python glue for the actual training ops. Note that the code here is mostly about shape inference - the computation is going to be in the C++.

\n\n

The actual math for applying the gradients is handled by an Op (recalling that, in general, ops are written in C++). In this case, the apply gradients ops are defined in core/kernels/training_ops.cc. You can see, for example, the implementation of ApplyGradientDescentOp in there, which references a functor ApplyGradientDescent:

\n\n
var.device(d) -= grad * lr();\n
\n\n

The implementation of the Op itself follows the implementation of any other op as described in the adding-an-op docs.

\n", + "system": "" + }, + { + "instruction": "Machine Learning (tensorflow / sklearn) in Django?", + "input": "", + "output": "

Asynchronous processing

\n\n

If you don't need the classification result from the ML code to be passed immediately to the user (e.g. as a response to the same POST request that submitted it), then you can always queue the classification job to be run in the background, or even on a different server with more CPU/memory resources (e.g. with django-background-tasks or Celery).

\n\n

A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified).

\n\n

Real time notification

\n\n

If the ML code is slow and you want to return the result to the user as soon as it is available, you can use the asynchronous approach described above paired with real-time notification (e.g. socket.io to the browser; this can be triggered from the queued task).

\n\n

This becomes necessary if ML execution time is so long that it might time-out the HTTP request in the synchronous approach described below.

\n\n

Synchronous processing, if ML code is not CPU intensive (fast enough)

\n\n

If you need that classification result returned immediately, and the ML classification is fast enough *, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously)

\n\n

*Fast enough here means it wouldn't time-out the HTTP request/response, and the user wouldn't lose patience.

\n", + "system": "" + }, + { + "instruction": "ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32", + "input": "", + "output": "

It sounds like you have defined input_y\u2014which I am assuming is a tf.placeholder()\u2014as having type tf.int32. Either change this to tf.float32 or add a cast: tf.cast(input_y, tf.float32) or tf.to_float(input_y).

\n", + "system": "" + }, + { + "instruction": "Choosing from different cost function and activation function of a neural network", + "input": "", + "output": "

I will answer your questions a little bit out of order, starting with more general answers, and finishing with those specific to your particular experiment.

\n\n

Activation functions Different activation functions, in fact, do have different properties. Let's first consider an activation function between two layers of a neural network. The only purpose of an activation function there is to serve as a nonlinearity. If you do not put an activation function between two layers, then the two layers together will serve no better than one, because their effect will still be just a linear transformation. For a long while people were using the sigmoid function and tanh, choosing pretty much arbitrarily, with sigmoid being more popular, until recently, when ReLU became the dominant nonlinearity. The reason why people use ReLU between layers is that it is non-saturating (and is also faster to compute). Think about the graph of a sigmoid function. If the absolute value of x is large, then the derivative of the sigmoid function is small, which means that as we propagate the error backwards, the gradient of the error will vanish very quickly as we go back through the layers. With ReLU the derivative is 1 for all positive inputs, so the gradient for those neurons that fired will not be changed by the activation unit at all and will not slow down the gradient descent.

\n\n
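The saturation argument is easy to check numerically: the sigmoid's derivative is s(x)(1 - s(x)), which collapses for large |x|, while ReLU's derivative stays at 1 for any positive input:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0

# for large |x| the sigmoid gradient vanishes, the ReLU gradient does not
print(sigmoid_grad(10))  # ~4.5e-05
print(relu_grad(10))     # 1.0
```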

For the last layer of the network the activation unit also depends on the task. For regression you will want to use the sigmoid or tanh activation when you need the result to lie in a bounded range (sigmoid outputs values between 0 and 1, tanh between -1 and 1). For classification you will want only one of your outputs to be one and all others zero, but there's no differentiable way to achieve precisely that, so you will want to use a softmax to approximate it.

\n\n

Your example. Now let's look at your example. Your first example tries to compute the output of AND in the following form:

\n\n
sigmoid(W1 * x1 + W2 * x2 + B)\n
\n\n

Note that W1 and W2 will always converge to the same value, because the output for (x1, x2) should be equal to the output of (x2, x1). Therefore, the model that you are fitting is:

\n\n
sigmoid(W * (x1 + x2) + B)\n
\n\n

x1 + x2 can only take one of three values (0, 1 or 2) and you want to return 0 for the case when x1 + x2 < 2 and 1 for the case when x1 + x2 = 2. Since the sigmoid function is rather smooth, it will take very large values of W and B to make the output close to the desired, but because of a small learning rate they can't get to those large values fast. Increasing the learning rate in your first example will increase the speed of convergence.
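To see this concretely, here is a small numpy sketch (the weight values are made up for illustration) showing that the AND fit only works once W and B become fairly large:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical large weights of the kind the model must eventually learn;
# the threshold sits between x1 + x2 = 1 and x1 + x2 = 2
W, B = 10.0, -15.0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = sigmoid(W * (x1 + x2) + B)
    # The output rounds to 1 only when both inputs are 1 (the AND function)
    assert round(out) == (x1 and x2)

# With small weights the separation is poor: sigmoid(1.0 * 2 - 1.5) is only ~0.62
print(sigmoid(1.0 * 2 - 1.5))
```

With a small learning rate, W and B creep towards these large values slowly, which is exactly the slow convergence described above.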

\n\n

Your second example converges better because the softmax function is good at making precisely one output be equal to 1 and all others to 0. Since this is precisely your case, it does converge quickly. Note that sigmoid would also eventually converge to good values, but it will take significantly more iterations (or higher learning rate).
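A quick numpy illustration (with made-up logits) of why softmax snaps to a near-one-hot output so readily:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Even a modest gap between logits already gives a near-one-hot output
probs = softmax(np.array([2.0, -2.0]))
assert probs.argmax() == 0
assert probs[0] > 0.95
assert np.isclose(probs.sum(), 1.0)
```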

\n\n

What to use. Now to the last question: how does one choose which activation and cost functions to use? This advice will work for the majority of cases:

\n\n
    \n
  1. If you do classification, use softmax for the last layer's nonlinearity and cross entropy as a cost function.

  2. \n
  3. If you do regression, use sigmoid or tanh for the last layer's nonlinearity and squared error as a cost function.

  4. \n
  5. Use ReLU as a nonlinearity between layers.

  6. \n
  7. Use better optimizers (AdamOptimizer, AdagradOptimizer) instead of GradientDescentOptimizer, or use momentum for faster convergence.

  8. \n
\n", + "system": "" + }, + { + "instruction": "Is sparse tensor multiplication implemented in TensorFlow?", + "input": "", + "output": "

General-purpose multiplication for tf.SparseTensor is not currently implemented in TensorFlow. However, there are three partial solutions, and the right one to choose will depend on the characteristics of your data:

\n\n\n", + "system": "" + }, + { + "instruction": "Tensorflow2 warning using @tffunction", + "input": "", + "output": "

tf.function has some \"peculiarities\". I highly recommend reading this article: https://www.tensorflow.org/tutorials/customization/performance

\n\n

In this case, the problem is that the function is \"retraced\" (i.e. a new graph is built) every time you call with a different input signature. For tensors, input signature refers to shape and dtype, but for Python numbers, every new value is interpreted as \"different\". In this case, because you call the function with a step variable that changes every time, the function is retraced every single time as well. This will be extremely slow for \"real\" code (e.g. calling a model inside the function).
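The caching behaviour can be mimicked in plain Python (a conceptual sketch of the idea, not TensorFlow's actual implementation): tf.function keeps one graph per input signature, and a Python scalar's value is part of the signature, while a tensor contributes only its shape and dtype.

```python
# Conceptual sketch of tf.function's trace cache (not the real implementation)
traces = set()

def maybe_retrace(signature):
    if signature not in traces:
        traces.add(signature)  # a new graph would be built here

# Python ints: every distinct value is a distinct signature -> 100 retraces
for step in range(100):
    maybe_retrace(('python_int', step))
assert len(traces) == 100

# Tensors: only dtype and shape matter -> a single trace covers all 100 steps
traces.clear()
for step in range(100):
    maybe_retrace(('tensor', 'int64', ()))
assert len(traces) == 1
```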

\n\n

You can fix it by simply converting step to a tensor, in which case the different values will not count as a new input signature:

\n\n
for step in range(100):\n    step = tf.convert_to_tensor(step, dtype=tf.int64)\n    my_func(step)\n    writer.flush()\n
\n\n

or use tf.range to get tensors directly:

\n\n
for step in tf.range(100):\n    step = tf.cast(step, tf.int64)\n    my_func(step)\n    writer.flush()\n
\n\n

This should not produce warnings (and be much faster).

\n", + "system": "" + }, + { + "instruction": "What is the proper way to weight decay for Adam Optimizer", + "input": "", + "output": "

Edit: see also this PR which just got merged into TF.

\n\n

When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding a L2-regularization term to the loss. When using any other optimizer, this is not true.

\n\n

Weight decay (don't know how to TeX here, so excuse my pseudo-notation):

\n\n
w[t+1] = w[t] - learning_rate * dw - weight_decay * w\n
\n\n

L2-regularization:

\n\n
loss = actual_loss + lambda * 1/2 sum(||w||_2 for w in network_params)\n
\n\n

Computing the gradient of the extra term in L2-regularization gives lambda * w and thus inserting it into the SGD update equation

\n\n
dloss_dw = dactual_loss_dw + lambda * w\nw[t+1] = w[t] - learning_rate * dw\n
\n\n

gives the same update as weight decay, but mixes lambda with the learning_rate. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay than for L2-regularization! See the paper Fixing weight decay in Adam for more details. (Edit: AFAIK, this 1987 Hinton paper introduced \"weight decay\", literally as \"each time the weights are updated, their magnitude is also decremented by 0.4%\" at page 10)
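As a quick numerical sanity check of the update rules above (a numpy sketch with made-up values), plain SGD with the L2 gradient term matches the weight-decay update exactly when weight_decay = learning_rate * lambda:

```python
import numpy as np

w = np.array([0.5, -1.0])
dactual_loss_dw = np.array([0.2, 0.3])  # gradient of the task loss alone
learning_rate, lam = 0.1, 0.01

# SGD on the L2-regularized loss: gradient of the extra term is lambda * w
w_l2 = w - learning_rate * (dactual_loss_dw + lam * w)

# Weight decay with weight_decay = learning_rate * lambda
w_wd = w - learning_rate * dactual_loss_dw - (learning_rate * lam) * w

assert np.allclose(w_l2, w_wd)
```

For any optimizer that rescales the gradient (Adam, Adagrad, momentum), the lam * w term gets rescaled along with the task gradient, so this equality no longer holds.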

\n\n

That being said, there doesn't seem to be support for \"proper\" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper.

\n\n

One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and \"attaching\" it to your train_op. Both of these are just crude work-arounds, though. My current code:

\n\n
# In the network definition:\nwith arg_scope([layers.conv2d, layers.dense],\n               weights_regularizer=layers.l2_regularizer(weight_decay)):\n    # define the network.\n\nloss = # compute the actual loss of your problem.\ntrain_op = optimizer.minimize(loss, global_step=global_step)\nif args.weight_decay not in (None, 0):\n    with tf.control_dependencies([train_op]):\n        sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0)\n        train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)))\n
\n\n

This somewhat makes use of TensorFlow's provided bookkeeping. Note that the arg_scope takes care of appending an L2-regularization term for every layer to the REGULARIZATION_LOSSES graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.

\n\n

Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.

\n", + "system": "" + }, + { + "instruction": "tensorflow: what's the difference between tf.nn.dropout and tf.layers.dropout", + "input": "", + "output": "

A quick glance through \ntensorflow/python/layers/core.py and tensorflow/python/ops/nn_ops.py\nreveals that tf.layers.dropout is a wrapper for tf.nn.dropout.

\n\n

The only differences in the two functions are:

\n\n
    \n
  1. The tf.nn.dropout has parameter keep_prob: \"Probability that each element is kept\"
    tf.layers.dropout has parameter rate: \"The dropout rate\"
    Thus, keep_prob = 1 - rate as defined here
  2. \n
  3. The tf.layers.dropout has training parameter: \"Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).\"
  4. \n
\n", + "system": "" + }, + { + "instruction": "How does asynchronous training work in distributed Tensorflow?", + "input": "", + "output": "

When you train asynchronously in Distributed TensorFlow, a particular worker does the following:

\n\n
    \n
  1. The worker reads all of the shared model parameters in parallel from the PS task(s), and copies them to the worker task. These reads are uncoordinated with any concurrent writes, and no locks are acquired: in particular the worker may see partial updates from one or more other workers (e.g. a subset of the updates from another worker may have been applied, or a subset of the elements in a variable may have been updated).

  2. \n
  3. The worker computes gradients locally, based on a batch of input data and the parameter values that it read in step 1.

  4. \n
  5. The worker sends the gradients for each variable to the appropriate PS task, which applies them to their respective variables, using an update rule determined by the optimization algorithm (e.g. SGD, SGD with Momentum, Adagrad, Adam, etc.). The update rules typically use (approximately) commutative operations, so they may be applied independently on the updates from each worker, and the state of each variable will be a running aggregate of the sequence of updates received.

  6. \n
\n\n

In asynchronous training, each update from the worker is applied concurrently, and the updates may be somewhat coordinated if the optional use_locking=True flag was set when the respective optimizer (e.g. tf.train.GradientDescentOptimizer) was initialized. Note however that the locking here only provides mutual exclusion for two concurrent updates, and (as noted above) reads do not acquire locks; the locking does not provide atomicity across the entire set of updates.
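The three-step loop above can be caricatured in a few lines of numpy (a toy, single-threaded simulation; real workers run concurrently and reads may observe partial updates from other workers):

```python
import numpy as np

ps_var = np.array([1.0, -2.0])  # shared parameter held by the PS task
learning_rate = 0.1

def worker_step(ps_value, batch_grad):
    local_copy = ps_value.copy()            # step 1: uncoordinated read
    grad = batch_grad                       # step 2: gradients from a local batch
    return ps_value - learning_rate * grad  # step 3: PS applies the update

ps_var = worker_step(ps_var, np.array([0.5, 0.5]))
ps_var = worker_step(ps_var, np.array([-0.5, 0.5]))

# Plain SGD updates are commutative, so the final state does not depend on
# the order in which the workers' updates arrived
assert np.allclose(ps_var, [1.0, -2.1])
```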

\n\n

(By contrast, in synchronous training, a utility like tf.train.SyncReplicasOptimizer will ensure that all of the workers read the same, up-to-date values for each model parameter; and that all of the updates for a synchronous step are aggregated before they are applied to the underlying variables. To do this, the workers are synchronized by a barrier, which they enter after sending their gradient update, and leave after the aggregated update has been applied to all variables.)

\n", + "system": "" + }, + { + "instruction": "Tensorflow cannot open libcuda.so.1", + "input": "", + "output": "

libcuda.so.1 is a symlink to a file that is specific to the version of your NVIDIA drivers. It may be pointing to the wrong version or it may not exist.

\n\n
# See where the link is pointing.  \nls  /usr/lib/x86_64-linux-gnu/libcuda.so.1 -la\n# My result:\n# lrwxrwxrwx 1 root root 19 Feb 22 20:40 \\\n# /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> ./libcuda.so.375.39\n\n# Make sure it is pointing to the right version. \n# Compare it with the installed NVIDIA driver.\nnvidia-smi\n\n# Replace libcuda.so.1 with a link to the correct version\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -f -s libcuda.so.<yournvidia.version> libcuda.so.1\n
\n\n

Now in the same way, make another symlink from libcuda.so.1 to a link of the same name in your LD_LIBRARY_PATH directory.

\n\n

You may also find that you need to create a link to libcuda.so.1 in /usr/lib/x86_64-linux-gnu named libcuda.so

\n", + "system": "" + }, + { + "instruction": "What is the proper way to weight decay for Adam Optimizer", + "input": "", + "output": "

\n", + "system": "" + }, + { + "instruction": "Anaconda ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found", + "input": "", + "output": "

I spent a day working on this having encountered the same exact problem working on my research university's computing cluster with the same specs as you, and I finally found the right Stack Overflow thread. None of the above answers here work, unfortunately, but I can say with very high confidence that the details in the linked thread should solve your problem even though the source of the error traceback was different.

\n

To summarize, you'll need to add the path to the lib folder in anaconda to LD_LIBRARY_PATH:

\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path/to/conda/env/lib\n
\n

In my case, I just did:

\n
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/anaconda3/lib\n
\n

...and it worked like a charm!

\n", + "system": "" + }, + { + "instruction": "How to change python version in Anaconda?", + "input": "", + "output": "

A better (recommended) alternative is to create a virtual environment of the desired Python version and then use that environment to run Tensorflow and other scripts.

\n\n

To do that, you can follow the instructions given here.

\n\n

BUT, if you don't want to create a separate environment, then conda install python=<version> should do.

\n\n

OR (not recommended) you can download the \"latest\" Anaconda installer with your required Python version bundled.

\n\n

Source

\n", + "system": "" + }, + { + "instruction": "Does TensorFlow plan to support OpenCL?", + "input": "", + "output": "

As part of contrib, you can build Tensorflow with SYCL support.

\n\n

SYCL is \"single source OpenCL\", a new standard from Khronos that allows one to write high level C++ code that can be compiled to run on OpenCL devices.

\n\n

The folks at CodePlay software have been heavily involved in it, and you can see one of their blog posts on the topic here.

\n\n

So in short, you're not going to get a pip package of it; you'll need to build it yourself. And the performance might not be as good, since the project is still in its early days.

\n\n

You can find a tutorial on how to get started here. Bear in mind, this uses CodePlay's proprietary version of SYCL, but maybe you can get it working with an open implementation such as triSYCL.

\n", + "system": "" + }, + { + "instruction": "get the CUDA and CUDNN version on windows with Anaconda installe", + "input": "", + "output": "

Use the following command to check CUDA installation by Conda:

\n\n
conda list cudatoolkit\n
\n\n

And the following command to check CUDNN version installed by conda:

\n\n
conda list cudnn\n
\n\n

If you want to install/update CUDA and CUDNN through CONDA, please use the following commands:

\n\n
conda install -c anaconda cudatoolkit\nconda install -c anaconda cudnn\n
\n\n

Alternatively you can use following commands to check CUDA installation:

\n\n
nvidia-smi\n
\n\n

OR

\n\n
nvcc --version\n
\n", + "system": "" + }, + { + "instruction": "How to know which version of docker image is behind latest tag?", + "input": "", + "output": "

Go to the image's webpage (nginx in my case): https://hub.docker.com/_/nginx\nThen open the Tags tab,\ngo to the latest tag and copy its sha256 sum.\nThen sort by newest and scroll down until the first numbered version,\nand check whether the exact same sha256 is displayed.

\n

Even after that fishing through library/nginx, there is still a sure way to confirm it:

\n

You can verify whether you got it right. For example, I found that nginx:latest is actually 1.17.8, so I run:

\n
docker pull nginx:1.17.8\n1.17.8: Pulling from library/nginx\nbc51dd8edc1b: Pull complete\n66ba67045f57: Pull complete\nbf317aa10aa5: Pull complete\nDigest:sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f\nStatus: Downloaded newer image for nginx:1.17.8\n
\n

and then I verify by attempting to pull latest:

\n
docker pull nginx:latest\nlatest: Pulling from library/nginx\nDigest: sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f\nStatus: Downloaded newer image for nginx:latest\n
\n

As you can see, it didn't actually pull anything, and the sha256 is exactly the same ;)

\n", + "system": "" + }, + { + "instruction": "How to load a tflite model in script?", + "input": "", + "output": "

You can use TensorFlow Lite Python interpreter to load the tflite model in a python shell, and test it with your input data.

\n\n

The code will be like this:

\n\n
import numpy as np\nimport tensorflow as tf\n\n# Load TFLite model and allocate tensors.\ninterpreter = tf.lite.Interpreter(model_path=\"converted_model.tflite\")\ninterpreter.allocate_tensors()\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# Test model on random input data.\ninput_shape = input_details[0]['shape']\ninput_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)\ninterpreter.set_tensor(input_details[0]['index'], input_data)\n\ninterpreter.invoke()\n\n# The function `get_tensor()` returns a copy of the tensor data.\n# Use `tensor()` in order to get a pointer to the tensor.\noutput_data = interpreter.get_tensor(output_details[0]['index'])\nprint(output_data)\n
\n\n

The above code is from TensorFlow Lite official guide, for more detailed information, read this.

\n", + "system": "" + }, + { + "instruction": "What is regularization loss in tensorflow?", + "input": "", + "output": "

TL;DR: it's just the additional loss generated by the regularization function. Add that to the network's loss and optimize over the sum of the two.

\n\n

As you correctly state, regularization methods are used to help an optimization method to generalize better.\nA way to obtain this is to add a regularization term to the loss function. This term is a generic function, which modifies the \"global\" loss (as in, the sum of the network loss and the regularization loss) in order to drive the optimization algorithm in desired directions.

\n\n

Let's say, for example, that for whatever reason I want to encourage solutions to the optimization that have weights as close to zero as possible. One approach, then, is to add to the loss produced by the network a function of the network weights (for example, a scaled-down sum of all the absolute values of the weights). Since the optimization algorithm minimizes the global loss, my regularization term (which is high when the weights are far from zero) will push the optimization towards solutions that have weights close to zero.
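With made-up numbers, a numpy sketch of that sum-of-absolute-values example looks like this:

```python
import numpy as np

# The global loss is the network loss plus a scaled-down sum of the
# absolute values of the weights (the example described above)
weights = np.array([0.5, -1.5, 0.0, 2.0])
network_loss = 0.3
reg_scale = 0.01

regularization_loss = reg_scale * np.sum(np.abs(weights))
global_loss = network_loss + regularization_loss

assert np.isclose(regularization_loss, 0.04)
assert np.isclose(global_loss, 0.34)
```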

\n", + "system": "" + }, + { + "instruction": "In TensorFlow, what is the argument 'axis' in the function 'tf.one_hot'", + "input": "", + "output": "

Here's an example:

\n\n\n\n
x = tf.constant([0, 1, 2])\n
\n\n

... is the input tensor and N=4 (each index is transformed into a 4D vector).

\n\n

axis=-1

\n\n

Computing one_hot_1 = tf.one_hot(x, 4).eval() yields a (3, 4) tensor:

\n\n
[[ 1.  0.  0.  0.]\n [ 0.  1.  0.  0.]\n [ 0.  0.  1.  0.]]\n
\n\n

... where the last dimension is one-hot encoded (clearly visible). This corresponds to the default axis=-1, i.e. the last one.

\n\n

axis=0

\n\n

Now, computing one_hot_2 = tf.one_hot(x, 4, axis=0).eval() yields a (4, 3) tensor, which is not immediately recognizable as one-hot encoded:

\n\n
[[ 1.  0.  0.]\n [ 0.  1.  0.]\n [ 0.  0.  1.]\n [ 0.  0.  0.]]\n
\n\n

This is because the one-hot encoding is done along the 0-axis and one has to transpose the matrix to see the previous encoding. The situation becomes more complicated, when the input is higher dimensional, but the idea is the same: the difference is in placement of the extra dimension used for one-hot encoding.
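The same placement rule can be checked with a small numpy sketch (mirroring the TensorFlow example above):

```python
import numpy as np

x = np.array([0, 1, 2])
depth = 4

one_hot_last = np.eye(depth)[x]  # like tf.one_hot(x, 4): shape (3, 4)
one_hot_axis0 = one_hot_last.T   # like tf.one_hot(x, 4, axis=0): shape (4, 3)

assert one_hot_last.shape == (3, 4)
assert one_hot_axis0.shape == (4, 3)
# Transposing the axis=0 result recovers the axis=-1 encoding, as described
assert np.array_equal(one_hot_axis0.T, one_hot_last)
```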

\n", + "system": "" + }, + { + "instruction": "Best strategy to reduce false positives: Google's new Object Detection API on Satellite Imagery", + "input": "", + "output": "

I've revisited this topic recently in my work and thought I'd update with my current learnings for any who visit in the future.

\n\n

The topic appeared on Tensorflow's Models repo issue tracker. SSD allows you to set the ratio of how many negative:positive examples to mine (max_negatives_per_positive: 3), but you can also set a minimum number for images with no positives (min_negatives_per_image: 3). Both of these are defined in the model-ssd-loss config section.
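For reference, a hard_example_miner block inside the SSD loss config looks roughly like this (the field values are made up; the field names match the builder code quoted later in this answer, but check the protos in the repo for the authoritative schema):

```
loss {
  hard_example_miner {
    num_hard_examples: 3000
    iou_threshold: 0.99
    loss_type: CLASSIFICATION
    max_negatives_per_positive: 3
    min_negatives_per_image: 3
  }
}
```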

\n\n

That said, I don't see the same option in Faster-RCNN's model configuration. It's mentioned in the issue that models/research/object_detection/core/balanced_positive_negative_sampler.py contains the code used for Faster-RCNN.

\n\n

One other option discussed in the issue is creating a second class specifically for lookalikes. During training, the model will attempt to learn class differences which should help serve your purpose.

\n\n

Lastly, I came across this article on Filter Amplifier Networks (FAN) that may be informative for your work on aerial imagery.

\n\n

===================================================================

\n\n

The following paper describes hard negative mining for the same purpose you describe:\nTraining Region-based Object Detectors with Online Hard Example Mining

\n\n

In section 3.1 they describe using a foreground and background class:

\n\n
\n

Background RoIs. A region is labeled background (bg) if its maximum\n IoU with ground truth is in the interval [bg lo, 0.5). A lower\n threshold of bg lo = 0.1 is used by both FRCN and SPPnet, and is\n hypothesized in [14] to crudely approximate hard negative mining; the\n assumption is that regions with some overlap with the ground truth are\n more likely to be the confusing or hard ones. We show in Section 5.4\n that although this heuristic helps convergence and detection accuracy,\n it is suboptimal because it ignores some infrequent, but important,\n difficult background regions. Our method removes the bg lo threshold.

\n
\n\n

In fact this paper is referenced and its ideas are used in Tensorflow's object detection losses.py code for hard mining:

\n\n
class HardExampleMiner(object):\n\"\"\"Hard example mining for regions in a list of images.\nImplements hard example mining to select a subset of regions to be\nback-propagated. For each image, selects the regions with highest losses,\nsubject to the condition that a newly selected region cannot have\nan IOU > iou_threshold with any of the previously selected regions.\nThis can be achieved by re-using a greedy non-maximum suppression algorithm.\nA constraint on the number of negatives mined per positive region can also be\nenforced.\nReference papers: \"Training Region-based Object Detectors with Online\nHard Example Mining\" (CVPR 2016) by Srivastava et al., and\n\"SSD: Single Shot MultiBox Detector\" (ECCV 2016) by Liu et al.\n\"\"\"\n
\n\n

Based on your model config file, the HardMinerObject is returned by losses_builder.py in this bit of code:

\n\n
def build_hard_example_miner(config,\n                            classification_weight,\n                            localization_weight):\n\"\"\"Builds hard example miner based on the config.\nArgs:\n    config: A losses_pb2.HardExampleMiner object.\n    classification_weight: Classification loss weight.\n    localization_weight: Localization loss weight.\nReturns:\n    Hard example miner.\n\"\"\"\nloss_type = None\nif config.loss_type == losses_pb2.HardExampleMiner.BOTH:\n    loss_type = 'both'\nif config.loss_type == losses_pb2.HardExampleMiner.CLASSIFICATION:\n    loss_type = 'cls'\nif config.loss_type == losses_pb2.HardExampleMiner.LOCALIZATION:\n    loss_type = 'loc'\n\nmax_negatives_per_positive = None\nnum_hard_examples = None\nif config.max_negatives_per_positive > 0:\n    max_negatives_per_positive = config.max_negatives_per_positive\nif config.num_hard_examples > 0:\n    num_hard_examples = config.num_hard_examples\nhard_example_miner = losses.HardExampleMiner(\n    num_hard_examples=num_hard_examples,\n    iou_threshold=config.iou_threshold,\n    loss_type=loss_type,\n    cls_loss_weight=classification_weight,\n    loc_loss_weight=localization_weight,\n    max_negatives_per_positive=max_negatives_per_positive,\n    min_negatives_per_image=config.min_negatives_per_image)\nreturn hard_example_miner\n
\n\n

which is returned by model_builder.py and called by train.py. So basically, it seems to me that simply generating your true positive labels (with a tool like LabelImg or RectLabel) should be enough for the train algorithm to find hard negatives within the same images. The related question gives an excellent walkthrough.

\n\n

In the event you want to feed in data that has no true positives (i.e. nothing should be classified in the image), just add the negative image to your tfrecord with no bounding boxes.

\n", + "system": "" + }, + { + "instruction": "Train Tensorflow Object Detection on own dataset", + "input": "", + "output": "

This assumes the module is already installed. Please refer to their documentation if not.

\n\n

Disclaimer

\n\n

This answer is not meant to be the right or only way of training the object detection module. This is simply I sharing my experience and what has worked for me. I'm open to suggestions and learning more about this as I am still new to ML in general.

\n\n

TL;DR

\n\n
    \n
  1. Create your own PASCAL VOC format dataset
  2. \n
  3. Generate TFRecords from it
  4. \n
  5. Configure a pipeline
  6. \n
  7. Visualize
  8. \n
\n\n

Each section of this answer consists of a corresponding Edit (see below). After reading each section, please read its Edit as well for clarifications. Corrections and tips were added for each section.

\n\n

Tools used

\n\n

LabelImg: A tool for creating PASCAL VOC format annotations.

\n\n

1. Create your own PASCAL VOC dataset

\n\n

PS: For simplicity, the folder naming convention of my answer follows that of Pascal VOC 2012

\n\n

A peek into the May 2012 dataset, you'll notice the folder as having the following structure

\n\n

\n+VOCdevkit\n +VOC2012\n +Annotations\n +ImageSets\n +Action\n +Layout\n +Main\n +Segmentation\n +JPEGImages\n +SegmentationClass\n +SegmentationObject\n

\n\n

For the time being, amendments were made to the following folders:

\n\n

Annotations: This is where all the images' corresponding XML files will be placed. Use the suggested tool above to create the annotations. Do not worry about <truncated> and <difficult> tags as they will be ignored by the training and eval binaries.

\n\n

JPEGImages: Location of your actual images. Make sure they are of type JPEG because that's what is currently supported in order to create TFRecords using their provided script.

\n\n

ImageSets->Main: This simply consists of text files. For each class, there exists a corresponding train.txt, trainval.txt and val.txt. Below is a sample of the contents of the aeroplane_train.txt in the VOC 2012 folder

\n\n
2008_000008 -1\n2008_000015 -1\n2008_000019 -1\n2008_000023 -1\n2008_000028 -1\n2008_000033  1\n
\n\n

The structure is basically the image name followed by a flag saying whether the corresponding object exists in that image or not. For example, image 2008_000008 does not contain an aeroplane and is hence marked with a -1, but image 2008_000033 does.

\n\n

I wrote a small Python script to generate these text files. Simply iterate through the image names and assign a 1 or -1 next to them for object existence. I added some randomness among my text files by shuffling the image names.
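A sketch of such a script (the image names and the positive set below are made up; in practice you would list the names from your JPEGImages folder):

```python
import random

def generate_split(image_names, positives, seed=0):
    # Shuffle for some randomness among the files, then emit 'name flag' lines
    names = list(image_names)
    random.Random(seed).shuffle(names)
    return [f'{name} {1 if name in positives else -1}' for name in names]

lines = generate_split(
    ['2008_000008', '2008_000015', '2008_000033'],
    positives={'2008_000033'},
)
assert set(lines) == {'2008_000008 -1', '2008_000015 -1', '2008_000033 1'}
```

Write the returned lines to the appropriate {classname}_train.txt file, one per line.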

\n\n

The {classname}_val.txt files make up the validation dataset. Think of this as the test data used during training. You want to divide your dataset into training and validation sets. More info can be found here. The format of these files is similar to that of the training files.

\n\n

At this point, your folder structure should be

\n\n

\n+VOCdevkit\n +VOC2012\n +Annotations\n --(for each image, generated annotation)\n +ImageSets\n +Main\n --(for each class, generated *classname*_train.txt and *classname*_val.txt)\n +JPEGImages\n --(a bunch of JPEG images)\n

\n\n
\n\n

1.1 Generating label map

\n\n

With the dataset prepared, we need to create the corresponding label maps.\nNavigate to models/object_detection/data and open pascal_label_map.pbtxt.

\n\n

This file consists of entries in protobuf text format that assign an ID and a name to each item. Make amendments to this file to reflect your desired objects.

\n\n
\n\n

2. Generate TFRecords

\n\n

If you look into their code, especially this line, they explicitly grab the aeroplane_train.txt only. For curious minds, here's why. Change this file name to any of your class train text files.

\n\n

Make sure VOCdevkit is inside models/object_detection then you can go ahead and generate the TFRecords.

\n\n

Please go through their code first should you run into any problems. It is self explanatory and well documented.

\n\n
\n\n

3. Pipeline Configuration

\n\n

The instructions should be self explanatory to cover this segment. Sample configs can be found in object_detection/samples/configs.

\n\n

For those looking to train from scratch as I did, just make sure to remove the fine_tune_checkpoint and from_detection_checkpoint nodes. Here's what my config file looked like for reference.

\n\n

From here on you can continue with the tutorial and run the training process.

\n\n
\n\n

4. Visualize

\n\n

Be sure to run the eval in parallel to the training in order to be able to visualize the learning process. To quote Jonathan Huang

\n\n
\n

the best way is to just run the eval.py binary. We typically run this\n binary in parallel to training, pointing it at the directory holding\n the checkpoint that is being trained. The eval.py binary will write\n logs to an eval_dir that you specify which you can then point to\n with Tensorboard.

\n \n

You want to see that the mAP has \"lifted off\" in the first few hours,\n and then you want to see when it converges. It's hard to tell without\n looking at these plots how many steps you need.

\n
\n\n
\n\n

EDIT I (28 July '17):

\n\n

I never expected my response to get this much attention so I decided to come back and review it.

\n\n

Tools

\n\n

For my fellow Apple users, you could actually use RectLabel for annotations.

\n\n

Pascal VOC

\n\n

After digging around, I finally realized that trainval.txt is actually the union of training and validation datasets.

\n\n

Please look at their official development kit to understand the format even better.

\n\n

Label Map Generation

\n\n

At the time of my writing, ID 0 represents none_of_the_above. It is recommended that your IDs start from 1.

\n\n
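For reference, a minimal label map following this recommendation could look like the sketch below (the class names are placeholders):

```
item {
  id: 1
  name: 'aeroplane'
}
item {
  id: 2
  name: 'bicycle'
}
```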

Visualize

\n\n

After running your evaluation and pointing TensorBoard at your eval directory, it'll show you the mAP along with each category's performance. This is good, but I also like seeing my training data in parallel with the eval.

\n\n

To do this, run tensorboard on a different port and point it to your train directory:

\n\n
tensorboard --logdir=${PATH_TO_TRAIN} --port=${DESIRED_NUMBER}\n
\n", + "system": "" + }, + { + "instruction": "What are the constraints for tensorflow scope names?", + "input": "", + "output": "

From the TF source:

\n\n
\n

NOTE: This constructor validates the given name. Valid scope

\n \n

names match one of the following regular expressions:

\n\n
[A-Za-z0-9.][A-Za-z0-9_.\\\\-/]* (for scopes at the root)\n[A-Za-z0-9_.\\\\-/]* (for other scopes)\n
\n
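If you want to sanity-check a candidate name against these rules yourself, here is a small plain-Python sketch; the helper name is mine, and the patterns are copied from the note above (TF does the actual validation internally):

```python
import re

# Patterns from the TF source note above, anchored to match the whole name.
ROOT_SCOPE_RE = re.compile(r"[A-Za-z0-9.][A-Za-z0-9_.\-/]*$")   # scopes at the root
CHILD_SCOPE_RE = re.compile(r"[A-Za-z0-9_.\-/]*$")              # other scopes

def is_valid_scope_name(name, at_root=True):
    pattern = ROOT_SCOPE_RE if at_root else CHILD_SCOPE_RE
    return pattern.match(name) is not None

print(is_valid_scope_name("layer_1/conv"))            # True
print(is_valid_scope_name("_hidden"))                 # False: root scope can't start with '_'
print(is_valid_scope_name("_hidden", at_root=False))  # True
print(is_valid_scope_name("bad name"))                # False: spaces are not allowed
```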
\n", + "system": "" + }, + { + "instruction": "Add L2 regularization when using high level tf.layers", + "input": "", + "output": "

You can pass them into tf.layers.conv2d as arguments:

\n\n
regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)\nlayer2 = tf.layers.conv2d(\n    inputs,\n    filters,\n    kernel_size,\n    kernel_regularizer=regularizer)\n
\n\n

Then you should add the regularization loss to your loss like this:

\n\n
l2_loss = tf.losses.get_regularization_loss()\nloss += l2_loss\n
\n\n

Edit: Thanks Zeke Arneodo, Tom and srcolinas; I added the last bit based on your feedback so that the accepted answer provides the complete solution.

\n", + "system": "" + }, + { + "instruction": "What does the function control_dependencies do?", + "input": "", + "output": "

control_dependencies is not a conditional. It is a mechanism to add dependencies to whatever ops you create in the with block. More specifically, what you specify in the argument to control_dependencies is ensured to be evaluated before anything you define in the with block.

\n\n

In your example, you don't add any (TensorFlow) operations in the with block, so the block does nothing.

\n\n

This answer has an example of how to use control_dependencies, where it is used to make sure the assignments happen before the batchnorm operations are evaluated.

\n", + "system": "" + }, + { + "instruction": "ImportError: libcudnn when running a TensorFlow program", + "input": "", + "output": "

Just download cuDNN 5.1 and follow the steps (tested on Ubuntu 16.04 with CUDA toolkit 8.0):

\n\n
$ tar xvzf cudnn-8.0-linux-x64-v5.1-ga.tgz\n$ sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include\n$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64\n$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*\n
\n\n

Now set Path variables

\n\n
$ vim ~/.bashrc\n\nexport LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64\"\nexport CUDA_HOME=/usr/local/cuda\n
\n\n

and you are done.

\n\n

For more details, you can check this site

\n", + "system": "" + }, + { + "instruction": "Why would I ever use tf.concat instead of tf.stack?", + "input": "", + "output": "

Actually, I've misunderstood how tf.stack works. If the axis parameter is within the range of the existing dimensions, a new axis will be inserted at that index.

\n\n

Example:

\n\n
import tensorflow as tf\n\nt1 = tf.random_normal([1, 3])\nt2 = tf.random_normal([1, 3])\n\ntf.stack([t1, t2], axis=1).shape.as_list() == [1, 2, 3]\ntf.concat([t1, t2], axis=1).shape.as_list() == [1, 6]\n
\n", + "system": "" + }, + { + "instruction": "Cast string to float is not supported in Linear Model", + "input": "", + "output": "

I had the exact same problem: you need to make sure that the input data you are feeding the model is in the right format (not just the features but also the label column).

\n\n

My problem was that I was not skipping the first row in the data file, so I was trying to convert the column titles to float. Something as simple as adding

\n\n
skiprows=1\n
\n\n

When reading the csv:

\n\n
df_test = pd.read_csv(test_file, names=COLUMNS_TEST, skipinitialspace=True, skiprows=1, engine=\"python\")\n
\n\n

I would recommend you to check:

\n\n
df_test.dtypes\n
\n\n

You should get something like

\n\n
Feature1      int64\nFeature2      int64\nFeature3      int64\nFeature4      object\nFeature5      object\nFeature6      float64\ndtype: object\n
\n\n

If you are not getting the correct dtypes, then model.fit is going to fail.
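To see concretely why an unskipped header row causes the float cast to fail, here is a tiny plain-Python sketch (no pandas; the column names and values are invented):

```python
rows = [
    ["Feature1", "Feature2"],  # header row that should have been skipped
    ["3", "7.5"],
]

# Casting every cell to float fails on the header row, just like the model did.
def parse(rows):
    return [[float(v) for v in row] for row in rows]

try:
    parse(rows)
except ValueError as err:
    print("cast failed:", err)

print(parse(rows[1:]))  # skipping the first row works: [[3.0, 7.5]]
```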

\n", + "system": "" + }, + { + "instruction": "How to interpret TensorFlow output?", + "input": "", + "output": "

About NUMA -- https://software.intel.com/en-us/articles/optimizing-applications-for-numa

\n\n

Roughly speaking, if you have dual-socket CPU, they will each have their own memory and have to access the other processor's memory through a slower QPI link. So each CPU+memory is a NUMA node.

\n\n

Potentially you could treat two different NUMA nodes as two different devices and structure your network to optimize for the different within-node/between-node bandwidths.

\n\n

However, I don't think there's enough wiring in TF right now to do this. The detection doesn't work either -- I just tried it on a machine with 2 NUMA nodes, and it still printed the same message and initialized to 1 NUMA node.

\n\n

DMA = Direct Memory Access. You could potentially copy things from one GPU to another GPU without utilizing CPU (ie, through NVlink). NVLink integration isn't there yet.

\n\n

As far as the error, TensorFlow tries to allocate close to the GPU's max memory, so it sounds like some of your GPU memory has already been allocated to something else and the allocation failed.

\n\n

You can do something like below to avoid allocating so much memory

\n\n
config = tf.ConfigProto(log_device_placement=True)\nconfig.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM\nconfig.operation_timeout_in_ms=15000   # terminate on long hangs\nsess = tf.InteractiveSession(\"\", config=config)\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow: Opening log data written by SummaryWriter", + "input": "", + "output": "

As of March 2017, the EventAccumulator tool has been moved from Tensorflow core to the Tensorboard Backend. You can still use it to extract data from Tensorboard log files as follows:

\n\n
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator\nevent_acc = EventAccumulator('/path/to/summary/folder')\nevent_acc.Reload()\n# Show all tags in the log file\nprint(event_acc.Tags())\n\n# E. g. get wall clock, number of steps and value for a scalar 'Accuracy'\nw_times, step_nums, vals = zip(*event_acc.Scalars('Accuracy'))\n
\n", + "system": "" + }, + { + "instruction": "What is the TensorFlow checkpoint meta file?", + "input": "", + "output": "

This file contains a serialized MetaGraphDef protocol buffer. The MetaGraphDef is designed as a serialization format that includes all of the information required to restore a training or inference process (including the GraphDef that describes the dataflow, and additional annotations that describe the variables, input pipelines, and other relevant information). For example, the MetaGraphDef is used by TensorFlow Serving to start an inference service based on your trained model. We are investigating other tools that could use the MetaGraphDef for training.

\n\n

Assuming that you still have the Python code for your model, you do not need the MetaGraphDef to restore the model, because you can reconstruct all of the information in the MetaGraphDef by re-executing the Python code that builds the model. To restore from a checkpoint, you only need the checkpoint files that contain the trained weights, which are written periodically to the same directory.

\n", + "system": "" + }, + { + "instruction": "How can I run a loop with a tensor as its range? (in tensorflow)", + "input": "", + "output": "

To do this you will need to use the tensorflow while loop (tf.while_loop) as follows:

\n\n
i = tf.constant(0)\nwhile_condition = lambda i: tf.less(i, input_placeholder[1, 1])\ndef body(i):\n    # do something here which you want to do in your loop\n    # increment i\n    return [tf.add(i, 1)]\n\n# do the loop:\nr = tf.while_loop(while_condition, body, [i])\n
\n", + "system": "" + }, + { + "instruction": "Why is Tensorflow 100x slower than convnetjs in this simple NN example?", + "input": "", + "output": "

There could be many reasons why:

\n\n\n\n

The real benefits of Tensorflow will come when the distributed version will be public. Then the ability to run big networks on many nodes will be more important than the speed of a single node.

\n", + "system": "" + }, + { + "instruction": "Is it possible to modify an existing TensorFlow computation graph?", + "input": "", + "output": "

The TensorFlow tf.Graph class is an append-only data structure, which means that you can add nodes to the graph after executing part of the graph, but you cannot remove or modify existing nodes. Since TensorFlow executes only the necessary subgraph when you call Session.run(), there is no execution-time cost to having redundant nodes in the graph (although they will continue to consume memory).

\n\n

To remove all nodes in the graph, you can create a session with a new graph:

\n\n
with tf.Graph().as_default():  # Create a new graph, and make it the default.\n  with tf.Session() as sess:  # `sess` will use the new, currently empty, graph.\n    # Build graph and execute nodes in here.\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow Training", + "input": "", + "output": "

In the first training version, you are training the entire batch of training data at once, which means that the first and the 3000th element of spec_train will be processed using the same model parameters in a single step. This is known as (Batch) Gradient Descent.

\n\n

In the second training version, you are training a single example from the training data at once, which means that the 3000th element of spec_train will be processed using model parameters that have been updated 2999 times since the first element was most recently processed. This is known as Stochastic Gradient Descent (or it would be if the element was selected at random).

\n\n

In general, TensorFlow is used with datasets that are too large to process in one batch, so mini-batch SGD (where a subset of the examples are processed in one step) is favored. Processing a single element at a time is theoretically desirable, but is inherently sequential and has high fixed costs because the matrix multiplications and other operations are not as computationally dense. Therefore, processing a small batch (e.g. 32 or 128) of examples at once is the usual approach, with multiple replicas training on different batches in parallel.

\n\n
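The difference in how often the parameters are updated can be sketched with a toy helper (pure Python; the sizes are illustrative):

```python
def updates_per_epoch(num_examples, batch_size):
    # One parameter update per processed batch (remainder dropped for simplicity).
    return num_examples // batch_size

n = 3000
print(updates_per_epoch(n, n))    # batch gradient descent: 1 update per epoch
print(updates_per_epoch(n, 1))    # stochastic gradient descent: 3000 updates
print(updates_per_epoch(n, 128))  # mini-batch SGD: 23 updates
```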

See this Stats StackExchange question for a more theoretical discussion of when you should use one approach versus the other.

\n", + "system": "" + }, + { + "instruction": "Tensorflow indexing with boolean tensor", + "input": "", + "output": "

Try:

\n\n
ones = tf.ones_like(x) # create a tensor all ones\nmask = tf.greater(x, ones) # boolean tensor, mask[i] = True iff x[i] > 1\nslice_y_greater_than_one = tf.boolean_mask(y, mask)\n
\n\n

See tf.boolean_mask

\n\n

EDIT: another (better?) way to do it:

\n\n
import tensorflow as tf\n\nx = tf.constant([1, 2, 0, 4])\ny = tf.Variable([1, 2, 0, 4])\nmask = x > 1\nslice_y_greater_than_one = tf.boolean_mask(y, mask)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    print (sess.run(slice_y_greater_than_one)) # [2 4]\n
\n", + "system": "" + }, + { + "instruction": "How can I copy a variable in tensorflow", + "input": "", + "output": "

You asked how to copy a variable in the title, but how to copy a tensor in the question. Let's look at the different possible answers.

\n\n

(1) You want to create a tensor that has the same value that is currently stored in a variable that we'll call var.

\n\n
tensor = tf.identity(var)\n
\n\n

But remember, 'tensor' is a graph node that will have that value when evaluated, and any time you evaluate it, it will grab the current value of var. You can play around with control flow ops such as with_dependencies() to see the ordering of updates to the variable and the timing of the identity.

\n\n

(2) You want to create another variable and set its value to the value currently stored in a variable:

\n\n
import tensorflow as tf\nvar = tf.Variable(0.9)\nvar2 = tf.Variable(0.0)\ncopy_first_variable = var2.assign(var)\ninit = tf.initialize_all_variables()\nsess = tf.Session()\n\nsess.run(init)\n\nprint sess.run(var2)\nsess.run(copy_first_variable)\nprint sess.run(var2)\n
\n\n

(3) You want to define a variable and set its starting value to the same thing you already initialized a variable to (this is what nivwu.. above answered):

\n\n
var2 = tf.Variable(var.initialized_value())\n
\n\n

var2 will get initialized when you call tf.initialize_all_variables. You can't use this to copy var after you've already initialized the graph and started running things.

\n", + "system": "" + }, + { + "instruction": "TensorFlow on 32-bit Linux?", + "input": "", + "output": "

We have only tested the TensorFlow distribution on 64-bit Linux and Mac OS X, and distribute binary packages for those platforms only. Try following the source installation instructions to build a version for your platform.

\n\n

EDIT: One user has published instructions for running TensorFlow on a 32-bit ARM processor, which is promising for other 32-bit architectures. These instructions may have useful pointers for getting TensorFlow and Bazel to work in a 32-bit environment.

\n", + "system": "" + }, + { + "instruction": "Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configuration and file/data utilities can be used", + "input": "", + "output": "

You need one of them: PyTorch or TensorFlow.

\n

You can check whether tensorflow is installed, or you can reinstall it:

\n
    \n
  1. pip uninstall tensorflow
  2. pip install tensorflow==2.2.0 (you can install plain tensorflow; it worked the same as tensorflow-gpu)
  3. pip uninstall transformers
  4. pip install transformers==3.3.1
\n

If this doesn't solve it, try to upgrade your python to 3.7.8

\n", + "system": "" + }, + { + "instruction": "How to run tensorflow with gpu support in docker-compose?", + "input": "", + "output": "

I agree that installing all tensorflow-gpu dependencies is rather painful. Fortunately, it's rather easy with Docker, as you only need the NVIDIA Driver and NVIDIA Container Toolkit (a sort of a plugin). The rest (CUDA, cuDNN) is included in the Tensorflow images, so you don't need them on the Docker host.

\n

The driver can be deployed as a container too, but I do not recommend that for a workstation. It is meant to be used on servers where there is no GUI (X-server, etc). The subject of containerized driver is covered at the end of this post, for now let's see how to start tensorflow-gpu with docker-compose. The process is the same regardless of whether you have the driver in container or not.

\n

How to launch Tensorflow-GPU with docker-compose

\n

Prerequisites:

\n\n

To enable GPU support for a container you need to create the container with NVIDIA Container Toolkit. There are two ways you can do that:

\n
    \n
  1. You can configure Docker to always use nvidia container runtime. It is fine to do so as it works just as the default runtime unless some NVIDIA-specific environment variables are present (more on that later). This is done by placing "default-runtime": "nvidia" into Docker's daemon.json:
  2. \n
\n

/etc/docker/daemon.json:

\n
{\n  "runtimes": {\n      "nvidia": {\n          "path": "/usr/bin/nvidia-container-runtime",\n          "runtimeArgs": []\n      }\n  },\n  "default-runtime": "nvidia"\n}\n
\n
    \n
  2. You can select the runtime during container creation. With docker-compose it is only possible with format version 2.3.
  2. \n
\n

Here is a sample docker-compose.yml to launch Tensorflow with GPU:

\n
version: "2.3"  # the only version where 'runtime' option is supported\n\nservices:\n  test:\n    image: tensorflow/tensorflow:2.3.0-gpu\n    # Make Docker create the container with NVIDIA Container Toolkit\n    # You don't need it if you set 'nvidia' as the default runtime in\n    # daemon.json.\n    runtime: nvidia\n    # the lines below are here just to test that TF can see GPUs\n    entrypoint:\n      - /usr/local/bin/python\n      - -c\n    command:\n      - "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"\n
\n

By running this with docker-compose up you should see a line with the GPU specs in it. It appears at the end and looks like this:

\n
\n

test_1 | 2021-01-23 11:02:46.500189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/device:GPU:0 with 1624 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)

\n
\n

And that is all you need to launch an official Tensorflow image with GPU.

\n

NVIDIA Environment Variables and custom images

\n

As I mentioned, NVIDIA Container Toolkit works as the default runtime unless some variables are present. These are listed and explained here. You only need to care about them if you build a custom image and want to enable GPU support in it. Official Tensorflow images with GPU have them inherited from the CUDA images they use as a base, so you only need to start the image with the right runtime as in the example above.

\n
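For example, in a custom image you would typically set something like the following (a sketch; adjust the values to your needs):

```dockerfile
# Expose all GPUs to the container and enable the compute (CUDA) and
# utility (nvidia-smi) driver capabilities.
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
```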

If you are interested in customising a Tensorflow image, I wrote another post on that.

\n

Host Configuration for NVIDIA driver in container

\n

As mentioned in the beginning, this is not something you want on a workstation. The process requires you to start the driver container when no other display driver is loaded (via SSH, for example). Furthermore, at the moment of writing only Ubuntu 16.04, Ubuntu 18.04 and CentOS 7 were supported.

\n

There is an official guide and below are extractions from it for Ubuntu 18.04.

\n
    \n
  1. Edit 'root' option in NVIDIA Container Toolkit settings:
  2. \n
\n
sudo sed -i 's/^#root/root/' /etc/nvidia-container-runtime/config.toml\n
\n
    \n
  2. Disable the Nouveau driver modules:
  2. \n
\n
sudo tee /etc/modules-load.d/ipmi.conf <<< "ipmi_msghandler" \\\n  && sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<< "blacklist nouveau" \\\n  && sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf <<< "options nouveau modeset=0"\n
\n

If you are using an AWS kernel, ensure that the i2c_core kernel module is enabled:

\n
sudo tee /etc/modules-load.d/ipmi.conf <<< "i2c_core"\n
\n
    \n
  3. Update the initramfs:
  2. \n
\n
sudo update-initramfs -u\n
\n

Now it's time to reboot for the changes to take effect. After the reboot, check that no nouveau or nvidia modules are loaded. The commands below should return nothing:

\n
lsmod | grep nouveau\nlsmod | grep nvidia\n
\n

Starting driver in container

\n

The guide offers a command to run the driver, I prefer docker-compose. Save the following as driver.yml:

\n
version: "3.0"\nservices:\n  driver:\n    image: nvidia/driver:450.80.02-ubuntu18.04\n    privileged: true\n    restart: unless-stopped\n    volumes:\n    - /run/nvidia:/run/nvidia:shared\n    - /var/log:/var/log\n    pid: "host"\n    container_name: nvidia-driver\n
\n

Use docker-compose -f driver.yml up -d to start the driver container. It will take a couple of minutes to compile modules for your kernel. You may use docker logs nvidia-driver -f to follow the process; wait for the 'Done, now waiting for signal' line to appear. Otherwise use lsmod | grep nvidia to see if the driver modules are loaded. When it's ready you should see something like this:

\n
nvidia_modeset       1183744  0\nnvidia_uvm            970752  0\nnvidia              19722240  17 nvidia_uvm,nvidia_modeset\n
\n", + "system": "" + }, + { + "instruction": "How to improve data input pipeline performance?", + "input": "", + "output": "

Mentioning the Solution and the Important observations of @AlexisBRENON in the Answer Section, for the benefit of the Community.

\n\n

Below mentioned are the Important Observations:

\n\n
    \n
  1. According to this GitHub issue, the TFRecordDataset interleaving is a legacy one, so the interleave function is better.
  2. batch before map is a good habit (vectorize your function) and reduces the number of times the mapped function is called.
  3. No need for repeat anymore. Since TF2.0, the Keras model API supports the dataset API and can use cache (see the SO post).
  4. Switch from a VarLenFeature to a FixedLenSequenceFeature, removing a useless call to tf.sparse.to_dense.
\n\n

Code for the Pipeline, with improved performance, in line with above observations is mentioned below:

\n\n
def build_dataset(file_pattern):\n    tf.data.Dataset.list_files(\n        file_pattern\n    ).interleave(\n        TFRecordDataset,\n        cycle_length=tf.data.experimental.AUTOTUNE,\n        num_parallel_calls=tf.data.experimental.AUTOTUNE\n    ).shuffle(\n        2048\n    ).batch(\n        batch_size=64,\n        drop_remainder=True,\n    ).map(\n        map_func=parse_examples_batch,\n        num_parallel_calls=tf.data.experimental.AUTOTUNE\n    ).cache(\n    ).prefetch(\n        tf.data.experimental.AUTOTUNE\n    )\n\ndef parse_examples_batch(examples):\n    preprocessed_sample_columns = {\n        \"features\": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True),\n        \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"),\n        \"label\": tf.io.FixedLenFeature((), tf.float32, -1)\n    }\n    samples = tf.io.parse_example(examples, preprocessed_sample_columns)\n    bits_to_float = tf.io.decode_raw(samples[\"booleanFeatures\"], tf.uint8)\n    return (\n        (samples['features'], bits_to_float),\n        tf.expand_dims(samples[\"label\"], 1)\n    )\n
\n", + "system": "" + }, + { + "instruction": "How to restore Tensorflow model from .pb file in python?", + "input": "", + "output": "

The following code will read the model and print out the names of the nodes in the graph.

\n\n
import tensorflow as tf\nfrom tensorflow.python.platform import gfile\nGRAPH_PB_PATH = './frozen_model.pb'\nwith tf.Session() as sess:\n   print(\"load graph\")\n   with gfile.FastGFile(GRAPH_PB_PATH,'rb') as f:\n       graph_def = tf.GraphDef()\n       graph_def.ParseFromString(f.read())\n   sess.graph.as_default()\n   tf.import_graph_def(graph_def, name='')\n   graph_nodes=[n for n in graph_def.node]\n   names = []\n   for t in graph_nodes:\n      names.append(t.name)\n   print(names)\n
\n\n

You are not freezing the graph properly; that is why you are getting different results. Basically, the weights are not getting stored in your model. You can use freeze_graph.py (link) to get a correctly stored graph.

\n", + "system": "" + }, + { + "instruction": "Can not use both bias and batch normalization in convolution layers", + "input": "", + "output": "

Batchnormalization already includes the addition of the bias term. Recap that BatchNorm is already:

\n\n
gamma * normalized(x) + bias\n
\n\n

So there is no need (and it makes no sense) to add another bias term in the convolution layer. Simply speaking, BatchNorm shifts the activations by their mean values. Hence, any constant will be canceled out.

\n\n
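A quick numeric sketch of that cancellation, using only the mean-subtraction part of BatchNorm for brevity (plain Python, made-up activations):

```python
def mean_shift(xs):
    # The mean-subtraction part of BatchNorm.
    mu = sum(xs) / len(xs)
    return [x - mu for x in xs]

acts = [1.0, 2.0, 3.0]
bias = 5.0
with_bias = [a + bias for a in acts]

print(mean_shift(acts))       # [-1.0, 0.0, 1.0]
print(mean_shift(with_bias))  # [-1.0, 0.0, 1.0], the bias is canceled out
```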

If you still want to do this, you need to remove the normalizer_fn argument and add BatchNorm as a single layer. Like I said, this makes no sense.

\n\n

But the solution would be something like

\n\n
net = slim.conv2d(net, normalizer_fn=None, ...)\nnet = tf.nn.batch_normalization(net)\n
\n\n

Note, the BatchNorm relies on non-gradient updates. So you either need to use an optimizer which is compatible with the UPDATE_OPS collection. Or you need to manually add tf.control_dependencies.

\n\n

Long story short: even if you implement ConvWithBias+BatchNorm, it will behave like ConvWithoutBias+BatchNorm, just as multiple fully-connected layers without an activation function behave like a single one.

\n", + "system": "" + }, + { + "instruction": "How can I use tensorflow serving for multiple models", + "input": "", + "output": "

Built a docker image from official tensorflow serving docker file

\n\n

Then, inside the docker image:

\n\n
/usr/local/bin/tensorflow_model_server --port=9000 --model_config_file=/serving/models.conf\n
\n\n

Here, /serving/models.conf is a file similar to yours.
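For reference, such a model config file typically looks like the sketch below (the model names and paths are placeholders; adapt them to your setup):

```
model_config_list: {
  config: {
    name: "model_a",
    base_path: "/serving/models/model_a",
    model_platform: "tensorflow"
  },
  config: {
    name: "model_b",
    base_path: "/serving/models/model_b",
    model_platform: "tensorflow"
  }
}
```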

\n", + "system": "" + }, + { + "instruction": "TensorFlow: Is there a way to measure FLOPS for a model?", + "input": "", + "output": "

I would like to build on Tobias Schnek's answer as well as answering the original question: how to get FLOP from a pb file.

\n\n

Running the first snippet of code from Tobias answer with TensorFlow 1.6.0

\n\n
g = tf.Graph()\nrun_meta = tf.RunMetadata()\nwith g.as_default():\n    A = tf.Variable(tf.random_normal([25,16]))\n    B = tf.Variable(tf.random_normal([16,9]))\n    C = tf.matmul(A,B)\n\n    opts = tf.profiler.ProfileOptionBuilder.float_operation()    \n    flops = tf.profiler.profile(g, run_meta=run_meta, cmd='op', options=opts)\n    if flops is not None:\n        print('Flops should be ~',2*25*16*9)\n        print('TF stats gives',flops.total_float_ops)\n
\n\n

We get the following output:

\n\n
Flops should be ~ 7200\nTF stats gives 8288\n
\n\n

So, why do we get 8288 instead of the expected result 7200=2*25*16*9[a]? The answer is in the way the tensors A and B are initialised. Initialising with a Gaussian distribution costs some FLOP. Changing the definition of A and B by

\n\n
    A = tf.Variable(initial_value=tf.zeros([25, 16]))\n    B = tf.Variable(initial_value=tf.zeros([16, 9]))\n
\n\n

gives the expected output 7200.

\n\n

Usually, a network's variables are initialised with Gaussian distributions among other schemes. Most of the time, we are not interested in the initialisation FLOP, as they are done once during initialisation and happen neither during training nor inference. So, how could one get the exact number of FLOP disregarding the initialisation FLOP?

\n\n

Freeze the graph with a pb. Calculating the FLOP from a pb file was, actually, the OP's use case.

\n\n

The following snippet illustrates this:

\n\n
import tensorflow as tf\nfrom tensorflow.python.framework import graph_util\n\ndef load_pb(pb):\n    with tf.gfile.GFile(pb, \"rb\") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(graph_def, name='')\n        return graph\n\n# ***** (1) Create Graph *****\ng = tf.Graph()\nsess = tf.Session(graph=g)\nwith g.as_default():\n    A = tf.Variable(initial_value=tf.random_normal([25, 16]))\n    B = tf.Variable(initial_value=tf.random_normal([16, 9]))\n    C = tf.matmul(A, B, name='output')\n    sess.run(tf.global_variables_initializer())\n    flops = tf.profiler.profile(g, options = tf.profiler.ProfileOptionBuilder.float_operation())\n    print('FLOP before freezing', flops.total_float_ops)\n# *****************************        \n\n# ***** (2) freeze graph *****\noutput_graph_def = graph_util.convert_variables_to_constants(sess, g.as_graph_def(), ['output'])\n\nwith tf.gfile.GFile('graph.pb', \"wb\") as f:\n    f.write(output_graph_def.SerializeToString())\n# *****************************\n\n\n# ***** (3) Load frozen graph *****\ng2 = load_pb('./graph.pb')\nwith g2.as_default():\n    flops = tf.profiler.profile(g2, options = tf.profiler.ProfileOptionBuilder.float_operation())\n    print('FLOP after freezing', flops.total_float_ops)\n
\n\n

outputs

\n\n
FLOP before freezing 8288\nFLOP after freezing 7200\n
\n\n
\n\n

[a] Usually the FLOP of a matrix multiplication are mq(2p -1) for the product AB where A[m, p] and B[p, q] but TensorFlow returns 2mpq for some reason. An issue has been opened to understand why.
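Plugging the example's shapes into both formulas shows the gap (a small sketch; A is [25, 16] and B is [16, 9] as above):

```python
m, p, q = 25, 16, 9  # A is [m, p], B is [p, q]

# Textbook count: p multiplies and (p - 1) adds per output element.
textbook_flops = m * q * (2 * p - 1)
# What the TF profiler reports.
tf_flops = 2 * m * p * q

print(textbook_flops)  # 6975
print(tf_flops)        # 7200
```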

\n", + "system": "" + }, + { + "instruction": "When global_variables_initializer() is actually required", + "input": "", + "output": "

tf.global_variables_initializer is a shortcut to initialize all global variables. It is not required; you can use other ways to initialize your variables, and in the case of simple scripts you sometimes do not need to initialize them at all.

\n\n

Everything except variables (constants and placeholders) requires no initialization. But every used variable (even if its value is constant) must be initialized. This will give you an error, although z is just a 0-d tensor with only one number.

\n\n
import tensorflow as tf\nz = tf.Variable(4)\nwith tf.Session() as session:\n        print(session.run(z)) \n
\n\n

I highlighted the word used, because if you just have variables which are not run (or none of the runs depend on them) you do not need to initialize them.

\n\n
\n\n

For example, this code will execute without any problems even though it has 2 variables and one operation which depends on them, because the run does not require them.

\n\n
import tensorflow as tf\nx = tf.constant(35, name='x')\ny = tf.Variable(x + 5, name='y')\nz = tf.Variable(4)\na = y + z\nwith tf.Session() as session:\n        print(\"x = \", session.run(x)) \n
\n", + "system": "" + }, + { + "instruction": "how to install tensorflow on anaconda python 3.6", + "input": "", + "output": "

UPDATE: TensorFlow supports Python 3.6 on Windows since version 1.2.0 (see the release notes)

\n\n
\n\n

TensorFlow only supports Python 3.5 64-bit as of now. Support for Python 3.6 is a work in progress and you can track it here as well as chime in the discussion.

\n\n

The only alternative to use Python 3.6 with TensorFlow on Windows currently is building TF from source.

\n\n

If you don't want to uninstall your Anaconda distribution for Python 3.6 and install a previous release you can create a conda environment for Python=3.5 as in:\n\nconda create --name tensorflow python=3.5\nactivate tensorflow\npip install tensorflow-gpu\n

\n", + "system": "" + }, + { + "instruction": "Tensorflow - ValueError: Parent directory of trained_variables.ckpt doesn't exist, can't save", + "input": "", + "output": "
saver.save(sess, \"./trained_variables.ckpt\")\n
\n", + "system": "" + }, + { + "instruction": "Python: rewrite a looping numpy math function to run on GPU", + "input": "", + "output": "

Introduction and solution code

\n\n

Well, you asked for it! So, listed in this post is an implementation with PyCUDA, which uses lightweight wrappers extending most of CUDA's capabilities within the Python environment. We will use its SourceModule functionality that lets us write and compile CUDA kernels while staying in the Python environment.

\n\n

Getting to the problem at hand, among the computations involved we have sliding maximum and minimum, a few differences, divisions and comparisons. For the maximum and minimum parts, which involve block-max finding (for each sliding window), we will use the reduction technique as discussed in some detail here. This would be done at block level. For the upper-level iterations across sliding windows, we would use grid-level indexing into CUDA resources. For more info on this block and grid format, please refer to page-18. PyCUDA also supports builtins for computing reductions like max and min, but we lose control; specifically, we intend to use specialized memory like shared and constant memory for leveraging the GPU near its optimum level.

\n\n

Listing out the PyCUDA-NumPy solution code -

\n\n

1] PyCUDA part -

\n\n
import pycuda.autoinit\nimport pycuda.driver as drv\nimport numpy as np\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n#define TBP 1024 // THREADS_PER_BLOCK\n\n__device__ void get_Bmax_Cmin(float* out, float *d1, float *d2, int L, int offset)\n{\n    int tid = threadIdx.x;\n    int inv = TBP;\n    __shared__ float dS[TBP][2];\n\n    dS[tid][0] = d1[tid+offset];  \n    dS[tid][1] = d2[tid+offset];         \n    __syncthreads();\n\n    if(tid<L-TBP)  \n    {\n        dS[tid][0] = fmaxf(dS[tid][0] , d1[tid+inv+offset]);\n        dS[tid][1] = fminf(dS[tid][1] , d2[tid+inv+offset]);\n    }\n    __syncthreads();\n    inv = inv/2;\n\n    while(inv!=0)   \n    {\n        if(tid<inv)\n        {\n            dS[tid][0] = fmaxf(dS[tid][0] , dS[tid+inv][0]);\n            dS[tid][1] = fminf(dS[tid][1] , dS[tid+inv][1]);\n        }\n        __syncthreads();\n        inv = inv/2;\n    }\n    __syncthreads();\n\n    if(tid==0)\n    {\n        out[0] = dS[0][0];\n        out[1] = dS[0][1];\n    }   \n    __syncthreads();\n}\n\n__global__ void main1(float* out, float *d0, float *d1, float *d2, float *d3, float *lowL, float *highL, int *BLOCKLEN)\n{\n    int L = BLOCKLEN[0];\n    int tid = threadIdx.x;\n    int iterID = blockIdx.x;\n    float Bmax_Cmin[2];\n    int inv;\n    float Cmin, dif;   \n    __shared__ float dS[TBP*2];   \n\n    get_Bmax_Cmin(Bmax_Cmin, d1, d2, L, iterID);  \n    Cmin = Bmax_Cmin[1];\n    dif = (Bmax_Cmin[0] - Cmin);\n\n    inv = TBP;\n\n    dS[tid] = (d0[tid+iterID] + d1[tid+iterID] + d2[tid+iterID] + d3[tid+iterID] - 4.0*Cmin) / (4.0*dif);\n    __syncthreads();\n\n    if(tid<L-TBP)  \n        dS[tid+inv] = (d0[tid+inv+iterID] + d1[tid+inv+iterID] + d2[tid+inv+iterID] + d3[tid+inv+iterID] - 4.0*Cmin) / (4.0*dif);                   \n\n     dS[tid] = ((dS[tid] >= lowL[tid]) & (dS[tid] <= highL[tid])) ? 
1 : 0;\n     __syncthreads();\n\n     if(tid<L-TBP)\n         dS[tid] += ((dS[tid+inv] >= lowL[tid+inv]) & (dS[tid+inv] <= highL[tid+inv])) ? 1 : 0;\n     __syncthreads();\n\n    inv = inv/2;\n    while(inv!=0)   \n    {\n        if(tid<inv)\n            dS[tid] += dS[tid+inv];\n        __syncthreads();\n        inv = inv/2;\n    }\n\n    if(tid==0)\n        out[iterID] = dS[0];\n    __syncthreads();\n\n}\n\"\"\")\n
\n\n

    Please note that THREADS_PER_BLOCK (TBP) is to be set based on the batchSize. The rule of thumb here is to set TBP to the largest power of 2 that is smaller than batchSize. Thus, for batchSize = 2000, we need TBP = 1024.
    

\n\n
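    The rule of thumb above can be sketched in plain Python (a hypothetical helper, not part of the answer's code): pick the largest power of 2 strictly below the batch size.

    ```python
    def pick_tbp(batch_size):
        # largest power of 2 strictly less than batch_size
        return 1 << ((batch_size - 1).bit_length() - 1)

    print(pick_tbp(2000))  # → 1024
    ```

    For batchSize = 2000 this yields the 1024 used above.
    
    
    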

2] NumPy part -

\n\n
def gpu_app_v1(A, B, C, D, batchSize, minimumLimit):\n    func1 = mod.get_function(\"main1\")\n    outlen = len(A)-batchSize+1\n\n    # Set block and grid sizes\n    BSZ = (1024,1,1)\n    GSZ = (outlen,1)\n\n    dest = np.zeros(outlen).astype(np.float32)\n    N = np.int32(batchSize)\n    func1(drv.Out(dest), drv.In(A), drv.In(B), drv.In(C), drv.In(D), \\\n                     drv.In(data2b), drv.In(data2a),\\\n                     drv.In(N), block=BSZ, grid=GSZ)\n    idx = np.flatnonzero(dest >= minimumLimit)\n    return idx, dest[idx]\n
\n\n

Benchmarking

\n\n

    I have tested on a GTX 960M. Please note that PyCUDA expects arrays to be in contiguous order, so we need to slice the columns and make copies. I expect/assume that the data could be read from the files such that it is laid out along rows rather than as columns, so I am keeping those copies out of the benchmarking function for now.
    

\n\n

Original approach -

\n\n
def org_app(data1, batchSize, minimumLimit):\n    resultArray = []\n    for rowNr in  range(data1.shape[0]-batchSize+1):\n        tmp_df = data1[rowNr:rowNr + batchSize] #rolling window\n        result = doTheMath(tmp_df, data2a, data2b)\n        if (result >= minimumLimit):\n            resultArray.append([rowNr , result]) \n    return resultArray\n
\n\n

Timings and verification -

\n\n
In [2]: #Declare variables\n   ...: batchSize = 2000\n   ...: sampleSize = 50000\n   ...: resultArray = []\n   ...: minimumLimit = 490 #use 400 on the real sample data\n   ...: \n   ...: #Create Random Sample Data\n   ...: data1 = np.random.uniform(1, 100000, (sampleSize + batchSize, 4)).astype(np.float32)\n   ...: data2b = np.random.uniform(0, 1, (batchSize)).astype(np.float32)\n   ...: data2a = data2b + np.random.uniform(0, 1, (batchSize)).astype(np.float32)\n   ...: \n   ...: # Make column copies\n   ...: A = data1[:,0].copy()\n   ...: B = data1[:,1].copy()\n   ...: C = data1[:,2].copy()\n   ...: D = data1[:,3].copy()\n   ...: \n   ...: gpu_out1,gpu_out2 = gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n   ...: cpu_out1,cpu_out2 = np.array(org_app(data1, batchSize, minimumLimit)).T\n   ...: print(np.allclose(gpu_out1, cpu_out1))\n   ...: print(np.allclose(gpu_out2, cpu_out2))\n   ...: \nTrue\nFalse\n
\n\n

    So, there are some differences between the CPU and GPU counts. Let's investigate them -
    

\n\n
In [7]: idx = np.flatnonzero(~np.isclose(gpu_out2, cpu_out2))\n\nIn [8]: idx\nOut[8]: array([12776, 15208, 17620, 18326])\n\nIn [9]: gpu_out2[idx] - cpu_out2[idx]\nOut[9]: array([-1., -1.,  1.,  1.])\n
\n\n

    There are four instances of non-matching counts, each off by at most 1. Upon research, I came across some information on this. Basically, since we are using math intrinsics for the max and min computations, I think those cause the last binary bit of the floating point representation to differ from the CPU counterpart. This is termed ULP (unit in the last place) error and has been discussed in detail here and here.
    

\n\n
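    As a rough illustration of what a 1-ULP difference means for float32 (a standalone sketch using only the standard library; the helper name is made up):

    ```python
    import struct

    def ulp32(x):
        # distance from x to the next representable float32 above it
        bits = struct.unpack('<I', struct.pack('<f', x))[0]
        nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
        return nxt - x

    print(ulp32(1.0) == 2.0 ** -23)  # → True
    ```

    So two results that differ only in the last mantissa bit can flip a comparison near a threshold, producing the off-by-one counts seen above.
    
    
    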

    Finally, putting the issue aside, let's get to the most important bit, the performance -
    

\n\n
In [10]: %timeit org_app(data1, batchSize, minimumLimit)\n1 loops, best of 3: 2.18 s per loop\n\nIn [11]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n10 loops, best of 3: 82.5 ms per loop\n\nIn [12]: 2180.0/82.5\nOut[12]: 26.424242424242426\n
\n\n

Let's try with bigger datasets. With sampleSize = 500000, we get -

\n\n
In [14]: %timeit org_app(data1, batchSize, minimumLimit)\n1 loops, best of 3: 23.2 s per loop\n\nIn [15]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit)\n1 loops, best of 3: 821 ms per loop\n\nIn [16]: 23200.0/821\nOut[16]: 28.25822168087698\n
\n\n

    So, the speedup stays roughly constant at around 27x.
    

\n\n

Limitations :

\n\n

    1) We are using float32 numbers, as GPUs work best with those. Double precision, especially on non-server GPUs, isn't popular when it comes to performance, and since you are working with such a GPU, I tested with float32.
    

\n\n

Further improvement :

\n\n

1) We could use faster constant memory to feed in data2a and data2b, rather than use global memory.

\n", + "system": "" + }, + { + "instruction": "How to initialise only optimizer variables in Tensorflow?", + "input": "", + "output": "

    Both current answers kind of work by filtering the variable names for the 'Momentum' string. But that is very brittle, for several reasons:
    

\n\n
    \n
  1. It could silently (re-)initialize some other variables you don't actually want to reset! Either simply because of a name-clash, or because you have a more complex graph and optimize different parts separately, for example.
  2. \n
  3. It will only work for one specific optimizer, and how do you know the names to look out for for others?
  4. \n
  5. Bonus: an update to tensorflow might silently break your code.
  6. \n
\n\n

Fortunately, tensorflow's abstract Optimizer class has a mechanism for that, these extra optimizer variables are called \"slots\", and you can get all slot names of an optimizer using the get_slot_names() method:

\n\n
opt = tf.train.MomentumOptimizer(...)\nprint(opt.get_slot_names())\n# prints ['momentum']\n
\n\n

And you can get the variable corresponding to the slot for a specific (trainable) variable v using the get_slot(var, slot_name) method:

\n\n
opt.get_slot(some_var, 'momentum')\n
\n\n

Putting all this together, you can create an op that initializes the optimizer's state as follows:

\n\n
var_list = # list of vars to optimize, e.g. \n           # tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\nopt = tf.train.MomentumOptimizer(0.1, 0.95)\nstep_op = opt.minimize(loss, var_list=var_list)\nreset_opt_op = tf.variables_initializer([opt.get_slot(var, name) for name in opt.get_slot_names() for var in var_list])\n
\n\n

This will really only reset the correct variables, and be robust across optimizers.

\n\n

Except for one unfortunate caveat: AdamOptimizer. That one also keeps a counter for how often it's been called. That means you should really think hard about what you're doing here anyways, but for completeness' sake, you can get its extra states as opt._get_beta_accumulators(). The returned list should be added to the list in the above reset_opt_op line.

\n", + "system": "" + }, + { + "instruction": "In TensorFlow,what's the meaning of ":0" in a Variable's name?", + "input": "", + "output": "

    It has to do with the representation of tensors in the underlying API. A tensor is a value associated with the output of some op. In the case of variables, there's a Variable op with one output. An op can have more than one output, so those tensors are referred to as <op>:0, <op>:1, etc. For instance, if you use tf.nn.top_k, there are two values created by this op, so you may see TopKV2:0 and TopKV2:1.
    

\n\n
a,b=tf.nn.top_k([1], 1)\nprint a.name # => 'TopKV2:0'\nprint b.name # => 'TopKV2:1'\n
\n\n

How to understand the term `tensor` in TensorFlow?

\n", + "system": "" + }, + { + "instruction": "Understanding Tensorflow LSTM Input shape", + "input": "", + "output": "

The documentation of tf.nn.dynamic_rnn states:

\n\n
\n

inputs: The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements.

\n
\n\n

In your case, this means that the input should have a shape of [batch_size, 10, 2]. Instead of training on all 4000 sequences at once, you'd use only batch_size many of them in each training iteration. Something like the following should work (added reshape for clarity):

\n\n
batch_size = 32\n# batch_size sequences of length 10 with 2 values for each timestep\ninput = get_batch(X, batch_size).reshape([batch_size, 10, 2])\n# Create LSTM cell with state size 256. Could also use GRUCell, ...\n# Note: state_is_tuple=False is deprecated;\n# the option might be completely removed in the future\ncell = tf.nn.rnn_cell.LSTMCell(256, state_is_tuple=True)\noutputs, state = tf.nn.dynamic_rnn(cell,\n                                   input,\n                                   sequence_length=[10]*batch_size,\n                                   dtype=tf.float32)\n
\n\n

From the documentation, outputs will be of shape [batch_size, 10, 256], i.e. one 256-output for each timestep. state will be a tuple of shapes [batch_size, 256]. You could predict your final value, one for each sequence, from that:

\n\n
predictions = tf.contrib.layers.fully_connected(state.h,\n                                                num_outputs=1,\n                                                activation_fn=None)\nloss = get_loss(get_batch(Y).reshape([batch_size, 1]), predictions)\n
\n\n

The number 256 in the shapes of outputs and state is determined by cell.output_size resp. cell.state_size. When creating the LSTMCell like above, these are the same. Also see the LSTMCell documentation.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: Cuda compute capability 3.0. The minimum required Cuda capability is 3.5", + "input": "", + "output": "

    I have installed Tensorflow revision 1.8, which recommends CUDA 9.0. I am using a GTX 650M card, which has CUDA compute capability 3.0, and it now works like a charm. The OS is Ubuntu 18.04. Below are the detailed steps:
    

\n\n

Installing dependencies

\n\n

    I have included ffmpeg and some related packages for my opencv 3.4 compilation; if they are not required, do not install them.\nRun the below commands:
    

\n\n
sudo apt-get update \nsudo apt-get dist-upgrade -y\nsudo apt-get autoremove -y\nsudo apt-get upgrade\nsudo add-apt-repository ppa:jonathonf/ffmpeg-3 -y\nsudo apt-get update\nsudo apt-get install build-essential -y\nsudo apt-get install ffmpeg -y\nsudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev -y\nsudo apt-get install python-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev -y\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev -y\nsudo apt-get install libxvidcore-dev libx264-dev -y\nsudo apt-get install unzip qtbase5-dev python-dev python3-dev python-numpy python3-numpy -y\nsudo apt-get install libopencv-dev libgtk-3-dev libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff5-dev >libjasper-dev -y\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev -y\nsudo apt-get install libv4l-dev libtbb-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev -y\nsudo apt-get install libvorbis-dev libxvidcore-dev v4l-utils vtk6 -y\nsudo apt-get install liblapacke-dev libopenblas-dev libgdal-dev checkinstall -y\nsudo apt-get install libgtk-3-dev -y\nsudo apt-get install libatlas-base-dev gfortran -y\nsudo apt-get install qt-sdk -y\nsudo apt-get install python2.7-dev python3.5-dev python-tk -y\nsudo apt-get install cython libgflags-dev -y\nsudo apt-get install tesseract-ocr -y\nsudo apt-get install tesseract-ocr-eng -y \nsudo apt-get install tesseract-ocr-ell -y\nsudo apt-get install gstreamer1.0-python3-plugin-loader -y\nsudo apt-get install libdc1394-22-dev -y\nsudo apt-get install openjdk-8-jdk\nsudo apt-get install pkg-config zip g++-6 gcc-6 zlib1g-dev unzip  git\nsudo wget https://bootstrap.pypa.io/get-pip.py\nsudo python get-pip.py\nsudo pip install -U pip\nsudo pip install -U numpy\nsudo pip install -U pandas\nsudo pip install -U 
wheel\nsudo pip install -U six\n
\n\n

Installing the nvidia driver

\n\n

Run the below commands:

\n\n
sudo add-apt-repository ppa:graphics-drivers/ppa\nsudo apt-get update\nsudo apt-get install nvidia-390 -y\n
\n\n

Reboot and run the below command and it should give you details as described in the image below:\n\"enter

\n\n

gcc-6 and g++-6 checks.

\n\n

    gcc-6 and g++-6 are required for CUDA 9.0; run the below commands:
    

\n\n
cd /usr/bin \nsudo rm -rf gcc gcc-ar gcc-nm gcc-ranlib g++\nsudo ln -s gcc-6 gcc\nsudo ln -s gcc-ar-6 gcc-ar\nsudo ln -s gcc-nm-6 gcc-nm\nsudo ln -s gcc-ranlib-6 gcc-ranlib\nsudo ln -s g++-6 g++\n
\n\n

Installing CUDA 9.0

\n\n

Go to https://developer.nvidia.com/cuda-90-download-archive. Select options: Linux->x86_64->Ubuntu->17.04->deb(local).\nDownload the main file and the two patches.\nRun below commands:

\n\n
sudo dpkg -i cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64.deb\nsudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub\nsudo apt-get update\nsudo apt-get install cuda\n
\n\n

    Navigate to the first patch on your PC and double-click it; it will execute automatically. Do the same for the second patch.
    

\n\n

    Add the below two lines to your ~/.bashrc file and reboot:
    

\n\n
    export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n
    
\n\n

Installing cudnn 7.1.4 for CUDA 9.0

\n\n

    Download the tar file from https://developer.nvidia.com/cudnn and extract it to your Downloads folder.\nThe download requires an NVIDIA developer login; sign-up is free.\nRun the below commands:
    

\n\n
cd ~/Downloads/cudnn-9.0-linux-x64-v7.1/cuda\nsudo cp include/* /usr/local/cuda/include/\nsudo cp lib64/libcudnn.so.7.1.4 lib64/libcudnn_static.a /usr/local/cuda/lib64/\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -s libcudnn.so.7.1.4 libcudnn.so.7\nsudo ln -s libcudnn.so.7 libcudnn.so\n
\n\n

Installing NCCL 2.2.12 for CUDA 9.0

\n\n

    Download the tar file from https://developer.nvidia.com/nccl and extract it to your Downloads folder.\nThe download requires an NVIDIA developer login; sign-up is free.\nRun the below commands:
    

\n\n
    sudo mkdir -p /usr/local/cuda/nccl/lib /usr/local/cuda/nccl/include\ncd ~/Downloads/nccl-repo-ubuntu1604-2.2.12-ga-cuda9.0_1-1_amd64/\nsudo cp *.txt /usr/local/cuda/nccl\nsudo cp include/*.h /usr/include/\nsudo cp lib/libnccl.so.2.2.12 lib/libnccl_static.a /usr/lib/x86_64-linux-gnu/\nsudo ln -s /usr/include/nccl.h /usr/local/cuda/nccl/include/nccl.h\ncd /usr/lib/x86_64-linux-gnu\nsudo ln -s libnccl.so.2.2.12 libnccl.so.2\nsudo ln -s libnccl.so.2 libnccl.so\nfor i in libnccl*; do sudo ln -s /usr/lib/x86_64-linux-gnu/$i /usr/local/cuda/nccl/lib/$i; done\n
    
\n\n

    Install Bazel (the recommended manual installation of Bazel worked; for reference: https://docs.bazel.build/versions/master/install-ubuntu.html#install-with-installer-ubuntu)
    

\n\n

Download \"bazel-0.13.1-installer-darwin-x86_64.sh\" from https://github.com/bazelbuild/bazel/releases\nRun the below commands:

\n\n
    chmod +x bazel-0.13.1-installer-linux-x86_64.sh\n./bazel-0.13.1-installer-linux-x86_64.sh --user\nexport PATH=\"$PATH:$HOME/bin\"\n
    
\n\n

Compiling Tensorflow

\n\n

    We will compile with CUDA, with XLA JIT (oh yeah) and with jemalloc as malloc support, so we enter yes for those.\nRun the below commands and answer the configuration queries as described:
    

\n\n
git clone https://github.com/tensorflow/tensorflow \ngit checkout r1.8\n./configure\nYou have bazel 0.13.0 installed.\nPlease specify the location of python. [Default is /usr/bin/python]:\nPlease input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]\nDo you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y\njemalloc as malloc support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n\nNo Google Cloud Platform support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Hadoop File System support? [Y/n]: n\nNo Hadoop File System support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n\nNo Amazon S3 File System support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n\nNo Apache Kafka Platform support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with XLA JIT support? [y/N]: y\nXLA JIT support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with GDR support? [y/N]: n\nNo GDR support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with VERBS support? [y/N]: n\nNo VERBS support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n\nNo OpenCL SYCL support will be enabled for TensorFlow.\nDo you wish to build TensorFlow with CUDA support? [y/N]: y\nCUDA support will be enabled for TensorFlow.\nPlease specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]:\nPlease specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:\nPlease specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4\nPlease specify the location where cuDNN 7 library is installed. Refer to README.md for more details. 
[Default is /usr/local/cuda]:\nDo you wish to build TensorFlow with TensorRT support? [y/N]: n\nNo TensorRT support will be enabled for TensorFlow.\nPlease specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]: 2.2.12\nPlease specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local/cuda/nccl\nPlease specify a list of comma-separated Cuda compute capabilities you want to build with.\nYou can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.\nPlease note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.0]\nDo you want to use clang as CUDA compiler? [y/N]: n\nnvcc will be used as CUDA compiler.\nPlease specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/x86_64-linux-gnu-gcc-7]: /usr/bin/gcc-6\nDo you wish to build TensorFlow with MPI support? [y/N]: n\nNo MPI support will be enabled for TensorFlow.\nPlease specify optimization flags to use during compilation when bazel option \"--config=opt\" is specified [Default is -march=native]:\nWould you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n\nNot configuring the WORKSPACE for Android builds.\nPreconfigured Bazel build configs. You can use any of the below by adding \"--config=<>\" to your build command. See tools/bazel.rc for more details.\n --config=mkl          # Build with MKL support.\n\n --config=monolithic   # Config for mostly static monolithic build.\n\nConfiguration finished\n
\n\n

    Now, to compile tensorflow, run the below command. This is very RAM-consuming and will take time. You can remove \"--local_resources 2048,.5,1.0\" from the below line if you have a lot of RAM; with it, the build works on 2 GB of RAM.
    

\n\n
bazel build --config=opt --config=cuda --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package\n
\n\n

    Once the compilation is completed, you will see output as per the image below, confirming it was a success\n\"enter
    

\n\n

Build the wheel file, run below:

\n\n
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n
\n\n

Install the generated wheel file using pip

\n\n
sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl\n
\n\n

    To explore the devices, you can now run tensorflow; the image below is a showcase in an ipython terminal
    

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "What is the purpose of the tf.contrib module in Tensorflow?", + "input": "", + "output": "

In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.

\n\n

The code in tf.contrib isn't supported by the Tensorflow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.

\n\n

The source of tf.contrib.layers.sparse_column_with_hash_bucket can be found at

\n\n

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py#L365

\n", + "system": "" + }, + { + "instruction": "In what order should we tune hyperparameters in Neural Networks?", + "input": "", + "output": "

My general order is:

\n\n
    \n
  1. Batch size, as it will largely affect the training time of future experiments.
  2. \n
  3. Architecture of the network:\n\n
  4. \n
  5. Rest (dropout, L2 reg, etc.)
  6. \n
\n\n

Dependencies:

\n\n

I'd assume that the optimal values of

\n\n\n\n

strongly depend on each other. I am not an expert on that field though.

\n\n

As for your hyperparameters:

\n\n\n", + "system": "" + }, + { + "instruction": "TensorFlow: Unpooling", + "input": "", + "output": "

    I don't think there is an official unpooling layer yet, which is frustrating because you have to use image resize (bilinear interpolation or nearest neighbor), which is like an average unpooling operation, and it's really slow. Look at the tf api in the section 'image' and you will find it.
    

\n\n

    Tensorflow has a maxpooling_with_argmax op where you get your maxpooled output as well as the argmax map, which is nice as you could use it in an unpooling layer to preserve the 'lost' spatial information, but it seems there isn't such an unpooling operation that does it. I guess that they are planning to add it ... soon.
    

\n\n

    Edit: A week ago I found someone on Google Groups who seems to have implemented something like this, but I personally haven't tried it yet.\nhttps://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66
    

\n", + "system": "" + }, + { + "instruction": "Avoid tensorflow print on standard error", + "input": "", + "output": "

This was recently fixed, and should be available if you upgrade to TensorFlow 0.12 or later.

\n\n

To disable all logging output from TensorFlow, set the following environment variable before launching Python:

\n\n
$ export TF_CPP_MIN_LOG_LEVEL=3\n$ python ...\n
\n\n

You can also adjust the verbosity by changing the value of TF_CPP_MIN_LOG_LEVEL:
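    The same variable can also be set from inside Python, as long as it happens before the first import tensorflow (a minimal sketch):

    ```python
    import os

    # 0 = all logs, 1 = filter INFO, 2 = also filter WARNING, 3 = also filter ERROR
    os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

    # import tensorflow only after the variable is set
    print(os.environ["TF_CPP_MIN_LOG_LEVEL"])  # → 3
    ```
    
    
    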

\n\n\n", + "system": "" + }, + { + "instruction": "Tensorflow multiple sessions with multiple GPUs", + "input": "", + "output": "

TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following.

\n\n
    \n
  1. Run each session in a different Python process.
  2. \n
  3. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:

    \n\n
    $ CUDA_VISIBLE_DEVICES=0 python my_script.py  # Uses GPU 0.\n$ CUDA_VISIBLE_DEVICES=1 python my_script.py  # Uses GPU 1.\n$ CUDA_VISIBLE_DEVICES=2,3 python my_script.py  # Uses GPUs 2 and 3.\n
    \n\n

    Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. \"/gpu:0\" etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.
    
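    The same masking can be done from inside the script itself, provided it runs before TensorFlow is imported (a minimal sketch; the device index is just an example):

    ```python
    import os

    # expose only GPU 1 to this process; inside TensorFlow it
    # will then appear as "/gpu:0"
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 1
    ```
    
    
    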

  4. \n
\n", + "system": "" + }, + { + "instruction": "How to deal with batches with variable-length sequences in TensorFlow?", + "input": "", + "output": "

You can use the ideas of bucketing and padding which are described in:

\n\n

    Sequence-to-Sequence Models

\n\n

    Also, the rnn function which creates the RNN network accepts the parameter sequence_length.
    

\n\n

    As an example, you can create buckets of sentences of the same size, pad them with the necessary number of zeros (or placeholders which stand for the zero word), and afterwards feed them along with seq_length = len(zero_words).
    

\n\n
seq_length = tf.placeholder(tf.int32)\noutputs, states = rnn.rnn(cell, inputs, initial_state=initial_state, sequence_length=seq_length)\n\nsess = tf.Session()\nfeed = {\n    seq_length: 20,\n    #other feeds\n}\nsess.run(outputs, feed_dict=feed)\n
\n\n
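    The padding step itself can be sketched framework-independently; this hypothetical helper returns the padded batch together with the true lengths to feed as sequence_length:

    ```python
    def pad_bucket(sequences, pad_id=0):
        # pad every sequence in a bucket to the length of the longest one
        lengths = [len(s) for s in sequences]
        max_len = max(lengths)
        padded = [s + [pad_id] * (max_len - len(s)) for s in sequences]
        return padded, lengths

    batch, lens = pad_bucket([[4, 7], [1, 2, 3], [9]])
    print(batch)  # → [[4, 7, 0], [1, 2, 3], [9, 0, 0]]
    print(lens)   # → [2, 3, 1]
    ```
    
    
    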

Take a look at this reddit thread as well:

\n\n

   Tensorflow basic RNN example with 'variable length' sequences

\n", + "system": "" + }, + { + "instruction": "How can I implement a custom RNN (specifically an ESN) in Tensorflow?", + "input": "", + "output": "

To give a quick summary:

\n\n

    Look in the TensorFlow source code under python/ops/rnn_cell.py to see how to subclass RNNCell. It's usually like this:
    

\n\n
class MyRNNCell(RNNCell):\n  def __init__(...):\n\n  @property\n  def output_size(self):\n  ...\n\n  @property\n  def state_size(self):\n  ...\n\n  def __call__(self, input_, state, name=None):\n     ... your per-step iteration here ...\n
\n", + "system": "" + }, + { + "instruction": "Issue feeding a list into feed_dict in TensorFlow", + "input": "", + "output": "

There are two issues that are causing problems here:

\n\n

The first issue is that the Session.run() call only accepts a small number of types as the keys of the feed_dict. In particular, lists of tensors are not supported as keys, so you have to put each tensor as a separate key.* One convenient way to do this is using a dictionary comprehension:

\n\n
inputs = [tf.placeholder(...), ...]\ndata = [np.array(...), ...]\nsess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})\n
\n\n

The second issue is that the 10 * [tf.placeholder(...)] syntax in Python creates a list with ten elements, where each element is the same tensor object (i.e. has the same name property, the same id property, and is reference-identical if you compare two elements from the list using inputs[i] is inputs[j]). This explains why, when you tried to create a dictionary using the list elements as keys, you ended up with a dictionary with a single element - because all of the list elements were identical.

\n\n
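    The aliasing behaviour of 10 * [...] is easy to demonstrate with plain Python objects (a standalone sketch, independent of TensorFlow):

    ```python
    # the multiplication repeats the *same* object reference; it does not copy
    items = 10 * [object()]
    print(all(x is items[0] for x in items))  # → True

    # used as dict keys, the ten identical references collapse to one entry
    d = {x: i for i, x in enumerate(items)}
    print(len(d))  # → 1

    # a list comprehension creates ten distinct objects instead
    distinct = [object() for _ in range(10)]
    print(len({x: i for i, x in enumerate(distinct)}))  # → 10
    ```
    
    
    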

To create 10 different placeholder tensors, as you intended, you should instead do the following:

\n\n
inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size))\n          for _ in xrange(10)]\n
\n\n

If you print the elements of this list, you'll see that each element is a tensor with a different name.

\n\n
\n\n

EDIT: * You can now pass tuples as the keys of a feed_dict, because these may be used as dictionary keys.

\n", + "system": "" + }, + { + "instruction": "How to get the accuracy per epoch or step for the huggingface.transformers Trainer?", + "input": "", + "output": "

    You can load the accuracy metric and make it work with your compute_metrics function. As an example, it would look like this:
    

\n
from datasets import load_metric\nmetric = load_metric('accuracy')\n\ndef compute_metrics(eval_pred):\n    predictions, labels = eval_pred\n    predictions = np.argmax(predictions, axis=1)\n    return metric.compute(predictions=predictions, references=labels)\n
\n

This example of compute_metrics function is based on the Hugging Face's text classification tutorial. It worked in my tests.

\n", + "system": "" + }, + { + "instruction": "Tf 2.0 : RuntimeError: GradientTape.gradient can only be called once on non-persistent tapes", + "input": "", + "output": "

From the documentation of GradientTape:

\n
\n

By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected.

\n
\n

    A persistent gradient tape can be created with with tf.GradientTape(persistent=True) as tape and can/should be manually deleted with del tape (credits for this: @zwep, @Crispy13).
    

\n", + "system": "" + }, + { + "instruction": "How to install latest cuDNN to conda?", + "input": "", + "output": "
    \n
  1. conda update --force conda
  2. \n
  3. conda update conda
  4. \n
  5. conda install -c anaconda cudnn
  6. \n
  7. conda list cudnn
  8. \n
\n", + "system": "" + }, + { + "instruction": "Does bias in the convolutional layer really make a difference to the test accuracy?", + "input": "", + "output": "
\n

    Biases are tuned alongside weights by learning algorithms such as\ngradient descent. The way biases differ from weights is that they are\nindependent of the output from previous layers. Conceptually, bias is\ncaused by input from a neuron with a fixed activation of 1, and so is\nupdated by subtracting just the product of the delta value and the\nlearning rate.
    

\n
\n
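    The "fixed activation of 1" view of the bias can be sketched in a few lines of plain Python (illustrative numbers only):

    ```python
    # a linear unit with an explicit bias ...
    w = [0.5, -0.2]
    b = 0.3
    x = [2.0, 1.0]
    y1 = sum(wi * xi for wi, xi in zip(w, x)) + b

    # ... equals the same unit with the bias folded in as a weight on a
    # constant-1 input, which is why it can be trained just like a weight
    y2 = sum(wi * xi for wi, xi in zip(w + [b], x + [1.0]))
    print(y1 == y2)  # → True
    ```
    
    
    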

    In a large model, removing the bias inputs makes very little difference because each node can make a bias node out of the average activation of all of its inputs, which by the law of large numbers will be roughly normal. At the first layer, the ability for this to happen depends on your input distribution. On a small network you of course need a bias input, but on a large network, removing it makes almost no difference.
    

\n

    Although in a large network it makes little difference, it still depends on the network architecture. For instance, in LSTMs:
    

\n
\n

    Most applications of LSTMs simply initialize the LSTMs with small\nrandom weights which works well on many problems. But this\ninitialization effectively sets the forget gate to 0.5. This\nintroduces a vanishing gradient with a factor of 0.5 per timestep,\nwhich can cause problems whenever the long term dependencies are\nparticularly severe. This problem is addressed by simply initializing the\nforget gate's bias to a large value such as 1 or 2. By doing so, the\nforget gate will be initialized to a value that is close to 1,\nenabling gradient flow.
    

\n
\n
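    The quoted effect is just the sigmoid applied to the forget gate's pre-activation; a quick numeric sketch (plain Python, illustrative values):

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # near-zero initial weights put the forget gate near sigmoid(0) = 0.5,
    # shrinking the gradient by ~0.5 per timestep
    print(sigmoid(0.0))  # → 0.5

    # a forget-gate bias of 1 or 2 moves the gate close to 1
    print(round(sigmoid(1.0), 3))  # → 0.731
    print(round(sigmoid(2.0), 3))  # → 0.881
    ```
    
    
    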

See also:

\n\n", + "system": "" + }, + { + "instruction": "Reproducible results in Tensorflow with tf.set_random_seed", + "input": "", + "output": "

In tensorflow, a random operation relies on two different seeds: a global seed, set by tf.set_random_seed, and an operation seed, provided as an argument to the operation. You will find more details on how they relate in the docs.

\n\n

    You have a different seed for each random op because each random op maintains its own internal state for pseudo-random number generation. The reason for having each random generator maintain its own state is to be robust to change: if they shared the same state, then adding a new random generator somewhere in your graph would change the values produced by all the other generators, defeating the purpose of using a seed.
    

\n\n

    Now, why do we have this dual system of global and per-op seeds? Well, actually the global seed is not necessary. It is there for convenience: it allows you to set all random op seeds to a different and deterministic (if unknown) value at once, without having to go through all of them exhaustively.
    

\n\n

Now when a global seed is set but not the op seed, according to the docs,

\n\n
\n

The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.

\n
\n\n

    To be more precise, the seed that is provided is the id of the last operation that was created in the current graph. Consequently, globally-seeded random operations are extremely sensitive to changes in the graph, in particular to those created before them.
    

\n\n

For example,

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngenerate = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.96046877\n
\n\n

Now if we create a node before, the result changes:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ntf.zeros(()) # new op added before \ngenerate = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.29252338\n
\n\n

If a node is created after it, however, the op seed is not affected:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngenerate = tf.random_uniform(())\ntf.zeros(()) # new op added after\nwith tf.Session() as sess:\n  print(generate.eval())\n  # 0.96046877\n
\n\n

Obviously, as in your case, if you generate several operations, they will have different seeds:

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngen1 = tf.random_uniform(())\ngen2 = tf.random_uniform(())\nwith tf.Session() as sess:\n  print(gen1.eval())\n  print(gen2.eval())\n  # 0.96046877\n  # 0.85591054\n
\n\n

As a curiosity, and to validate the fact that seeds are simply the last used id in the graph, you could align the seed of gen2 to gen1 with

\n\n
import tensorflow as tf\ntf.set_random_seed(1234)\ngen1 = tf.random_uniform(())\n# 4 operations seems to be created after seed has been picked\nseed = tf.get_default_graph()._last_id - 4\ngen2 = tf.random_uniform((), seed=seed)\nwith tf.Session() as sess:\n  print(gen1.eval())\n  print(gen2.eval())\n  # 0.96046877\n  # 0.96046877\n
\n\n

Obviously though, this should not pass code review.

\n", + "system": "" + }, + { + "instruction": "RuntimeError: main thread is not in main loop with Matplotlib and Flask", + "input": "", + "output": "

I was in the same situation, a Flask with Matplotlib combo.\nWhat worked for me was to specify Agg as the Matplotlib backend.

\n\n
import matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\n# Your code here\n
\n\n

You can refer to Matplotlib documentation (Matplotlib in a web application server) for the details.

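As a hedged sketch of how this fits into a Flask app (the function name and figure contents below are illustrative, not from the original question): with the Agg backend you render the figure to an in-memory PNG instead of opening a GUI window, so no main-thread event loop is ever involved.

```python
import io

import matplotlib
matplotlib.use('Agg')  # must be set before importing pyplot
import matplotlib.pyplot as plt

def render_plot_png():
    # Build the figure and serialize it to PNG bytes in memory;
    # no GUI main loop (and no main-thread requirement) is involved.
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    buf = io.BytesIO()
    fig.savefig(buf, format='png')
    plt.close(fig)  # free the figure; important in a long-running server
    return buf.getvalue()

# In a Flask view you would return something like:
# Response(render_plot_png(), mimetype='image/png')
```

Closing each figure after rendering matters in a server, since leaked figures accumulate across requests.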
\n", + "system": "" + }, + { + "instruction": "Illegal instruction(core dumped) tensorflow", + "input": "", + "output": "

I had the same problem and had to downgrade tensorflow to 1.5.0:

\n\n
pip uninstall tensorflow\npip install tensorflow==1.5.0\n
\n\n

Edit: As @Tobsta points out in the comments, the other option is to compile the binaries from source. The precompiled binaries of versions >1.5 use AVX instructions that are not supported by older CPUs.

\n", + "system": "" + }, + { + "instruction": "What is the difference between tensors and sparse tensors?", + "input": "", + "output": "

Matthew did a great job, but I would love to shed more light on sparse tensors with an example.

\n\n

If a tensor has lots of values that are zero, it can be called sparse.

\n\n

Let's consider a sparse 1-D tensor:

\n\n
[0, 7, 0, 0, 8, 0, 0, 0, 0]\n
\n\n

A sparse representation of the same tensor will focus only on the non-zero values

\n\n
values = [7,8]\n
\n\n

We also have to remember where those values occur, via their indices:

\n\n
indices = [1,4]\n
\n\n

The one-dimensional indices form will work with some methods, for this one-dimensional example, but in general indices have multiple dimensions, so it will be more consistent (and work everywhere) to represent indices like this:

\n\n
indices = [[1], [4]]\n
\n\n

With values and indices alone, we don't have quite enough information yet: how many zeros are there? For that, we also store the dense shape of the tensor.

\n\n
 dense_shape = [9]\n
\n\n

These three things together, values, indices, and dense_shape, are a sparse representation of the tensor

\n\n
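As a sanity check on the representation, the dense tensor can be rebuilt from those three pieces with plain Python (a toy 1-D sketch, not TensorFlow's implementation):

```python
def sparse_to_dense(values, indices, dense_shape):
    # Rebuild the dense 1-D tensor: start from all zeros,
    # then place each value at its recorded index.
    dense = [0] * dense_shape[0]
    for v, idx in zip(values, indices):
        dense[idx[0]] = v
    return dense

dense = sparse_to_dense([7, 8], [[1], [4]], [9])
# dense == [0, 7, 0, 0, 8, 0, 0, 0, 0], the original tensor
```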

In TensorFlow 2.0 it can be implemented as:

\n\n
x = tf.SparseTensor(values=[7,8],indices=[[1],[4]],dense_shape=[9])\nx\n#o/p: <tensorflow.python.framework.sparse_tensor.SparseTensor at 0x7ff04a58c4a8>\n\nprint(x.values)\nprint(x.dense_shape)\nprint(x.indices)\n#o/p: \ntf.Tensor([7 8], shape=(2,), dtype=int32)\ntf.Tensor([9], shape=(1,), dtype=int64)\ntf.Tensor(\n[[1]\n [4]], shape=(2, 1), dtype=int64)\n
\n\n

EDITED to correct indices as pointed out in the comments.

\n", + "system": "" + }, + { + "instruction": "Getting around tf.argmax which is not differentiable", + "input": "", + "output": "

As aidan suggested, it's just a softargmax stretched to the limits by beta. We can use tf.nn.softmax to get around the numerical issues:

\n\n
def softargmax(x, beta=1e10):\n  x = tf.convert_to_tensor(x)\n  x_range = tf.range(x.shape.as_list()[-1], dtype=x.dtype)\n  return tf.reduce_sum(tf.nn.softmax(x*beta) * x_range, axis=-1)\n
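To see why a large beta recovers argmax, here is the same idea written out in NumPy (a hedged illustration, independent of the TensorFlow version above): the softmax weights collapse onto the position of the maximum as beta grows.

```python
import numpy as np

def softargmax_np(x, beta=1e10):
    # Softmax-weighted average of index positions; large beta -> hard argmax
    x = np.asarray(x, dtype=np.float64)
    e = np.exp(beta * (x - x.max()))  # subtract max for numerical stability
    p = e / e.sum()
    return float(np.sum(p * np.arange(len(x))))

print(softargmax_np([0.1, 2.0, 0.5]))  # -> 1.0, the index of the max
```

Unlike `tf.argmax`, this surrogate is a smooth function of `x`, so gradients can flow through it.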
\n", + "system": "" + }, + { + "instruction": "replicate a row tensor using tf.tile?", + "input": "", + "output": "

Take the following: vec is a vector and multiply is your m, the number of times to repeat the vector. tf.tile is applied to the vector, and then tf.reshape reshapes the result into the desired structure.

\n\n
import tensorflow as tf\n\nvec = tf.constant([1, 2, 3, 4])\nmultiply = tf.constant([3])\n\nmatrix = tf.reshape(tf.tile(vec, multiply), [ multiply[0], tf.shape(vec)[0]])\nwith tf.Session() as sess:\n    print(sess.run([matrix]))\n
\n\n

This results in:

\n\n
[array([[1, 2, 3, 4],\n       [1, 2, 3, 4],\n       [1, 2, 3, 4]], dtype=int32)]\n
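The same tile-then-reshape trick can be checked quickly in NumPy, which mirrors what the TensorFlow ops do here:

```python
import numpy as np

vec = np.array([1, 2, 3, 4])
m = 3
# Repeat the vector m times end-to-end, then fold into an (m, len(vec)) matrix
matrix = np.tile(vec, m).reshape(m, vec.shape[0])
# matrix:
# [[1 2 3 4]
#  [1 2 3 4]
#  [1 2 3 4]]
```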
\n", + "system": "" + }, + { + "instruction": "tensorflow einsum vs. matmul vs. tensordot", + "input": "", + "output": "

Both tf.tensordot() and tf.einsum() are syntactic sugar that wrap one or more invocations of tf.matmul() (although in some special cases tf.einsum() can reduce to the simpler elementwise tf.multiply()).

\n

In the limit, I'd expect all three functions to have equivalent performance for the same computation. However, for smaller matrices, it may be more efficient to use tf.matmul() directly, because it would yield a simpler TensorFlow graph with fewer operations, and hence the per-operation invocation costs will be lower.

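The claimed equivalence is easy to verify numerically; the NumPy versions of these three ops follow the same semantics for a plain matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((3, 4))
b = rng.random((4, 5))

r_matmul = a @ b                                 # matmul
r_einsum = np.einsum('ij,jk->ik', a, b)          # einsum with explicit indices
r_tensordot = np.tensordot(a, b, axes=[[1], [0]])  # contract a's dim 1 with b's dim 0

# All three produce the same (3, 5) result, up to floating-point noise
```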
\n", + "system": "" + }, + { + "instruction": "How to get the type of a Tensor?", + "input": "", + "output": "

You can use get_shape() to get the shape of a tensorflow variable.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.get_shape()\n(256, 100)\n
\n\n

You can use dtype property to get the type of a tensorflow variable.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.dtype\n<dtype: 'float32_ref'>\n
\n\n

You can use as_numpy_dtype property of dtype to convert from tf.dtype to numpy dtype.

\n\n
>>> x = tf.Variable(tf.random_normal([256, 100]))\n>>> x.dtype.as_numpy_dtype\n<class 'numpy.float32'>\n
\n", + "system": "" + }, + { + "instruction": "How is tf.summary.tensor_summary meant to be used?", + "input": "", + "output": "

I cannot get it to work either. That feature seems to be still under development; see this video from the TensorFlow Dev Summit, which says as much (starting at 9:17): https://youtu.be/eBbEDRsCmv4?t=9m17s. It will probably be better defined, and examples provided, in the future.

\n", + "system": "" + }, + { + "instruction": "Split tensor into training and test sets", + "input": "", + "output": "

As elham mentioned, you can use scikit-learn to do this easily. scikit-learn is an open source library for machine learning. There are tons of tools for data preparation including the model_selection module, which handles comparing, validating and choosing parameters.

\n\n

The model_selection.train_test_split() method is specifically designed to split your data into train and test sets randomly and by percentage.

\n\n
X_train, X_test, y_train, y_test = train_test_split(features,\n                                                    labels,\n                                                    test_size=0.33,\n                                                    random_state=42)\n
\n\n

test_size is the percentage to reserve for testing and random_state is to seed the random sampling.

\n\n

I typically use this to provide train and validation data sets, and keep true test data separately. You could just run train_test_split twice to do this as well. I.e. split the data into (Train + Validation) and Test, then split Train + Validation into two separate tensors.

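If you want to see what train_test_split does under the hood, here is a minimal pure-Python stand-in (a simplified sketch, not scikit-learn's implementation): shuffle the indices with a seeded generator, then cut at the test_size boundary.

```python
import random

def train_test_split_simple(features, labels, test_size=0.33, random_state=42):
    # Shuffle indices deterministically, then cut at the test_size boundary
    idx = list(range(len(features)))
    random.Random(random_state).shuffle(idx)
    n_test = int(round(len(idx) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_train = [features[i] for i in train_idx]
    X_test = [features[i] for i in test_idx]
    y_train = [labels[i] for i in train_idx]
    y_test = [labels[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

X_train, X_test, y_train, y_test = train_test_split_simple(list(range(10)), list(range(10)))
```

Because features and labels are indexed with the same shuffled indices, each feature stays paired with its label.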
\n", + "system": "" + }, + { + "instruction": "Understanding `tf.nn.nce_loss()` in tensorflow", + "input": "", + "output": "

Let's look at the relevant code in the word2vec example (examples/tutorials/word2vec).

\n
embeddings = tf.Variable(\n    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))\nembed = tf.nn.embedding_lookup(embeddings, train_inputs)\n
\n

These two lines create the embedding representations. embeddings is a matrix in which each row represents a word vector. embedding_lookup is a quick way to get the vectors corresponding to train_inputs. In the word2vec example, train_inputs consists of int32 numbers representing the ids of the target words. Basically, the looked-up vectors play the role of the hidden-layer features.

\n
# Construct the variables for the NCE loss\nnce_weights = tf.Variable(\n    tf.truncated_normal([vocabulary_size, embedding_size],\n                        stddev=1.0 / math.sqrt(embedding_size)))\nnce_biases = tf.Variable(tf.zeros([vocabulary_size]))\n
\n

These two lines create the parameters. They will be updated by the optimizer during training. We can use tf.matmul(embed, tf.transpose(nce_weights)) + nce_biases to get the final output scores. In other words, it can replace the last inner-product layer of a classifier.

\n
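A quick NumPy shape check of that matmul equivalence (the sizes here are made-up placeholders, not the word2vec values):

```python
import numpy as np

bs, embed_size, vocab_size = 4, 128, 1000
embed = np.zeros((bs, embed_size))               # hidden-layer features ("inputs")
nce_weights = np.zeros((vocab_size, embed_size))  # one row per vocabulary word
nce_biases = np.zeros(vocab_size)

# Scores over the whole vocabulary, as in the matmul mentioned above:
# this is exactly the shape a final inner-product layer would produce.
scores = embed @ nce_weights.T + nce_biases
# scores.shape == (bs, vocab_size)
```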
loss = tf.reduce_mean(\n      tf.nn.nce_loss(weights=nce_weights,     # [vocab_size, embed_size]\n                   biases=nce_biases,         # [vocab_size]\n                   labels=train_labels,       # [bs, 1]\n                   inputs=embed,              # [bs, embed_size]\n                   num_sampled=num_sampled, \n                   num_classes=vocabulary_size))\n
\n

These lines create the nce loss; @garej has given a very good explanation of it. num_sampled refers to the number of negative samples in the nce algorithm.

\n
\n

To illustrate the usage of nce, we can apply it in the mnist example (examples/tutorials/mnist/mnist_deep.py) with the following 2 steps:

\n

1. Replace embed with the hidden layer output. The dimension of the hidden layer is 1024 and num_output is 10. The minimum value of num_sampled is 1. Remember to remove the last inner-product layer in deepnn().

\n
y_conv, keep_prob = deepnn(x)                                            \n                                                                           \nnum_sampled = 1                                                          \nvocabulary_size = 10                                                     \nembedding_size = 1024                                                    \nwith tf.device('/cpu:0'):                                                \n  embed = y_conv                                                         \n  # Construct the variables for the NCE loss                             \n  nce_weights = tf.Variable(                                             \n      tf.truncated_normal([vocabulary_size, embedding_size],             \n                          stddev=1.0 / math.sqrt(embedding_size)))       \n  nce_biases = tf.Variable(tf.zeros([vocabulary_size])) \n
\n

2. Create the loss and compute the output. After computing the output, we can use it to calculate accuracy. Note that the label here is not a one-hot vector as used in softmax; the labels are the original labels of the training samples.

\n
loss = tf.reduce_mean(                                   \n    tf.nn.nce_loss(weights=nce_weights,                           \n                   biases=nce_biases,                             \n                   labels=y_idx,                                  \n                   inputs=embed,                                  \n                   num_sampled=num_sampled,                       \n                   num_classes=vocabulary_size))                  \n                                                                    \noutput = tf.matmul(y_conv, tf.transpose(nce_weights)) + nce_biases\ncorrect_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y_, 1))\n
\n

When we set num_sampled=1, the val accuracy will end at around 98.8%. And if we set num_sampled=9, we can get almost the same val accuracy as trained by softmax. But note that nce is different from softmax.

\n

The full code for training mnist with nce can be found here. Hope it is helpful.

\n", + "system": "" + }, + { + "instruction": "How to download previous version of tensorflow?", + "input": "", + "output": "

It worked for me; I was on 1.6:

\n
pip install tensorflow==1.5\n
\n", + "system": "" + }, + { + "instruction": "How-to run TensorFlow on multiple core and threads", + "input": "", + "output": "

According to Tensorflow:

\n\n
\n

The two configurations listed below are used to optimize CPU performance by \n adjusting the thread pools.

\n \n \n \n

These configurations are set via the tf.ConfigProto and passed to\n tf.Session in the config attribute as shown in the snippet below. For both\n configuration options, if they are unset or set to 0, will default to the\n number of logical CPU cores. Testing has shown that the default is effective\n for systems ranging from one CPU with 4 cores to multiple CPUs with 70+\n combined logical cores. A common alternative optimization is to set the number\n of threads in both pools equal to the number of physical cores rather than\n logical cores

\n\n
config = tf.ConfigProto()\nconfig.intra_op_parallelism_threads = 44\nconfig.inter_op_parallelism_threads = 44\ntf.session(config=config)\n
\n
\n\n
\n\n
\n

In versions of TensorFlow before 1.2, it was recommended to use multi-threaded,\n queue-based input pipelines for performance. Beginning with TensorFlow 1.4,\n however, it is recommended to use the tf.data module instead.

\n
\n\n
\n\n

Yes, in Linux you can check your CPU usage with top and press 1 to show the usage per CPU. Note: the percentage shown depends on the Irix/Solaris mode.

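To pick values for those two options, you can look up the logical core count from the Python standard library (getting physical-core counts needs extra tooling such as psutil, so only the logical count is shown here):

```python
import os

# Number of logical CPU cores visible to the process; this is what
# TensorFlow defaults to when the thread options are unset or 0.
logical_cores = os.cpu_count()
print(logical_cores)
```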
\n", + "system": "" + }, + { + "instruction": "For what reason Convolution 1x1 is used in deep neural networks?", + "input": "", + "output": "

You can think about 1x1xD convolution as a dimensionality reduction technique when it's placed somewhere into a network.

\n\n

If you have an input volume of 100x100x512 and you convolve it with a set of D filters, each of size 1x1x512, you reduce the number of features from 512 to D.\nThe output volume is, therefore, 100x100xD.

\n\n

As you can see, this (1x1x512)xD convolution is mathematically equivalent to a fully connected layer. The main difference is that while an FC layer requires the input to have a fixed size, the convolutional layer can accept as input any volume with spatial extent greater than or equal to 100x100.

\n\n
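This equivalence can be verified numerically; a small NumPy sketch (with toy sizes standing in for 100x100x512) applies the same weight matrix once as a 1x1 convolution and once as a per-pixel fully connected layer:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C_in, D = 4, 4, 8, 3       # toy stand-ins for 100, 100, 512, D
x = rng.random((H, W, C_in))
w = rng.random((C_in, D))        # D filters, each of shape 1x1xC_in

# 1x1 convolution: the same linear map applied at every spatial location
conv_out = np.einsum('hwc,cd->hwd', x, w)

# Fully connected layer applied independently to each pixel's channel vector
fc_out = (x.reshape(-1, C_in) @ w).reshape(H, W, D)

# conv_out and fc_out are identical up to floating-point noise
```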

A 1x1xD convolution can substitute for any fully connected layer because of this equivalence.

\n\n

\nIn addition, 1x1xD convolutions not only reduce the number of features passed to the next layer, but also introduce new parameters and a new non-linearity into the network, which can help increase model accuracy.

\n\n

When the 1x1xD convolution is placed at the end of a classification network, it acts exactly like an FC layer; but instead of thinking about it as a dimensionality-reduction technique, it's more intuitive to think of it as a layer that outputs a tensor with shape WxHxnum_classes.

\n\n

The spatial extent of the output tensor (identified by W and H) is dynamic and is determined by the locations of the input image that the network analyzed.

\n\n

If the network has been defined with an input of 200x200x3 and we feed it an image of this size, the output will be a map with W = H = 1 and depth = num_classes.\nBut if the input image has a spatial extent greater than 200x200, the convolutional network will analyze different locations of the input image (just like a standard convolution does) and will produce a tensor with W > 1 and H > 1.\nThis is not possible with an FC layer, which constrains the network to accept a fixed-size input and produce a fixed-size output.

\n", + "system": "" + }, + { + "instruction": "What does it mean to unroll a RNN dynamically?", + "input": "", + "output": "

From the documentation, I understand that what they are saying is that the parameter sequence_length in the rnn method affects performance because, when set, it will perform dynamic computation and stop early.

\n

For example, if the RNN's largest input sequence has a length of 50 and the other sequences are shorter, it is better to set the sequence_length for each sequence, so that the computation for each sequence stops when that sequence ends rather than processing the padding zeros up to 50 timesteps. However, if sequence_length is not provided, every sequence is considered to have the same length, so the zeros used for padding are treated as normal items in the sequence.

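For instance, the per-sequence lengths you would pass as sequence_length can be derived from the padded batch itself; a NumPy sketch, assuming 0 is the padding value (an assumption for illustration):

```python
import numpy as np

# Toy padded batch of 2 sequences, max length 5; 0 marks padding
batch = np.array([[5, 3, 9, 0, 0],
                  [7, 1, 0, 0, 0]])

# True length of each sequence = number of non-padding entries
sequence_length = (batch != 0).sum(axis=1)
# sequence_length -> [3, 2]
```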
\n

This does not mean that dynamic_rnn is less performant; the documentation says that the parameter sequence_length will not affect performance because the computation is already dynamic.

\n

Also according to this post about RNNs in Tensorflow:

\n
\n

Internally, tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you\u2019re unable to pass in longer sequences (> 200) than you\u2019ve originally specified.

\n

tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size. What about performance? You may think the static rnn is faster than its dynamic counterpart because it pre-builds the graph. In my experience that\u2019s not the case.

\n

In short, just use tf.nn.dynamic_rnn. There is no benefit to tf.nn.rnn and I wouldn\u2019t be surprised if it was deprecated in the future.

\n
\n

dynamic_rnn is even faster (or at least equal), so he suggests using dynamic_rnn anyway.

\n", + "system": "" + }, + { + "instruction": "Does TensorFlow view all CPUs of one machine as ONE device?", + "input": "", + "output": "

By default all CPUs available to the process are aggregated under cpu:0 device.

\n\n

There's an answer by mrry here showing how to create logical devices like /cpu:1, /cpu:2.

\n\n

There doesn't seem to be working functionality in TensorFlow to pin logical devices to specific physical cores or to use NUMA nodes.

\n\n

A possible work-around is to use distributed TensorFlow with multiple processes on one machine and use taskset on Linux to pin specific processes to specific cores.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - introducing both L2 regularization and dropout into the network. Does it makes any sense?", + "input": "", + "output": "

OK, after some additional effort I managed to solve it and introduce both L2 and dropout into my network; the code is below. I got a slight improvement over the same network without dropout (with L2 in place). I am still not sure if it is really worth the effort to introduce both of them, L2 and dropout, but at least it works and slightly improves the results.

\n\n
#ANN with introduced dropout\n#This time we still use the L2 but restrict training dataset\n#to be extremely small\n\n#get just first 500 of examples, so that our ANN can memorize whole dataset\ntrain_dataset_2 = train_dataset[:500, :]\ntrain_labels_2 = train_labels[:500]\n\n#batch size for SGD and beta parameter for L2 loss\nbatch_size = 128\nbeta = 0.001\n\n#that's how many hidden neurons we want\nnum_hidden_neurons = 1024\n\n#building tensorflow graph\ngraph = tf.Graph()\nwith graph.as_default():\n  # Input data. For the training data, we use a placeholder that will be fed\n  # at run time with a training minibatch.\n  tf_train_dataset = tf.placeholder(tf.float32,\n                                    shape=(batch_size, image_size * image_size))\n  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n  tf_valid_dataset = tf.constant(valid_dataset)\n  tf_test_dataset = tf.constant(test_dataset)\n\n  #now let's build our new hidden layer\n  #its weights\n  hidden_weights = tf.Variable(\n    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))\n  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))\n\n  #now the layer itself. It multiplies data by weights, adds biases\n  #and takes ReLU over result\n  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases)\n\n  #add dropout on hidden layer\n  #we pick up the probabylity of switching off the activation\n  #and perform the switch off of the activations\n  keep_prob = tf.placeholder(\"float\")\n  hidden_layer_drop = tf.nn.dropout(hidden_layer, keep_prob)  \n\n  #time to go for output linear layer\n  #out weights connect hidden neurons to output labels\n  #biases are added to output labels  \n  out_weights = tf.Variable(\n    tf.truncated_normal([num_hidden_neurons, num_labels]))  \n\n  out_biases = tf.Variable(tf.zeros([num_labels]))  \n\n  #compute output\n  #notice that upon training we use the switched off activations\n  #i.e. 
the variaction of hidden_layer with the dropout active\n  out_layer = tf.matmul(hidden_layer_drop,out_weights) + out_biases\n  #our real output is a softmax of prior result\n  #and we also compute its cross-entropy to get our loss\n  #Notice - we introduce our L2 here\n  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n    out_layer, tf_train_labels) +\n    beta*tf.nn.l2_loss(hidden_weights) +\n    beta*tf.nn.l2_loss(hidden_biases) +\n    beta*tf.nn.l2_loss(out_weights) +\n    beta*tf.nn.l2_loss(out_biases)))\n\n  #now we just minimize this loss to actually train the network\n  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n\n  #nice, now let's calculate the predictions on each dataset for evaluating the\n  #performance so far\n  # Predictions for the training, validation, and test data.\n  train_prediction = tf.nn.softmax(out_layer)\n  valid_relu = tf.nn.relu(  tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)\n  valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) \n\n  test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases)\n  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)\n\n\n\n#now is the actual training on the ANN we built\n#we will run it for some number of steps and evaluate the progress after \n#every 500 steps\n\n#number of steps we will train our ANN\nnum_steps = 3001\n\n#actual training\nwith tf.Session(graph=graph) as session:\n  tf.initialize_all_variables().run()\n  print(\"Initialized\")\n  for step in range(num_steps):\n    # Pick an offset within the training data, which has been randomized.\n    # Note: we could use better randomization across epochs.\n    offset = (step * batch_size) % (train_labels_2.shape[0] - batch_size)\n    # Generate a minibatch.\n    batch_data = train_dataset_2[offset:(offset + batch_size), :]\n    batch_labels = train_labels_2[offset:(offset + batch_size), :]\n    # Prepare a 
dictionary telling the session where to feed the minibatch.\n    # The key of the dictionary is the placeholder node of the graph to be fed,\n    # and the value is the numpy array to feed to it.\n    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5}\n    _, l, predictions = session.run(\n      [optimizer, loss, train_prediction], feed_dict=feed_dict)\n    if (step % 500 == 0):\n      print(\"Minibatch loss at step %d: %f\" % (step, l))\n      print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n      print(\"Validation accuracy: %.1f%%\" % accuracy(\n        valid_prediction.eval(), valid_labels))\n      print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n
\n", + "system": "" + }, + { + "instruction": "Basic 1d convolution in tensorflow", + "input": "", + "output": "

I am sorry to say it, but your first code was almost right. You just swapped x and phi in tf.nn.conv2d:

\n\n
g = tf.Graph()\nwith g.as_default():\n    # data shape is \"[batch, in_height, in_width, in_channels]\",\n    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name=\"x\")\n    # filter shape is \"[filter_height, filter_width, in_channels, out_channels]\"\n    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name=\"phi\")\n    conv = tf.nn.conv2d(\n        x,\n        phi,\n        strides=[1, 1, 1, 1],\n        padding=\"SAME\",\n        name=\"conv\")\n
\n\n
\n\n

Update: TensorFlow has supported 1D convolution since version r0.11, via tf.nn.conv1d. I previously wrote a guide to using it in the Stack Overflow documentation (now defunct), which I'm pasting here:

\n\n
\n\n

Guide to 1D convolution

\n\n

Consider a basic example with an input of length 10, and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16].

\n\n
batch_size = 32\nx = tf.placeholder(tf.float32, [batch_size, 10, 16])\n
\n\n

We then create a filter of width 3 that takes 16 channels as input and also outputs 16 channels.

\n\n
filter = tf.zeros([3, 16, 16])  # these should be real values, not 0\n
\n\n
\n\n

Finally we apply tf.nn.conv1d with a stride and a padding:\n- stride: an integer s\n- padding: this works as in 2D; you can choose between SAME and VALID. SAME will output the same length as the input, while VALID will not add zero padding.

\n\n

For our example, we take a stride of 2 and VALID padding.\n

\n\n
output = tf.nn.conv1d(x, filter, stride=2, padding=\"VALID\")\n
\n\n

The output shape should be [batch_size, 4, 16].
\nWith padding=\"SAME\", we would have had an output shape of [batch_size, 5, 16].

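The two output lengths above follow the usual convolution shape formulas; a small sketch, assumed to match TF's SAME/VALID semantics:

```python
import math

def conv1d_out_len(n, k, stride, padding):
    # n: input length, k: filter width
    if padding == "VALID":
        # only full filter placements, no zero padding
        return math.ceil((n - k + 1) / stride)
    if padding == "SAME":
        # zero-padded so output length depends only on n and stride
        return math.ceil(n / stride)
    raise ValueError(padding)

print(conv1d_out_len(10, 3, 2, "VALID"))  # -> 4
print(conv1d_out_len(10, 3, 2, "SAME"))   # -> 5
```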
\n", + "system": "" + }, + { + "instruction": "Which Google Cloud Platform service is the easiest for running Tensorflow?", + "input": "", + "output": "

Summing up the answers:

\n\n\n\n

Instructions to manually run TensorFlow on Compute Engine:

\n\n
    \n
  1. Create a project
  \n
  2. Open the Cloud Shell (a button at the top)
  \n
  3. List machine types: gcloud compute machine-types list. You can change the machine type I used in the next command.
  \n
  4. Create an instance:
  \n
\n\n\n\n
gcloud compute instances create tf \\\n  --image container-vm \\\n  --zone europe-west1-c \\\n  --machine-type n1-standard-2\n
\n\n
    \n
  1. Run sudo docker run -d -p 8888:8888 --name tf b.gcr.io/tensorflow-udacity/assignments:0.5.0 (change the image name to the desired one)
  \n
  2. Find your instance in the dashboard and edit the default network.
  \n
  3. Add a firewall rule to allow your IP as well as protocol and port tcp:8888.
  \n
  4. Find the External IP of the instance from the dashboard. Open IP:8888 in your browser. Done!
  \n
  5. When you are finished, delete the created cluster to avoid charges.
  \n
\n\n

This is how I did it and it worked. I am sure there is an easier way to do it.

\n\n

More Resources

\n\n

You might be interested to learn more about:

\n\n\n\n

Good to know

\n\n\n\n

Thanks to @user728291, @MattW, @CJCullen, and @zain-rizvi

\n", + "system": "" + }, + { + "instruction": "Siamese Neural Network in TensorFlow", + "input": "", + "output": "

Update with tf.layers

\n\n

If you use the tf.layers module to build your network, you can simply use the argument reuse=True for the second part of the Siamese network:

\n\n
x = tf.ones((1, 3))\ny1 = tf.layers.dense(x, 4, name='h1')\ny2 = tf.layers.dense(x, 4, name='h1', reuse=True)\n\n# y1 and y2 will evaluate to the same values\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\nprint(sess.run(y1))\nprint(sess.run(y2))  # both prints will return the same values\n
\n\n
\n\n

Old answer with tf.get_variable

\n\n

You can try using the function tf.get_variable(). (See the tutorial)

\n\n

Implement the first network using a variable scope with reuse=False:

\n\n
with tf.variable_scope('Inference', reuse=False):\n    weights_1 = tf.get_variable('weights', shape=[1, 1],\n                              initializer=...)\n    output_1 = weights_1 * input_1\n
\n\n

Then implement the second with the same code except using reuse=True

\n\n
with tf.variable_scope('Inference', reuse=True):\n    weights_2 = tf.get_variable('weights')\n    output_2 = weights_2 * input_2\n
\n\n

The first implementation will create and initialize every variable of the LSTM, whereas the second implementation will use tf.get_variable() to get the same variables used in the first network. That way, variables will be shared.

\n\n

Then you just have to use whatever loss you want (e.g. you can use the L2 distance between the two siamese networks), and the gradients will backpropagate through both networks, updating the shared variables with the sum of the gradients.

\n", + "system": "" + }, + { + "instruction": "What is the purpose of graph collections in TensorFlow?", + "input": "", + "output": "

Remember that under the hood, Tensorflow is a system for specifying and then executing computational data flow graphs. The graph collections are used as part of keeping track of the constructed graphs and how they must be executed. For example, when you create certain kinds of ops, such as tf.train.batch_join, the code that adds the op will also add some queue runners to the QUEUE_RUNNERS graph collection. Later, when you call start_queue_runners(), by default, it will look at the QUEUE_RUNNERS collection to know which runners to start.

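Conceptually, a graph collection is just a named registry attached to the graph object; a toy Python sketch of the mechanism (not TensorFlow's actual implementation, and the runner strings are placeholders):

```python
from collections import defaultdict

class Graph:
    # Toy stand-in for a TF graph's collection machinery
    def __init__(self):
        self._collections = defaultdict(list)

    def add_to_collection(self, name, value):
        self._collections[name].append(value)

    def get_collection(self, name):
        return list(self._collections[name])

g = Graph()
# e.g. done behind the scenes by ops like tf.train.batch_join:
g.add_to_collection('queue_runners', 'runner_1')
g.add_to_collection('queue_runners', 'runner_2')
# later, something like start_queue_runners() would iterate over:
print(g.get_collection('queue_runners'))  # -> ['runner_1', 'runner_2']
```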
\n", + "system": "" + }, + { + "instruction": "TensorFlow Error found in Tutorial", + "input": "", + "output": "

I figured it out. As you can see in the ValueError, it says No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess). So the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes.

\n\n
if i%100 == 0:\n        train_accuracy = accuracy.eval(session=sess, feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})\n
\n\n

And

\n\n
train_step.run(session=sess, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n
\n\n

Now the code is working fine.

\n", + "system": "" + }, + { + "instruction": "Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext", + "input": "", + "output": "

The issue was resolved after I uninstalled and re-installed pyspark using the following commands:

\n
pip uninstall pyspark\npip install pyspark\n
\n", + "system": "" + }, + { + "instruction": "how to normalize input data for models in tensorflow", + "input": "", + "output": "

There are different ways of \"normalizing data\". Depending which one you have in mind, it may or may not be easy to implement in your case.

\n\n

1. Fixed normalization

\n\n

If you know the fixed range(s) of your values (e.g. feature #1 has values in [-5, 5], feature #2 has values in [0, 100], etc.), you could easily pre-process your feature tensor in parse_example(), e.g.:

\n\n
def normalize_fixed(x, current_range, normed_range):\n    current_min, current_max = tf.expand_dims(current_range[:, 0], 1), tf.expand_dims(current_range[:, 1], 1)\n    normed_min, normed_max = tf.expand_dims(normed_range[:, 0], 1), tf.expand_dims(normed_range[:, 1], 1)\n    x_normed = (x - current_min) / (current_max - current_min)\n    x_normed = x_normed * (normed_max - normed_min) + normed_min\n    return x_normed\n\ndef parse_example(line_batch, \n                  fixed_range=[[-5, 5], [0, 100], ...],\n                  normed_range=[[0, 1]]):\n    # ...\n    features = tf.transpose(features)\n    features = normalize_fixed(features, fixed_range, normed_range)\n    # ...\n
\n\n

2. Per-sample normalization

\n\n

If your features are supposed to have approximately the same range of values, per-sample normalization could also be considered, i.e. applying normalization considering the features moments (mean, variance) for each sample:

\n\n
def normalize_with_moments(x, axes=[0, 1], epsilon=1e-8):\n    mean, variance = tf.nn.moments(x, axes=axes)\n    x_normed = (x - mean) / tf.sqrt(variance + epsilon) # epsilon to avoid dividing by zero\n    return x_normed\n\ndef parse_example(line_batch):\n    # ...\n    features = tf.transpose(features)\n    features = normalize_with_moments(features)\n    # ...\n
\n\n

3. Batch normalization

\n\n

You could apply the same procedure over a complete batch instead of per-sample, which may make the process more stable:

\n\n
data_batch = normalize_with_moments(data_batch, axes=[1, 2])\n
\n\n

Similarly, you could use tf.nn.batch_normalization

\n\n

4. Dataset normalization

\n\n

Normalizing using the mean/variance computed over the whole dataset would be the trickiest since, as you mentioned, it is a large, split one.

\n\n

tf.data.Dataset isn't really meant for such global computation. A solution would be to use whatever tools you have to pre-compute the dataset moments, then use this information for your TF pre-processing.

\n\n
\n\n

As mentioned by @MiniQuark, Tensorflow has a Transform library you could use to preprocess your data. Have a look at the Get Started, or for instance at the tft.scale_to_z_score() method for sample normalization.

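A minimal pure-Python sketch of that two-pass idea (the helper names here are hypothetical, not a TF API): pre-compute the dataset moments in one offline pass, then bake them into the preprocessing as constants.

```python
import math

def dataset_moments(samples):
    """One offline pass over the whole dataset to get mean and variance."""
    n = 0
    total = 0.0
    total_sq = 0.0
    for x in samples:
        n += 1
        total += x
        total_sq += x * x
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, var

def make_normalizer(mean, var, epsilon=1e-8):
    """Bake the pre-computed moments into the per-sample preprocessing."""
    std = math.sqrt(var + epsilon)
    return lambda x: (x - mean) / std

mean, var = dataset_moments([1.0, 2.0, 3.0, 4.0])
normalize = make_normalizer(mean, var)
print(normalize(2.5))  # ~0.0: the dataset mean maps to zero
```

The same pre-computed constants can then be used inside `parse_example()` in place of per-sample moments.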
\n", + "system": "" + }, + { + "instruction": "Why tensorflow uses channel-last ordering instead of row-major?", + "input": "", + "output": "

Here's the explanation:

\n\n

https://www.tensorflow.org/performance/performance_guide#use_nchw_image_data_format

\n\n
\n

Image data format refers to the representation of batches of images. TensorFlow supports NHWC (TensorFlow default) and NCHW (cuDNN default). N refers to the number of images in a batch, H refers to the number of pixels in the vertical dimension, W refers to the number of pixels in the horizontal dimension, and C refers to the channels (e.g. 1 for black and white, 3 for RGB, etc.) Although cuDNN can operate on both formats, it is faster to operate in its default format.

\n \n

The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.

\n \n

The very brief history of these two formats is that TensorFlow started by using NHWC because it was a little faster on CPUs. Then the TensorFlow team discovered that NCHW performs better when using the NVIDIA cuDNN library. The current recommendation is that users support both formats in their models. In the long term, we plan to rewrite graphs to make switching between the formats transparent.

\n
\n\n
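To make the two layouts concrete, here is a small pure-Python sketch (hypothetical helper functions, not TF code) of how the same logical element is addressed in a flat buffer under each format:

```python
def offset_nhwc(n, h, w, c, H, W, C):
    # NHWC: channel is the fastest-varying dimension in memory
    return ((n * H + h) * W + w) * C + c

def offset_nchw(n, c, h, w, C, H, W):
    # NCHW: width is the fastest-varying dimension in memory
    return ((n * C + c) * H + h) * W + w

# Same logical element (batch 0, row 1, col 2, channel 1) of a 1x4x4x3 tensor:
print(offset_nhwc(0, 1, 2, 1, H=4, W=4, C=3))  # 19
print(offset_nchw(0, 1, 1, 2, C=3, H=4, W=4))  # 22
```

The value lives at a different flat offset under each layout, which is why converting between the formats requires an actual data shuffle like the one below.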

Moreover, digging into the code we can see here that when the input is in the format NHWC, tensorflow converts it for you to NCHW.

\n\n
  if (data_format == FORMAT_NHWC) {\n    // Convert the input tensor from NHWC to NCHW.\n    TensorShape nchw_shape =\n        ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths);\n    if (in_depths > 1) {\n      Tensor transformed_input;\n      OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,\n                                             nchw_shape, &transformed_input));\n      functor::NHWCToNCHW<GPUDevice, T, 4>()(\n          ctx->eigen_device<GPUDevice>(),\n          const_cast<const Tensor&>(input).tensor<T, 4>(),\n          transformed_input.tensor<T, 4>());\n      input = transformed_input;\n    } else {\n      // If depth <= 1, then just reshape.\n      CHECK(input.CopyFrom(input, nchw_shape));\n    }\n  }\n
\n\n

You can specify the data format you want to use for every operation, but by default TensorFlow uses NHWC rather than NCHW. That's why even the TF developers still use NHWC: it avoids having to specify the format in every operation.

\n", + "system": "" + }, + { + "instruction": "Does TensorFlow job use multiple cores by default?", + "input": "", + "output": "

I found an existing answer to this question: all cores are wrapped in cpu:0, i.e., TensorFlow does indeed use multiple CPU cores by default.

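If you want to limit how many cores back that single cpu:0 device, a configuration sketch using the TF 1.x ConfigProto options (assuming TensorFlow is installed):

```python
import tensorflow as tf

# Limit the thread pools that back the single cpu:0 device.
config = tf.ConfigProto(
    intra_op_parallelism_threads=2,  # threads used inside one op (e.g. a matmul)
    inter_op_parallelism_threads=2)  # threads used to run independent ops
sess = tf.Session(config=config)
```

Leaving both values at 0 (the default) lets TensorFlow pick a value based on the number of available cores.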
\n", + "system": "" + }, + { + "instruction": "What is SYCL 1.2?", + "input": "", + "output": "

SYCL is a C++ abstraction layer for OpenCL. TensorFlow's experimental support for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler.

\n\n

As Yaroslav pointed out in his comment, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of ./configure asks about OpenCL support:

\n\n
Do you wish to build TensorFlow with OpenCL support? [y/N]\n
\n\n

If you answer N, you will not have to supply a SYCL path.

\n", + "system": "" + }, + { + "instruction": "Tensorflow TypeError: Fetch argument None has invalid type <type 'NoneType'>?", + "input": "", + "output": "

You are re-assigning the train_step variable to the second element of the result of sess.run() (which happens to be None). Hence, on the second iteration, train_step is None, which leads to the error.

\n\n

The fix is fortunately simple:

\n\n
for i in xrange(1, ITERATIONS):\n\n    # ...\n\n    # Discard the second element of the result.\n    numpy_state, _ = sess.run([final_state, train_step], feed_dict={\n        initial_state: numpy_state,\n        input_sequence: batch[0],\n        output_actual: batch[1]\n        })\n
\n", + "system": "" + }, + { + "instruction": "Why tensorflow uses channel-last ordering instead of row-major?", + "input": "", + "output": "

Here's the explanation:

\n\n

https://www.tensorflow.org/performance/performance_guide#use_nchw_image_data_format

\n\n
\n

Image data format refers to the representation of batches of images. TensorFlow supports NHWC (TensorFlow default) and NCHW (cuDNN default). N refers to the number of images in a batch, H refers to the number of pixels in the vertical dimension, W refers to the number of pixels in the horizontal dimension, and C refers to the channels (e.g. 1 for black and white, 3 for RGB, etc.) Although cuDNN can operate on both formats, it is faster to operate in its default format.

\n \n

The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU.

\n \n

The very brief history of these two formats is that TensorFlow started by using NHWC because it was a little faster on CPUs. Then the TensorFlow team discovered that NCHW performs better when using the NVIDIA cuDNN library. The current recommendation is that users support both formats in their models. In the long term, we plan to rewrite graphs to make switching between the formats transparent.

\n
\n\n

Moreover, digging into the code we can see here that when the input is in the format NHWC, tensorflow converts it for you to NCHW.

\n\n
  if (data_format == FORMAT_NHWC) {\n    // Convert the input tensor from NHWC to NCHW.\n    TensorShape nchw_shape =\n        ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths);\n    if (in_depths > 1) {\n      Tensor transformed_input;\n      OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<T>::value,\n                                             nchw_shape, &transformed_input));\n      functor::NHWCToNCHW<GPUDevice, T, 4>()(\n          ctx->eigen_device<GPUDevice>(),\n          const_cast<const Tensor&>(input).tensor<T, 4>(),\n          transformed_input.tensor<T, 4>());\n      input = transformed_input;\n    } else {\n      // If depth <= 1, then just reshape.\n      CHECK(input.CopyFrom(input, nchw_shape));\n    }\n  }\n
\n\n

You can specify the data format you want to use for every operation, but by default TensorFlow uses NHWC rather than NCHW. That's why even the TF developers still use NHWC: it avoids having to specify the format in every operation.

\n", + "system": "" + }, + { + "instruction": "SavedModel file does not exist when using Tensorflow hub", + "input": "", + "output": "

So, just deleting that folder and running the hub.load() function again solves the issue

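A hedged shell sketch of that fix; the path is an assumption (on Linux, TF-Hub caches downloaded modules under /tmp/tfhub_modules by default, and the TFHUB_CACHE_DIR environment variable overrides the location):

```shell
# Remove the (possibly corrupted) cached module, then hub.load() re-downloads it
rm -rf /tmp/tfhub_modules

# Or point the cache somewhere explicit before re-running your script:
export TFHUB_CACHE_DIR="$HOME/.cache/tfhub_modules"
```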
\n", + "system": "" + }, + { + "instruction": "TensorFlow 2.0: do you need a @tf.function decorator on top of each function?", + "input": "", + "output": "

@tf.function converts a Python function to its graph representation.

\n\n

The pattern to follow is to define the training step function, that's the most computationally intensive function, and decorate it with @tf.function.

\n\n

Usually, the code looks like:

\n\n
# model, loss, and optimizer defined previously\n\n@tf.function\ndef train_step(features, labels):\n    with tf.GradientTape() as tape:\n        predictions = model(features)\n        loss_value = loss(labels, predictions)\n    gradients = tape.gradient(loss_value, model.trainable_variables)\n    optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n    return loss_value\n\nfor features, labels in dataset:\n    lv = train_step(features, labels)\n    print(\"loss: \", lv)\n
\n", + "system": "" + }, + { + "instruction": "logits and labels must be broadcastable error in Tensorflow RNN", + "input": "", + "output": "

Make sure that the number of labels in the final classification layer is equal to the number of classes you have in your dataset. The error InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[1,2] labels_size=[1,24] shown in your question suggests that you have just two classes in your final classification layer while you actually need 24.

\n

In my case, I had 7 classes in my dataset, but I mistakenly used 4 labels in the final classification layer. Therefore, I had to change from

\n

tf.keras.layers.Dense(4, activation="softmax")

\n

to

\n

tf.keras.layers.Dense(7, activation="softmax")

\n", + "system": "" + }, + { + "instruction": "tf.multiply vs tf.matmul to calculate the dot product", + "input": "", + "output": "

tf.multiply(X, Y) or the * operator does element-wise multiplication so that:

\n
[[1 2]    [[1 3]      [[1 6]\n [3 4]] .  [2 1]]  =   [6 4]]\n
\n

whereas tf.matmul does matrix multiplication so that:

\n
[[1 0]    [[1 3]      [[1 3]\n [0 1]] .  [2 1]]  =   [2 1]]\n
\n

using tf.matmul(X, X, transpose_b=True) means that you are calculating X . X^T where ^T indicates the transposing of the matrix and . is the matrix multiplication.

\n

tf.reduce_sum(_, axis=1) takes the sum along axis 1 (counting from 0), which means you are summing the rows:

\n
tf.reduce_sum([[a, b], [c, d]], axis=1) = [a+b, c+d]\n
\n

This means that:

\n
tf.reduce_sum(tf.multiply(X, X), axis=1) = [X[1].X[1], ..., X[n].X[n]]\n
\n

so that is the one you want if you only need the squared norm of each row. On the other hand:

\n
tf.matmul(X, X, transpose_b=True) = [\n                                      [ X[1].X[1], X[1].X[2], ..., X[1].X[n] ], \n                                      [ X[2].X[1], ..., X[2].X[n] ],\n                                       ...\n                                      [ X[n].X[1], ..., X[n].X[n] ]\n                                   ]\n
\n

so that is what you need if you want the similarity between all pairs of rows.

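A quick pure-Python check of the two reductions above, with plain lists standing in for a 2x2 matrix X:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

X = [[1.0, 2.0],
     [3.0, 4.0]]

# reduce_sum(multiply(X, X), axis=1): the squared norm of each row
row_sq_norms = [dot(row, row) for row in X]
print(row_sq_norms)  # [5.0, 25.0]

# matmul(X, X, transpose_b=True): all pairwise row dot products
gram = [[dot(r1, r2) for r2 in X] for r1 in X]
print(gram)  # [[5.0, 11.0], [11.0, 25.0]]
```

The first form gives only the diagonal of the second, which is why it is the cheaper choice when you do not need the off-diagonal similarities.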
\n", + "system": "" + }, + { + "instruction": "How to import an saved Tensorflow model train using tf.estimator and predict on input data", + "input": "", + "output": "

I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.

\n\n

That particular example doesn't actually export a model, so let's do that (not needed for use case 1):

\n\n
def serving_input_receiver_fn():\n  \"\"\"Build the serving inputs.\"\"\"\n  # The outer dimension (None) allows us to batch up inputs for\n  # efficiency. However, it also means that if we want a prediction\n  # for a single instance, we'll need to wrap it in an outer list.\n  inputs = {\"x\": tf.placeholder(shape=[None, 4], dtype=tf.float32)}\n  return tf.estimator.export.ServingInputReceiver(inputs, inputs)\n\nexport_dir = classifier.export_savedmodel(\n    export_dir_base=\"/path/to/model\",\n    serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a \"canned\" estimator (such as DNNClassifier). For a workaround, see the \"Appendix: Workaround\" section.

\n\n

The code below references export_dir (return value from the export step) to emphasize that it is not \"/path/to/model\", but rather, a subdirectory of that directory whose name is a timestamp.

\n\n

Use Case 1: Perform prediction in the same process as training

\n\n

This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:

\n\n
classifier.train(input_fn=train_input_fn, steps=2000)\n# [...snip...]\npredictions = list(classifier.predict(input_fn=predict_input_fn))\npredicted_classes = [p[\"classes\"] for p in predictions]\n
\n\n

Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions

\n\n

Python Client

\n\n

Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:

\n\n
from tensorflow.contrib import predictor\n\npredict_fn = predictor.from_saved_model(export_dir)\npredictions = predict_fn(\n    {\"x\": [[6.4, 3.2, 4.5, 1.5],\n           [5.8, 3.1, 5.0, 1.7]]})\nprint(predictions['scores'])\n
\n\n

Java Client

\n\n
package dummy;\n\nimport java.nio.FloatBuffer;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.tensorflow.SavedModelBundle;\nimport org.tensorflow.Session;\nimport org.tensorflow.Tensor;\n\npublic class Client {\n\n  public static void main(String[] args) {\n    Session session = SavedModelBundle.load(args[0], \"serve\").session();\n\n    Tensor x =\n        Tensor.create(\n            new long[] {2, 4},\n            FloatBuffer.wrap(\n                new float[] {\n                  6.4f, 3.2f, 4.5f, 1.5f,\n                  5.8f, 3.1f, 5.0f, 1.7f\n                }));\n\n    // Doesn't look like Java has a good way to convert the\n    // input/output name (\"x\", \"scores\") to their underlying tensor,\n    // so we hard code them (\"Placeholder:0\", ...).\n    // You can inspect them on the command-line with saved_model_cli:\n    //\n    // $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default\n    final String xName = \"Placeholder:0\";\n    final String scoresName = \"dnn/head/predictions/probabilities:0\";\n\n    List<Tensor> outputs = session.runner()\n        .feed(xName, x)\n        .fetch(scoresName)\n        .run();\n\n    // Outer dimension is batch size; inner dimension is number of classes\n    float[][] scores = new float[2][3];\n    outputs.get(0).copyTo(scores);\n    System.out.println(Arrays.deepToString(scores));\n  }\n}\n
\n\n

C++ Client

\n\n

You'll likely want to use tensorflow::LoadSavedModel with Session.

\n\n
#include <unordered_set>\n#include <utility>\n#include <vector>\n\n#include \"tensorflow/cc/saved_model/loader.h\"\n#include \"tensorflow/core/framework/tensor.h\"\n#include \"tensorflow/core/public/session.h\"\n\nnamespace tf = tensorflow;\n\nint main(int argc, char** argv) {\n  const string export_dir = argv[1];\n\n  tf::SavedModelBundle bundle;\n  tf::Status load_status = tf::LoadSavedModel(\n      tf::SessionOptions(), tf::RunOptions(), export_dir, {\"serve\"}, &bundle);\n  if (!load_status.ok()) {\n    std::cout << \"Error loading model: \" << load_status << std::endl;\n    return -1;\n  }\n\n  // We should get the signature out of MetaGraphDef, but that's a bit\n  // involved. We'll take a shortcut like we did in the Java example.\n  const string x_name = \"Placeholder:0\";\n  const string scores_name = \"dnn/head/predictions/probabilities:0\";\n\n  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));\n  auto matrix = x.matrix<float>();\n  matrix(0, 0) = 6.4;\n  matrix(0, 1) = 3.2;\n  matrix(0, 2) = 4.5;\n  matrix(0, 3) = 1.5;\n  matrix(1, 0) = 5.8;\n  matrix(1, 1) = 3.1;\n  matrix(1, 2) = 5.0;\n  matrix(1, 3) = 1.7;\n\n  std::vector<std::pair<string, tf::Tensor>> inputs = {{x_name, x}};\n  std::vector<tf::Tensor> outputs;\n\n  tf::Status run_status =\n      bundle.session->Run(inputs, {scores_name}, {}, &outputs);\n  if (!run_status.ok()) {\n    std::cout << \"Error running session: \" << run_status << std::endl;\n    return -1;\n  }\n\n  for (const auto& tensor : outputs) {\n    std::cout << tensor.matrix<float>() << std::endl;\n  }\n}\n
\n\n

Use Case 3: Serve a model using TensorFlow Serving

\n\n

Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving:

\n\n
def serving_input_receiver_fn():\n  \"\"\"Build the serving inputs.\"\"\"\n  # The outer dimension (None) allows us to batch up inputs for\n  # efficiency. However, it also means that if we want a prediction\n  # for a single instance, we'll need to wrap it in an outer list.\n  example_bytestring = tf.placeholder(\n      shape=[None],\n      dtype=tf.string,\n  )\n  features = tf.parse_example(\n      example_bytestring,\n      tf.feature_column.make_parse_example_spec(feature_columns)\n  )\n  return tf.estimator.export.ServingInputReceiver(\n      features, {'examples': example_bytestring})\n\nexport_dir = classifier.export_savedmodel(\n    export_dir_base=\"/path/to/model\",\n    serving_input_receiver_fn=serving_input_receiver_fn)\n
\n\n

The reader is referred to TensorFlow Serving's documentation for more instructions on how to setup TensorFlow Serving, so I'll only provide the client code here:

\n\n
  # Omitting a bunch of connection/initialization code...\n  # But at some point we end up with a stub whose lifecycle\n  # is generally longer than that of a single request.\n  stub = create_stub(...)\n\n  # The actual values for prediction. We have two examples in this\n  # case, each consisting of a single, multi-dimensional feature `x`.\n  # This data here is the equivalent of the map passed to the \n  # `predict_fn` in use case #2.\n  examples = [\n    tf.train.Example(\n      features=tf.train.Features(\n        feature={\"x\": tf.train.Feature(\n          float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),\n    tf.train.Example(\n      features=tf.train.Features(\n        feature={\"x\": tf.train.Feature(\n          float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),\n  ]\n\n  # Build the RPC request.\n  predict_request = predict_pb2.PredictRequest()\n  predict_request.model_spec.name = \"default\"\n  predict_request.inputs[\"examples\"].CopyFrom(\n      tensor_util.make_tensor_proto(examples, tf.float32))\n\n  # Perform the actual prediction.\n  stub.Predict(predict_request, PREDICT_DEADLINE_SECS)\n
\n\n

Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).

\n\n

Appendix: Working around Exports from Canned Models in TF 1.3

\n\n

There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for \"custom\" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:

\n\n
# Build 3 layer DNN with 10, 20, 10 units respectively.\nclass Wrapper(tf.estimator.Estimator):\n  def __init__(self, **kwargs):\n    dnn = tf.estimator.DNNClassifier(**kwargs)\n\n    def model_fn(mode, features, labels):\n      spec = dnn._call_model_fn(features, labels, mode)\n      export_outputs = None\n      if spec.export_outputs:\n        export_outputs = {\n           \"serving_default\": tf.estimator.export.PredictOutput(\n                  {\"scores\": spec.export_outputs[\"serving_default\"].scores,\n                   \"classes\": spec.export_outputs[\"serving_default\"].classes})}\n\n      # Replace the 3rd argument (export_outputs)\n      copy = list(spec)\n      copy[4] = export_outputs\n      return tf.estimator.EstimatorSpec(mode, *copy)\n\n    super(Wrapper, self).__init__(model_fn, kwargs[\"model_dir\"], dnn.config)\n\nclassifier = Wrapper(feature_columns=feature_columns,\n                     hidden_units=[10, 20, 10],\n                     n_classes=3,\n                     model_dir=\"/tmp/iris_model\")\n
\n", + "system": "" + }, + { + "instruction": "Install TensorFlow with specific version on Anaconda", + "input": "", + "output": "

I find the existing answers unsatisfying, as the OP asked specifically about Anaconda but the answers are just pip installs.

\n

You can list the available versions for install doing

\n
conda search tensorflow-gpu\n
\n

which should give you some output that looks like

\n
Loading channels: done\n# Name                       Version           Build  Channel             \ntensorflow-gpu                 1.4.1               0  pkgs/main           \ntensorflow-gpu                 1.5.0               0  pkgs/main           \ntensorflow-gpu                 1.6.0               0  pkgs/main           \ntensorflow-gpu                 1.7.0               0  pkgs/main           \ntensorflow-gpu                 1.8.0      h7b35bdc_0  pkgs/main           \ntensorflow-gpu                 1.9.0      hf154084_0  pkgs/main           \ntensorflow-gpu                1.10.0      hf154084_0  pkgs/main           \ntensorflow-gpu                1.11.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.12.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.13.1      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.14.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                1.15.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.0.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.1.0      h0d30ee6_0  pkgs/main           \ntensorflow-gpu                 2.2.0      h0d30ee6_0  pkgs/main\n
\n

If you need to specify a particular channel, the -c/--channel option is your friend, for example:

\n
conda search -c conda-forge tensorflow-gpu\n
\n

Then you can select your version by passing it to the install command, for example:

\n
conda install tensorflow-gpu==2.0.0\n
\n

If you needed the channel option in your search, you should add the same option to the conda install command. Note this will work the same for tensorflow (i.e. not the GPU version), just change the package name accordingly.

\n

YAML Configuration

\n

If you use YAML environment configuration files, you can do the same thing:

\n
# environment.yaml\nname: my_conda_env\nchannels:\n  - conda-forge\ndependencies:\n  - tensorflow-gpu=2.0.0\n
\n

Create your environment with command:

\n
conda env create -f environment.yaml\n
\n

or if you change the version of an already created environment:

\n
conda env update -f environment.yaml\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow:ValueError: 'images' contains no shape", + "input": "", + "output": "

It's important to pass expand_animations = False as an argument:

\n\n

Try:

\n\n
tf.image.decode_image(img, expand_animations = False) \n
\n\n

to make sure you have a tensor with a 3-dimensional shape.\nThis problem is due to the GIF format: decode_gif returns a 4-D array [num_frames, height, width, 3], as opposed to the other decoders (decode_bmp, decode_jpeg, and decode_png), which return 3-D arrays [height, width, num_channels].

\n\n

For more information check the related documentation

\n", + "system": "" + }, + { + "instruction": "Difference between tf.clip_by_value and tf.clip_by_global_norm for RNN's and how to decide max value to clip on?", + "input": "", + "output": "

TL;DR: use tf.clip_by_global_norm for gradient clipping, with "some high value" as max value.

\n

clip_by_value

\n

tf.clip_by_value clips each value inside one tensor, regardless of the other values in the tensor. For instance,

\n
tf.clip_by_value([-1, 2, 10], 0, 3)  -> [0, 2, 3]  # Only the values below 0 or above 3 are changed\n
\n

Consequently, it can change the direction of the tensor, so it should be used if the values in the tensor are decorrelated from one another (which is not the case for gradient clipping), or to avoid zero / infinite values in a tensor that could lead to NaN / infinite values elsewhere (by clipping with a minimum of epsilon=1e-8 and a very big max value, for instance).

\n

clip_by_norm

\n

tf.clip_by_norm rescales one tensor if necessary, so that its L2 norm does not exceed a certain threshold. It's useful typically to avoid exploding gradient on one tensor, because you keep the gradient direction. For instance:

\n
tf.clip_by_norm([-2, 3, 6], 5)  -> [-2, 3, 6]*5/7  # The original L2 norm is 7, which is >5, so the final one is 5\ntf.clip_by_norm([-2, 3, 6], 9)  -> [-2, 3, 6]  # The original L2 norm is 7, which is <9, so it is left unchanged\n
\n

However, clip_by_norm works on only one gradient, so if you use it on all your gradient tensors, you'll unbalance them (some will be rescaled, others not, and not all with the same scale).

\n

Note that the first two work on only one tensor, while the last one is used on a list of tensors.

\n

clip_by_global_norm

\n

tf.clip_by_global_norm rescales a list of tensors so that the total norm of the vector of all their norms does not exceed a threshold. The goal is the same as clip_by_norm (avoid exploding gradient, keep the gradient directions), but it works on all the gradients at once rather than on each one separately (that is, all of them are rescaled by the same factor if necessary, or none of them are rescaled). This is better, because the balance between the different gradients is maintained.

\n

For instance:

\n
tf.clip_by_global_norm([tf.constant([-2, 3, 6]),tf.constant([-4, 6, 12])] , 14.5)\n
\n

will rescale both tensors by a factor 14.5/sqrt(49 + 196), because the first tensor has an L2 norm of 7, the second one of 14, and sqrt(7^2 + 14^2) > 14.5

\n
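The arithmetic above can be checked with a small pure-Python sketch of the computation (the math only, not the actual TF implementation):

```python
import math

def clip_by_global_norm(tensors, clip_norm):
    """Pure-Python sketch of what tf.clip_by_global_norm computes."""
    # Global norm: L2 norm of all values across every tensor in the list.
    global_norm = math.sqrt(sum(v * v for t in tensors for v in t))
    if global_norm <= clip_norm:
        return tensors, global_norm
    scale = clip_norm / global_norm  # every tensor rescaled by the same factor
    return [[v * scale for v in t] for t in tensors], global_norm

clipped, gnorm = clip_by_global_norm([[-2, 3, 6], [-4, 6, 12]], 14.5)
print(round(gnorm, 2))  # 15.65: sqrt(49 + 196), so both tensors get rescaled
```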

This (tf.clip_by_global_norm) is the one that you should use for gradient clipping. See this for instance for more information.

\n

Choosing the value

\n

Choosing the max value is the hardest part. You should use the biggest value such that you don't get exploding gradients (whose symptoms can be NaNs or infinite values appearing in your tensors, or a constant loss/accuracy after a few training steps). The value should be bigger for tf.clip_by_global_norm than for the others, since the global L2 norm will be mechanically bigger than the individual ones due to the number of tensors involved.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: How to feed a placeholder variable with a tensor?", + "input": "", + "output": "

This was discussed on GitHub in 2016; please check here. Here is the key point by concretevitamin:

\n\n
\n

One key thing to note is that Tensor is simply a symbolic object. The values of your feed_dict are the actual values, e.g. a Numpy ndarray.

\n
\n\n

The tensor, as a symbolic object, flows in the graph, while the actual values stay outside of it; we can only pass actual values into the graph, and the symbolic object cannot exist outside the graph.

\n", + "system": "" + }, + { + "instruction": "tensorflow record with float numpy array", + "input": "", + "output": "

FloatList and BytesList expect an iterable, so you need to pass them a list of floats. Remove the extra brackets in your _floats_feature, i.e.

\n\n
def _floats_feature(value):\n  return tf.train.Feature(float_list=tf.train.FloatList(value=value))\n\nnumpy_arr = np.ones((3,)).astype(np.float)\nexample = tf.train.Example(features=tf.train.Features(feature={\"bytes\": _floats_feature(numpy_arr)}))\nprint(example)\n\nfeatures {\n  feature {\n    key: \"bytes\"\n    value {\n      float_list {\n        value: 1.0\n        value: 1.0\n        value: 1.0\n      }\n    }\n  }\n}\n
\n", + "system": "" + }, + { + "instruction": "Implementation difference between TensorFlow Variable and TensorFlow Tensor", + "input": "", + "output": "

Before explaining the distinction between tensors and variables, we should be precise about what the word \"tensor\" means in the context of TensorFlow:

- In the Python API, a tf.Tensor object represents the symbolic result of a TensorFlow operation: it does not hold the operation's value, but describes how to compute it.
- In the C++ implementation, a tensorflow::Tensor object represents the concrete value of a multi-dimensional array.
\n\n

This distinction is a little confusing, and we might choose different names if we started over (in other language APIs, we prefer the name Output for a symbolic result and Tensor for a concrete value).

\n\n

A similar distinction exists for variables. In the Python API, a tf.Variable is the symbolic representation of a variable, which has methods for creating operations that read the current value of the variable, and assign values to it. In the C++ implementation, a tensorflow::Var object is a wrapper around a shared, mutable tensorflow::Tensor object.

\n\n

With that context out the way, we can address your specific questions:

\n\n
    \n
  1. What is the meaning of \"in-memory buffers\"?

    \n\n

    An in-memory buffer is simply a contiguous region of memory that has been allocated with a TensorFlow allocator. tensorflow::Tensor objects contain a pointer to an in-memory buffer, which holds the values of that tensor. The buffer could be in host memory (i.e. accessible from the CPU) or device memory (e.g. accessible only from a GPU), and TensorFlow has operations to move data between these memory spaces.

  \n
  2. What is the meaning of a \"handle\"?

    \n\n

    In the explanation in the paper, the word \"handle\" is used in a couple of different ways, which are slightly different from how TensorFlow uses the term. The paper uses \"symbolic handle\" to refer to a tf.Tensor object, and \"persistent, mutable handle\" to refer to a tf.Variable object. The TensorFlow codebase uses \"handle\" to refer to a name for a stateful object (like a tf.FIFOQueue or tf.TensorArray) that can be passed around without copying all of the values (i.e. call-by-reference).

  \n
  3. Is my initial assumption about the internal of a tensor correct?

    \n\n

    Your assumption most closely matches the definition of a (C++) tensorflow::Tensor object. The (Python) tf.Tensor object is more complicated because it refers to a function for computing a value, rather than the value itself.

  \n
  4. What is the essential internal implementation difference between a tensor and a variable?

    \n\n

    In C++, a tensorflow::Tensor and tensorflow::Var are very similar; the only difference is that tensorflow::Var also has a mutex that can be used to lock the variable when it is being updated.

    \n\n

    In Python, the essential difference is that a tf.Tensor is implemented as a dataflow graph, and it is read-only (i.e. by calling tf.Session.run()). A tf.Variable can be both read (i.e. by evaluating its read operation) and written (e.g. by running an assign operation).

    \n\n

    Why are they declared differently and why is that difference essential to TensorFlow?

    \n\n

    Tensors and variables serve different purposes. Tensors (tf.Tensor objects) can represent complex compositions of mathematical expressions, like loss functions in a neural network, or symbolic gradients. Variables represent state that is updated over time, like weight matrices and convolutional filters during training. While in principle you could represent the evolving state of a model without variables, you would end up with a very large (and repetitive) mathematical expression, so variables provide a convenient way to materialize the state of the model and, for example, share it with other machines for parallel training.

  \n
\n", + "system": "" + }, + { + "instruction": "Changing the scale of a tensor in tensorflow", + "input": "", + "output": "

You are trying to normalize the data. A classic normalization formula is this one:

\n
normalize_value = (value \u2212 min_value) / (max_value \u2212 min_value)\n
\n

The implementation on tensorflow will look like this:

\n
tensor = tf.div(\n   tf.subtract(\n      tensor, \n      tf.reduce_min(tensor)\n   ), \n   tf.subtract(\n      tf.reduce_max(tensor), \n      tf.reduce_min(tensor)\n   )\n)\n
\n

All the values of the tensor will be betweetn 0 and 1.

\n

IMPORTANT: make sure the tensor has float/double values, or the output tensor will have just zeros and ones. If you have a integer tensor call this first:

\n
tensor = tf.to_float(tensor)\n
\n

Update: as of tensorflow 2, tf.to_float() is deprecated and instead, tf.cast() should be used:

\n
tensor = tf.cast(tensor, dtype=tf.float32) # or any other tf.dtype, that is precise enough\n
\n", + "system": "" + }, + { + "instruction": "In Tensorflow, how to assign values in Tensor according to the indices?", + "input": "", + "output": "

There's a pending PR that should fix this:

\n\n

https://github.com/tensorflow/tensorflow/issues/1793

\n", + "system": "" + }, + { + "instruction": "What's difference between tf.sub and just minus operation in tensorflow?", + "input": "", + "output": "

    Yes, - and + resolve to tf.sub and tf.add. If you look at the tensorflow code you will see that these operators on tf.Variable are overloaded with the tf.* methods.

\n\n

    As to why both exist, I assume the tf.* ones exist for consistency, so that sub and, say, matmul can be used in the same way, while the operator overloading is for convenience.
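    The mechanism is plain Python operator overloading; a minimal sketch (a toy Tensor class for illustration, not the real TensorFlow code) looks like this:

```python
class Tensor:
    """Toy stand-in showing how a library can make `a - b` resolve to
    its own named subtract function via __sub__."""
    def __init__(self, value):
        self.value = value

    def __sub__(self, other):
        return subtract(self, other)   # `-` delegates to the named op

def subtract(a, b):
    # the tf.sub-style named operation
    return Tensor(a.value - b.value)

a, b = Tensor(5), Tensor(3)
assert (a - b).value == 2
assert (a - b).value == subtract(a, b).value
```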

\n", + "system": "" + }, + { + "instruction": "Example for Deploying a Tensorflow Model via a RESTful API", + "input": "", + "output": "

https://github.com/sugyan/tensorflow-mnist shows a simple REST API example that uses Flask and loads a pre-trained model (via restore).

\n\n
@app.route('/api/mnist', methods=['POST'])\ndef mnist():\n    input = ((255 - np.array(request.json, dtype=np.uint8)) / 255.0).reshape(1, 784)\n    output1 = simple(input)\n    output2 = convolutional(input)\n    return jsonify(results=[output1, output2])\n
\n\n

Also, see the online demo at https://tensorflow-mnist.herokuapp.com/. It seems the API is fast enough.

\n", + "system": "" + }, + { + "instruction": "TensorFlow object detection TF-TRT Warning: Could not find TensorRT", + "input": "", + "output": "

Try installing TensorRT: pip install tensorrt.

\n

Perhaps you also need to read this: How do I install Python packages in Google's Colab?

\n

Also check whether a GPU is allocated in Google Colab.

\n", + "system": "" + }, + { + "instruction": "Cannot install TensorFlow 1.x", + "input": "", + "output": "

What I've found on discourse:

\n

You just need to make sure you're using Python 3.5, 3.6, or 3.7; TensorFlow 1.15 does not support Python 3.8.

\n", + "system": "" + }, + { + "instruction": "UsageError: Line magic function `%tensorflow_version` not found", + "input": "", + "output": "

Jupyter notebook comes with a set of magic functions, but %tensorflow_version is not one of them. The magic command

\n\n
%tensorflow_version X.X\n
\n\n

is only available in Google Colab notebooks, not Jupyter notebooks.

\n", + "system": "" + }, + { + "instruction": "TF2.0: Translation model: Error when restoring the saved model: Unresolved object in checkpoint (root).optimizer.iter: attributes", + "input": "", + "output": "

It means you are not using all the checkpointed values you have restored.

\n\n

It happens because you are restoring a model that has training information (such as optimizer variables) but you are only using it for prediction, not training. When predicting, you don't need the saved optimizer values, which is why the program is telling you they were not used.

\n\n

If you were using this restored model for training on new data, this warning would disappear.

\n\n

You could silence these warnings with model.load_weights(...).expect_partial() or tf.train.Checkpoint.restore(...).expect_partial().

\n\n

A better solution would be to only save the variables required for inference when training:

\n\n
saver = tf.train.Saver(tf.model_variables())\n
\n\n

tf.model_variables() is the subset of Variable objects that are used in the model for inference (see tensorflow doc).

\n", + "system": "" + }, + { + "instruction": "The name tf.Session is deprecated. Please use tf.compat.v1.Session instead", + "input": "", + "output": "

To make TensorFlow more \"Pythonic\", by design TF 2.0 does not have tf.Session.

\n\n

TensorFlow 1.X requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls. It then requires users to manually compile the abstract syntax tree by passing a set of output tensors and input tensors to a session.run() call.

\n\n

TensorFlow 2.0 executes eagerly (like Python normally does) and in 2.0, graphs and sessions should feel like implementation details.

\n\n

You could use:

\n\n
import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n
\n\n

However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0.

\n\n

The better solution is:

\n\n\n", + "system": "" + }, + { + "instruction": "Why is TensorFlow's `tf.data` package slowing down my code?", + "input": "", + "output": "

I wanted to test the dataset API, which seems to be really convenient for processing data. I spent a lot of time testing this API on CPU, GPU, and multi-GPU setups, for small and large NNs with different types of data.

\n\n

First, it seems to me that your code is OK, but I need to point out that your NN is just one simple layer.

\n\n

Now, the dataset API is not well suited to your type of NN, but rather to NNs with a lot more complexity. Why? For several reasons that I explain below (found in my quest to understand the dataset API).

\n\n

Firstly, on the one hand the dataset API processes data batch by batch, whereas on the other hand the data can be fully preprocessed. Therefore, if your dataset fits in RAM, you can save time by preprocessing it. Here your data are just too \"simple\". If you want to test what I am saying, try to find a really big dataset to process. Nevertheless, the dataset API can be tuned by prefetching data. You can take a look at this tutorial, which explains really well why it is good to process data with prefetch.
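    The prefetch idea itself is simple: prepare the next batch in the background while the current one is being consumed. A minimal pure-Python sketch of this idea (not the tf.data implementation):

```python
import queue
import threading

def prefetch(batches, buffer_size=2):
    """Yield items from `batches` while a background thread fills a buffer."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for batch in batches:
            q.put(batch)      # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

batches = [[0, 1], [2, 3], [4, 5]]
assert list(prefetch(iter(batches))) == [[0, 1], [2, 3], [4, 5]]
```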

\n\n

Secondly, in my quest of the dataset API for multi-GPU training, I discovered that, as far as I know, the old pre-processing way is faster than the dataset API for small neural networks. You can verify that by creating a simple stackable RNN which takes a sequence as input. You can try different stack sizes (I have tested 1, 2, 10 and 20). You will see that, using the dataset API, on 1 GPU or on 4 GPUs, the time did not differ for small RNN stacks (1, 2 and 5).

\n\n

To summarize, the dataset API is suitable for neural networks whose data can't be pre-processed. Depending on your task, it may be more convenient to pre-process the data, for example if you want to tweak your NN in order to improve it. I agree that the dataset API is really cool for batching and padding, and also convenient for shuffling large amounts of data, but it's not suitable for multi-GPU training.

\n", + "system": "" + }, + { + "instruction": "How to create mask images from COCO dataset?", + "input": "", + "output": "

The complete code wasn't in the answer so I post it below.

\n

Please install pycocotools first.

\n
pip install pycocotools\n
\n

Import the required modules. I'm assuming you're using a jupyter notebook.

\n
from pycocotools.coco import COCO\nimport os\nfrom PIL import Image\nimport numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n
\n

Load the annotations for the COCO dataset. Here, we specify the image with ID 74.

\n
coco = COCO('../datasets/coco/annotations/instances_train2017.json')\nimg_dir = '../datasets/coco/train2017'\nimage_id = 74\n\nimg = coco.imgs[image_id]\n# loading annotations into memory...\n# Done (t=12.70s)\n# creating index...\n# index created!\n
\n

The information of the loaded img is as follows.

\n
img\n# {'license': 2,\n#  'file_name': '000000000074.jpg',\n#  'coco_url': 'http://images.cocodataset.org/train2017/000000000074.jpg',\n#  'height': 426,\n#  'width': 640,\n#  'date_captured': '2013-11-15 03:08:44',\n#  'flickr_url': 'http://farm5.staticflickr.com/4087/5078192399_aaefdb5074_z.jpg',\n#  'id': 74}\n
\n

Display the image as follows.

\n
image = np.array(Image.open(os.path.join(img_dir, img['file_name'])))\nplt.imshow(image, interpolation='nearest')\nplt.show()\n
\n

\"enter

\n

If you want to see the overlay result:

\n
plt.imshow(image)\ncat_ids = coco.getCatIds()\nanns_ids = coco.getAnnIds(imgIds=img['id'], catIds=cat_ids, iscrowd=None)\nanns = coco.loadAnns(anns_ids)\ncoco.showAnns(anns)\n
\n

\"enter

\n

If you just want to see the mask, as Farshid Rayhan replied, do the following:

\n
mask = coco.annToMask(anns[0])\nfor i in range(1, len(anns)):\n    mask += coco.annToMask(anns[i])\n\nplt.imshow(mask)\n
\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Tensorflow: When should I use or not use `feed_dict`?", + "input": "", + "output": "

In a tensorflow model you can define a placeholder such as x = tf.placeholder(tf.float32), then you will use x in your model.

\n\n

For example, I define a simple set of operations as:

\n\n
x = tf.placeholder(tf.float32)\ny = x * 42\n
\n\n

Now when I ask tensorflow to compute y, it's clear that y depends on x.

\n\n
with tf.Session() as sess:\n  sess.run(y)\n
\n\n

This will produce an error because I did not give it a value for x. In this case, because x is a placeholder, if it gets used in a computation you must pass it in via feed_dict. If you don't, it's an error.

\n\n

Let's fix that:

\n\n
with tf.Session() as sess:\n  sess.run(y, feed_dict={x: 2})\n
\n\n

The result this time will be 84. Great. Now let's look at a trivial case where feed_dict is not needed:

\n\n
x = tf.constant(2)\ny = x * 42\n
\n\n

Now there are no placeholders (x is a constant) and so nothing needs to be fed to the model. This works now:

\n\n
with tf.Session() as sess:\n  sess.run(y)\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow - tf.data.Dataset reading large HDF5 files", + "input": "", + "output": "

I stumbled across this question while dealing with a similar issue. I came up with a solution based on using a Python generator, together with the TF dataset construction method from_generator. Because we use a generator, the HDF5 file should be opened for reading only once and kept open as long as there are entries to read. So it will not be opened, read, and then closed for every single call to get the next data element.

\n\n

Generator definition

\n\n

To allow the user to pass in the HDF5 filename as an argument, I generated a class that has a __call__ method since from_generator specifies that the generator has to be callable. This is the generator:

\n\n
import h5py\nimport tensorflow as tf\n\nclass generator:\n    def __init__(self, file):\n        self.file = file\n\n    def __call__(self):\n        with h5py.File(self.file, 'r') as hf:\n            for im in hf[\"train_img\"]:\n                yield im\n
\n\n

By using a generator, the code should pick up from where it left off at each call from the last time it returned a result, instead of running everything from the beginning again. In this case it is on the next iteration of the inner for loop. So this should skip opening the file again for reading, keeping it open as long as there is data to yield. For more on generators, see this excellent Q&A.
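    The resume-where-it-left-off behavior is a property of plain Python generators, as this small stand-alone example shows (the temporary file is only for illustration):

```python
import os
import tempfile

def line_reader(path):
    with open(path) as f:        # opened exactly once
        for line in f:
            yield line.strip()   # execution pauses here between items

with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("a\nb\nc\n")
    path = f.name

reader = line_reader(path)
assert next(reader) == "a"  # the file stays open between these calls
assert next(reader) == "b"
assert list(reader) == ["c"]
os.remove(path)
```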

\n\n

Of course, you will have to replace anything inside the with block to match how your dataset is constructed and what outputs you want to obtain.

\n\n

Usage example

\n\n
ds = tf.data.Dataset.from_generator(\n    generator(hdf5_path), \n    tf.uint8, \n    tf.TensorShape([427,561,3]))\n\nvalue = ds.make_one_shot_iterator().get_next()\n\n# Example on how to read elements\nwhile True:\n    try:\n        data = sess.run(value)\n        print(data.shape)\n    except tf.errors.OutOfRangeError:\n        print('done.')\n        break\n
\n\n

Again, in my case I had stored uint8 images of height 427, width 561, and 3 color channels in my dataset, so you will need to modify these in the above call to match your use case.

\n\n

Handling multiple files

\n\n

I have a proposed solution for handling multiple HDF5 files. The basic idea is to construct a Dataset from the filenames as usual, and then use the interleave method to process many input files concurrently, getting samples from each of them to form a batch, for example.

\n\n

The idea is as follows:

\n\n
ds = tf.data.Dataset.from_tensor_slices(filenames)\n# You might want to shuffle() the filenames here depending on the application\nds = ds.interleave(lambda filename: tf.data.Dataset.from_generator(\n        generator(filename), \n        tf.uint8, \n        tf.TensorShape([427,561,3])),\n       cycle_length, block_length)\n
\n\n

What this does is open cycle_length files concurrently, and produce block_length items from each before moving to the next file - see interleave documentation for details. You can set the values here to match what is appropriate for your application: e.g., do you need to process one file at a time or several concurrently, do you only want to have a single sample at a time from each file, and so on.
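    The cycle_length/block_length semantics can be sketched with plain Python iterators (a simplified hand-rolled version for illustration, not the tf.data implementation):

```python
def interleave(sources, cycle_length, block_length):
    """Round-robin over `cycle_length` open iterators, taking
    `block_length` items from each before moving on."""
    sources = list(sources)
    active = [iter(s) for s in sources[:cycle_length]]
    pending = iter(sources[cycle_length:])
    out = []
    while active:
        for it in list(active):
            for _ in range(block_length):
                try:
                    out.append(next(it))
                except StopIteration:
                    active.remove(it)
                    nxt = next(pending, None)  # refill from remaining sources
                    if nxt is not None:
                        active.append(iter(nxt))
                    break
    return out

files = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
assert interleave(files, cycle_length=2, block_length=1) == \
    ["a1", "b1", "a2", "b2", "c1", "c2"]
```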

\n\n

Edit: for a parallel version, take a look at tf.contrib.data.parallel_interleave!

\n\n

Possible caveats

\n\n

Be aware of the peculiarities of using from_generator if you decide to go with the solution. For Tensorflow 1.6.0, the documentation of from_generator mentions these two notes.

\n\n

It may be challenging to apply this across different environments or with distributed training:

\n\n
\n

NOTE: The current implementation of Dataset.from_generator() uses tf.py_func and inherits the same constraints. In particular, it requires the Dataset- and Iterator-related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.

\n
\n\n

Be careful if the generator depends on external state:

\n\n
\n

NOTE: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().

\n
\n", + "system": "" + }, + { + "instruction": "Uninstalling TensorFlow from Anaconda environment", + "input": "", + "output": "

You can remove a package with the conda remove command. So for TensorFlow this would be conda remove tensorflow.

\n", + "system": "" + }, + { + "instruction": "What is right batch normalization function in Tensorflow?", + "input": "", + "output": "

Just to add to the list, there're several more ways to do batch-norm in tensorflow:

\n\n\n", + "system": "" + }, + { + "instruction": "DQN - Q-Loss not converging", + "input": "", + "output": "

Yes, the loss must converge, because the loss value represents the difference between the expected Q value and the current Q value. Only when the loss value converges does the current value approach the optimal Q value. If it diverges, your approximation is becoming less and less accurate.

\n\n

Maybe you can try adjusting the update frequency of the target network or check the gradient of each update (add gradient clipping). The addition of the target network increases the stability of the Q-learning.

\n\n

DeepMind's 2015 Nature paper states:

\n\n
\n

The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the target yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets yj for the following C updates to Q. This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(st,at) often also increases Q(st+1, a) for all a and hence also increases the target yj, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets yj, making divergence or oscillations much more unlikely.

\n
\n\n

Human-level control through deep reinforcement learning, Mnih et al., 2015
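    The clone-every-C-updates scheme the paper describes can be sketched in plain Python (toy numbers, no deep learning library):

```python
# Toy illustration of "every C updates, clone Q into the target network Q'".
C = 3
q_params = {"w": 0.0}           # online network Q
target_params = dict(q_params)  # frozen clone Q'

history = []
for step in range(1, 10):
    q_params["w"] += 0.1                # pretend gradient update on Q
    if step % C == 0:
        target_params = dict(q_params)  # clone Q -> Q'
    history.append(round(target_params["w"], 1))

# the target lags Q and only changes every C steps
assert history == [0.0, 0.0, 0.3, 0.3, 0.3, 0.6, 0.6, 0.6, 0.9]
```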

\n\n

I've run an experiment for another person who asked a similar question in the CartPole environment, and an update frequency of 100 solves the problem (achieving the maximum of 200 steps).

\n\n

When C (the update frequency) = 2, the plot of the average loss is:\n\"C=2\"

\n\n

C = 10

\n\n

\"C=10\"

\n\n

C = 100

\n\n

\"enter

\n\n

C = 1000

\n\n

\"enter

\n\n

C = 10000

\n\n

\"enter

\n\n

If the divergence of the loss value is caused by a gradient explosion, you can clip the gradient. In DeepMind's 2015 DQN, the authors clipped the gradient by limiting the value to [-1, 1]. In the other case, the author of Prioritized Experience Replay clips the gradient by limiting the norm to 10. Here are the examples:

\n\n

DQN gradient clipping:

\n\n
    optimizer.zero_grad()\n    loss.backward()\n    for param in model.parameters():\n        param.grad.data.clamp_(-1, 1)\n    optimizer.step()\n
\n\n

PER gradient clipping:

\n\n
    optimizer.zero_grad()\n    loss.backward()\n    if self.grad_norm_clipping:\n        torch.nn.utils.clip_grad.clip_grad_norm_(self.model.parameters(), 10)\n    optimizer.step()\n
\n", + "system": "" + }, + { + "instruction": "How to accumulate gradients in tensorflow?", + "input": "", + "output": "

Let's walk through the code proposed in one of the answers you linked to:

\n
## Optimizer definition - nothing different from any classical example\nopt = tf.train.AdamOptimizer()\n\n## Retrieve all trainable variables you defined in your graph\ntvs = tf.trainable_variables()\n## Creation of a list of variables with the same shape as the trainable ones\n# initialized with 0s\naccum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]\nzero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]\n\n## Calls the compute_gradients function of the optimizer to obtain... the list of gradients\ngvs = opt.compute_gradients(rmse, tvs)\n\n## Adds to each element from the list you initialized earlier with zeros its gradient (works because accum_vars and gvs are in the same order)\naccum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]\n\n## Define the training step (part with variable value update)\ntrain_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])\n
\n

This first part basically adds new variables and ops to your graph which will allow you to

\n
    \n
  1. Accumulate the gradient with ops accum_ops in (the list of) variable accum_vars
  2. \n
  3. Update the model weights with ops train_step
  4. \n
\n
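Why accumulating per-minibatch gradients stands in for one large batch can be checked numerically. In this NumPy sketch (illustration only, a linear model with mean-squared error), averaging the accumulated minibatch gradients recovers the full-batch gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = rng.normal(size=3)

def grad(Xb, yb, w):
    # gradient of 0.5 * mean((Xb @ w - yb) ** 2) with respect to w
    return Xb.T @ (Xb @ w - yb) / len(yb)

full = grad(X, y, w)   # gradient on the full batch

n_minibatches = 4
accum = np.zeros_like(w)
for Xb, yb in zip(np.split(X, n_minibatches), np.split(y, n_minibatches)):
    accum += grad(Xb, yb, w)   # the accum_ops step

# applying accum / n_minibatches equals one full-batch gradient step
assert np.allclose(accum / n_minibatches, full)
```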

Then, to use it when training, you have to follow these steps (still from the answer you linked):

\n
## The while loop for training\nwhile ...:\n    # Run the zero_ops to initialize it\n    sess.run(zero_ops)\n    # Accumulate the gradients 'n_minibatches' times in accum_vars using accum_ops\n    for i in xrange(n_minibatches):\n        sess.run(accum_ops, feed_dict=dict(X: Xs[i], y: ys[i]))\n    # Run the train_step ops to update the weights based on your accumulated gradients\n    sess.run(train_step)\n
\n", + "system": "" + }, + { + "instruction": "tensorflow for poets: "The name 'import/input' refers to an Operation not in the graph."", + "input": "", + "output": "

I changed ~/scripts/label_image.py line 77 and it works:

\n\n

from

\n\n
input_layer = \"input\"\n
\n\n

to

\n\n
input_layer = \"Mul\"\n
\n", + "system": "" + }, + { + "instruction": "Use LSTM tutorial code to predict next word in a sentence?", + "input": "", + "output": "
\n

My biggest question is how do I use the produced model to actually generate a next word suggestion, given the first few words of a sentence?

\n

I.e. I'm trying to write a function with the signature: getNextWord(model, sentencePrefix)

\n
\n

    Before I explain my answer, first a remark about your suggestion to # Call static_rnn(cell) once for each word in prefix to initialize state: Keep in mind that static_rnn does not return a value like a numpy array, but a tensor. You can evaluate a tensor to a value when it is run (1) in a session (a session keeps the state of your computational graph, including the values of your model parameters) and (2) with the input that is necessary to calculate the tensor value. Input can be supplied using input readers (the approach in the tutorial), or using placeholders (what I will use below).

\n

Now follows the actual answer:\nThe model in the tutorial was designed to read input data from a file. The answer of @user3080953 already showed how to work with your own text file, but as I understand it you need more control over how the data is fed to the model. To do this you will need to define your own placeholders and feed the data to these placeholders when calling session.run().

\n

In the code below I subclassed PTBModel and made it responsible for explicitly feeding data to the model. I introduced a special PTBInteractiveInput that has an interface similar to PTBInput so you can reuse the functionality in PTBModel. To train your model you still need PTBModel.

\n
class PTBInteractiveInput(object):\n  def __init__(self, config):\n    self.batch_size = 1\n    self.num_steps = config.num_steps\n    self.input_data = tf.placeholder(dtype=tf.int32, shape=[self.batch_size, self.num_steps])\n    self.sequence_len = tf.placeholder(dtype=tf.int32, shape=[])\n    self.targets = tf.placeholder(dtype=tf.int32, shape=[self.batch_size, self.num_steps])\n\nclass InteractivePTBModel(PTBModel):\n\n  def __init__(self, config):\n    input = PTBInteractiveInput(config)\n    PTBModel.__init__(self, is_training=False, config=config, input_=input)\n    output = self.logits[:, self._input.sequence_len - 1, :]\n    self.top_word_id = tf.argmax(output, axis=2)\n\n  def get_next(self, session, prefix):\n    prefix_array, sequence_len = self._preprocess(prefix)\n    feeds = {\n      self._input.sequence_len: sequence_len,\n      self._input.input_data: prefix_array,\n    }\n    fetches = [self.top_word_id]\n    result = session.run(fetches, feeds)\n    self._postprocess(result)\n\n  def _preprocess(self, prefix):\n    num_steps = self._input.num_steps\n    seq_len = len(prefix)\n    if seq_len > num_steps:\n      raise ValueError("Prefix to large for model.")\n    prefix_ids = self._prefix_to_ids(prefix)\n    num_items_to_pad = num_steps - seq_len\n    prefix_ids.extend([0] * num_items_to_pad)\n    prefix_array = np.array([prefix_ids], dtype=np.float32)\n    return prefix_array, seq_len\n\n  def _prefix_to_ids(self, prefix):\n    # should convert your prefix to a list of ids\n    pass\n\n  def _postprocess(self, result):\n    # convert ids back to strings\n    pass\n
\n

In the __init__ function of PTBModel you need to add this line:

\n
self.logits = logits\n
\n
\n

Why use a random (uninitialized, untrained) word-embedding?

\n
\n

    First note that, although the embeddings are random in the beginning, they will be trained with the rest of the network. The embeddings you obtain after training will have similar properties to the embeddings you obtain with word2vec models, e.g., the ability to answer analogy questions with vector operations (king - man + woman = queen, etc.) In tasks where you have a considerable amount of training data, like language modelling (which does not need annotated training data) or neural machine translation, it is more common to train embeddings from scratch.

\n
\n

Why use softmax?

\n
\n

    Softmax is a function that normalizes a vector of similarity scores (the logits) into a probability distribution. You need a probability distribution to train your model with cross-entropy loss and to be able to sample from the model. Note that if you are only interested in the most likely words of a trained model, you don't need the softmax and you can use the logits directly.
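    A minimal NumPy version of the softmax makes both points concrete (a sketch, not the TensorFlow implementation):

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability; the result sums to 1
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)

assert np.isclose(probs.sum(), 1.0)   # a valid probability distribution
# softmax is monotonic, so the most likely word can be read off the
# raw logits directly, without applying softmax at all
assert probs.argmax() == logits.argmax()
```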

\n
\n

Does the hidden layer have to match the dimension of the input (i.e. the dimension of the word2vec embeddings)

\n
\n

    No, in principle it can be any value. However, using a hidden state with a lower dimension than your embedding dimension does not make much sense.

\n
\n

How/Can I bring in a pre-trained word2vec model, instead of that uninitialized one?

\n
\n

    Here is a self-contained example of initializing an embedding with a given numpy array. If you want the embedding to remain fixed/constant during training, set trainable to False.

\n
import tensorflow as tf\nimport numpy as np\nvocab_size = 10000\nsize = 200\ntrainable=True\nembedding_matrix = np.zeros([vocab_size, size]) # replace this with code to load your pretrained embedding\nembedding = tf.get_variable("embedding",\n                            initializer=tf.constant_initializer(embedding_matrix),\n                            shape=[vocab_size, size],\n                            dtype=tf.float32,\n                            trainable=trainable)\n
\n", + "system": "" + }, + { + "instruction": "Printing extra training metrics with Tensorflow Estimator", + "input": "", + "output": "

From what I've read, it is not possible to change it by passing a parameter.\nYou can try to do it by creating a logging hook and passing it to the estimator run.

\n

In the body of model_fn function for your estimator:

\n
logging_hook = tf.train.LoggingTensorHook({"loss" : loss, \n    "accuracy" : accuracy}, every_n_iter=10)\n\n# Rest of the function\n\nreturn tf.estimator.EstimatorSpec(\n    ...params...\n    training_hooks = [logging_hook])\n
\n

EDIT:

\n

To see the output, you must also set the logging verbosity high enough (unless it's your default):\ntf.logging.set_verbosity(tf.logging.INFO)

\n", + "system": "" + }, + { + "instruction": "what does arg_scope actually do?", + "input": "", + "output": "

When defining convolution layers, you may always use the same padding type and the same initializer, and maybe even the same convolution size. For your pooling, maybe you are also always using the same 2x2 pooling size. And so on.

\n\n

arg_scope is a way to avoid providing the same arguments over and over again to the same layer types.

\n\n

Examples from the source documentation:

\n\n
\n

Example of how to use tf.contrib.framework.arg_scope:

\n\n
from third_party.tensorflow.contrib.layers.python import layers\n  arg_scope = tf.contrib.framework.arg_scope\n  with arg_scope([layers.conv2d], padding='SAME',\n                 initializer=layers.variance_scaling_initializer(),\n                 regularizer=layers.l2_regularizer(0.05)):\n    net = layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')\n    net = layers.conv2d(net, 256, [5, 5], scope='conv2')\n
\n \n

The first call to conv2d will behave as follows:

\n\n
   layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID',\n                  initializer=layers.variance_scaling_initializer(),\n                  regularizer=layers.l2_regularizer(0.05), scope='conv1')\n
\n \n

The second call to conv2d will also use the arg_scope's default for padding:

\n\n
  layers.conv2d(inputs, 256, [5, 5], padding='SAME',\n                  initializer=layers.variance_scaling_initializer(),\n                  regularizer=layers.l2_regularizer(0.05), scope='conv2')\n
\n \n

Example of how to reuse an arg_scope:

\n\n
with arg_scope([layers.conv2d], padding='SAME',\n                 initializer=layers.variance_scaling_initializer(),\n                 regularizer=layers.l2_regularizer(0.05)) as sc:\n    net = layers.conv2d(net, 256, [5, 5], scope='conv1')\n    ....\n  with arg_scope(sc):\n    net = layers.conv2d(net, 256, [5, 5], scope='conv2')\n
\n
\n", + "system": "" + }, + { + "instruction": "How to add new embeddings for unknown words in Tensorflow (training & pre-set for testing)", + "input": "", + "output": "

The code example below adapts your embed_tensor function such that words are embedded as follows:

\n\n\n\n
import tensorflow as tf\nimport numpy as np\n\nEMB_DIM = 300\ndef load_pretrained_glove():\n    return [\"a\", \"cat\", \"sat\", \"on\", \"the\", \"mat\"], np.random.rand(6, EMB_DIM)\n\ndef get_train_vocab():\n    return [\"a\", \"dog\", \"sat\", \"on\", \"the\", \"mat\"]\n\ndef embed_tensor(string_tensor, trainable=True):\n  \"\"\"\n  Convert List of strings into list of indices then into 300d vectors\n  \"\"\"\n  # ordered lists of vocab and corresponding (by index) 300d vector\n  pretrained_vocab, pretrained_embs = load_pretrained_glove()\n  train_vocab = get_train_vocab()\n  only_in_train = list(set(train_vocab) - set(pretrained_vocab))\n  vocab = pretrained_vocab + only_in_train\n\n  # Set up tensorflow look up from string word to unique integer\n  vocab_lookup = tf.contrib.lookup.index_table_from_tensor(\n    mapping=tf.constant(vocab),\n    default_value=len(vocab))\n  string_tensor = vocab_lookup.lookup(string_tensor)\n\n  # define the word embedding\n  pretrained_embs = tf.get_variable(\n      name=\"embs_pretrained\",\n      initializer=tf.constant_initializer(np.asarray(pretrained_embs), dtype=tf.float32),\n      shape=pretrained_embs.shape,\n      trainable=trainable)\n  train_embeddings = tf.get_variable(\n      name=\"embs_only_in_train\",\n      shape=[len(only_in_train), EMB_DIM],\n      initializer=tf.random_uniform_initializer(-0.04, 0.04),\n      trainable=trainable)\n  unk_embedding = tf.get_variable(\n      name=\"unk_embedding\",\n      shape=[1, EMB_DIM],\n      initializer=tf.random_uniform_initializer(-0.04, 0.04),\n      trainable=False)\n\n  embeddings = tf.concat([pretrained_embs, train_embeddings, unk_embedding], axis=0)\n\n  return tf.nn.embedding_lookup(embeddings, string_tensor)\n
\n\n

FYI, to have a sensible, non-random representation for words that don't occur in the training data and don't have a pretrained embedding, you could consider mapping words with a low frequency in your training data to an unk token (that is not in your vocabulary) and make the unk_embedding trainable. This way you learn a prototype for words that are unseen in the training data.
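    The low-frequency-to-unk mapping mentioned above could be done as a simple preprocessing step; a small hypothetical sketch (threshold and token name are illustrative choices):

```python
from collections import Counter

def replace_rare(tokens, min_count=2, unk="<unk>"):
    """Replace words occurring fewer than `min_count` times with an unk token."""
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk for t in tokens]

corpus = ["a", "cat", "sat", "a", "dog", "sat"]
assert replace_rare(corpus) == ["a", "<unk>", "sat", "a", "<unk>", "sat"]
```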

\n", + "system": "" + }, + { + "instruction": "TensorFlow Object Detection API Weird Behavior", + "input": "", + "output": "

So I think I figured out what is going on. I did some analysis on the dataset and found out that it is skewed towards objects of category 1.

\n\n

This is the frequency distribution of each category from 1 to 11 (in 0-based indexing):

\n\n
0 10440\n1 304\n2 998\n3 67\n4 412\n5 114\n6 190\n7 311\n8 195\n9 78\n10 75\n
\n\n

I guess the model is hitting a local minimum where just labelling everything as category 1 is good enough.

\n\n

About the problem of not detecting some boxes: I tried training again, but this time I didn't differentiate between brands. Instead, I tried to teach the model what a cigarette box is. It still wasn't detecting all the boxes.

\n\n

Then I decided to crop the input image and provide that as an input, just to see if the results would improve, and they did!

\n\n

It turns out that the dimensions of the input image were much larger than the 600 x 1024 that is accepted by the model. So, it was scaling down these images to 600 x 1024 which meant that the cigarette boxes were losing their details :)

\n\n

So, I decided to test the original model which was trained on all classes on cropped images and it works like a charm :)

\n\n

\"Original

\n\n

This was the output of the model on the original image

\n\n

\"Top

\n\n

This is the output of the model when I crop out the top left quarter and provide it as input.

\n\n

Thanks everyone who helped! And congrats to the TensorFlow team for an amazing job for the API :) Now everybody can train object detection models!

\n", + "system": "" + }, + { + "instruction": "Running trained tensorflow model in C++", + "input": "", + "output": "

Instructions for using a graph in C++ can be found here.

\n\n

Here is some code to use your image as input:

\n\n
tensorflow::Tensor keep_prob = tensorflow::Tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape());\nkeep_prob.scalar<float>()() = 1.0;\n\ntensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,height,width,depth}));\nauto input_tensor_mapped = input_tensor.tensor<float, 4>();\nconst float * source_data = (float*) img.data;  // here img is an opencv image, but if it's just a float array this code is very easy to adapt\n// copying the image data into the corresponding tensor\nfor (int y = 0; y < height; ++y) {\n    const float* source_row = source_data + (y * width * depth);\n    for (int x = 0; x < width; ++x) {\n        const float* source_pixel = source_row + (x * depth);\n        for (int c = 0; c < depth; ++c) {\n            const float* source_value = source_pixel + c;\n            input_tensor_mapped(0, y, x, c) = *source_value;\n        }\n    }\n}\nstd::vector<tensorflow::Tensor> finalOutput;\n\ntensorflow::Status run_status = this->tf_session->Run({{InputName,input_tensor}, \n                                                       {dropoutPlaceHolderName, keep_prob}},\n                                                      {OutputName},\n                                                      {},\n                                                      &finalOutput);\n
\n", + "system": "" + }, + { + "instruction": "can't open tensorboard 0.0.0.0:6006 or localhost:6006", + "input": "", + "output": "

Refer to tensorflow issue#9701.

\n\n

Run tensorboard --logdir=YOUR_LOG_DIR --host=127.0.0.1 in command prompt,\nand type localhost:6006 in chrome, this works for me (Win10, anaconda4.3.16, python3.5.3, tensorflow1.1.0).

\n", + "system": "" + }, + { + "instruction": "TensorFlow Inference", + "input": "", + "output": "

Alright, this took way too much time to figure out; so here is the answer for the rest of the world.

\n\n

Quick Reminder: I needed to persist a model that can be dynamically loaded and inferred against, without any knowledge of the underpinnings or internals of how it works.

\n\n

Step 1: Create a model as a Class and ideally use an interface definition

\n\n
class Vgg3Model:\n\n    NUM_DENSE_NEURONS = 50\n    DENSE_RESHAPE = 32 * (CONSTANTS.IMAGE_SHAPE[0] // 2) * (CONSTANTS.IMAGE_SHAPE[1] // 2)\n\n    def inference(self, images):\n        '''\n        Portion of the compute graph that takes an input and converts it into a Y output\n        '''\n        with tf.variable_scope('Conv1') as scope:\n            C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')\n            C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')\n            P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)\n        with tf.variable_scope('Dense1') as scope:\n            P_1 = tf.reshape(P_1, (-1, self.DENSE_RESHAPE))\n            dim = P_1.get_shape()[1].value\n            D_1 = ld.mlp_layer(P_1, dim, self.NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)\n        with tf.variable_scope('Dense2') as scope:\n            D_2 = ld.mlp_layer(D_1, self.NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)\n        H = tf.nn.softmax(D_2, name='prediction')\n        return H\n\n    def loss(self, logits, labels):\n        '''\n        Adds Loss to all variables\n        '''\n        cross_entr = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)\n        cross_entr = tf.reduce_mean(cross_entr)\n        tf.summary.scalar('cost', cross_entr)\n        tf.add_to_collection('losses', cross_entr)\n        return tf.add_n(tf.get_collection('losses'), name='total_loss')\n
\n\n

Step 2: Train your network with whatever inputs you want; in my case I used Queue Runners and TF Records. Note that this step is done by a different team which iterates, builds, designs and optimizes models, and this can change over time. The output they produce must be able to be pulled from a remote location so we can dynamically load the updated models on devices (reflashing hardware is a pain, especially if it is geographically distributed). In this instance, the team drops the 3 files associated with a graph saver, and also a pickle of the model used for that training session.

\n\n
model = vgg3.Vgg3Model()\n\ndef create_sess_ops():\n    '''\n    Creates and returns operations needed for running\n    a tensorflow training session\n    '''\n    GRAPH = tf.Graph()\n    with GRAPH.as_default():\n        examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,\n                                          batch_size=CONSTANTS.BATCH_SIZE,\n                                          img_shape=CONSTANTS.IMAGE_SHAPE,\n                                          num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)\n        examples = tf.reshape(examples, [-1, CONSTANTS.IMAGE_SHAPE[0],\n                                     CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]], name='infer/input')\n        logits = model.inference(examples)\n        loss = model.loss(logits, labels)\n        OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)\n        gradients = OPTIMIZER.compute_gradients(loss)\n        apply_gradient_op = OPTIMIZER.apply_gradients(gradients)\n        gradients_summary(gradients)\n        summaries_op = tf.summary.merge_all()\n        return [apply_gradient_op, summaries_op, loss, logits], GRAPH\n\ndef main():\n    '''\n    Run and Train CIFAR 10\n    '''\n    print('starting...')\n    ops, GRAPH = create_sess_ops()\n    total_duration = 0.0\n    with tf.Session(graph=GRAPH) as SESSION:\n        COORDINATOR = tf.train.Coordinator()\n        THREADS = tf.train.start_queue_runners(SESSION, COORDINATOR)\n        SESSION.run(tf.global_variables_initializer())\n        SUMMARY_WRITER = tf.summary.FileWriter('Tensorboard/' + CONSTANTS.MODEL_NAME, graph=GRAPH)\n        GRAPH_SAVER = tf.train.Saver()\n\n        for EPOCH in range(CONSTANTS.EPOCHS):\n            duration = 0\n            error = 0.0\n            start_time = time.time()\n            for batch in range(CONSTANTS.MINI_BATCHES):\n                _, summaries, cost_val, prediction = SESSION.run(ops)\n                error += cost_val\n            duration += time.time() - start_time\n            total_duration += duration\n            SUMMARY_WRITER.add_summary(summaries, EPOCH)\n            print('Epoch %d: loss = %.2f (%.3f sec)' % (EPOCH, error, duration))\n            if EPOCH == CONSTANTS.EPOCHS - 1 or error &lt; 0.005:\n                print(\n                'Done training for %d epochs. (%.3f sec)' % (EPOCH, total_duration)\n            )\n                break\n        GRAPH_SAVER.save(SESSION, 'models/' + CONSTANTS.MODEL_NAME + '.model')\n        with open('models/' + CONSTANTS.MODEL_NAME + '.pkl', 'wb') as output:\n            pickle.dump(model, output)\n        COORDINATOR.request_stop()\n        COORDINATOR.join(THREADS)\n</code></pre>
\n\n

Step 3: Run some Inference. Load your pickled model; create a new graph by piping in the new placeholder to the logits; and then call session restore. DO NOT RESTORE THE WHOLE GRAPH; JUST THE VARIABLES.

\n\n
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'\nimgs_bsdir = 'C:/data/cifar_10/train/'\n\nimages = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))\nwith open('models/vgg3.pkl', 'rb') as model_in:\n    model = pickle.load(model_in)\nlogits = model.inference(images)\n\ndef run_inference():\n    '''Runs inference against a loaded model'''\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        new_saver = tf.train.Saver()\n        new_saver.restore(sess, MODEL_PATH)\n        print(\"Starting...\")\n        for i in range(20, 30):\n            print(str(i) + '.png')\n            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0\n            img = img.reshape(1, 32, 32, 3)\n            pred = sess.run(logits, feed_dict={images : img})\n            max_node = np.argmax(pred)\n            print('predicted label: ' + str(max_node))\n        print('done')\n\nrun_inference()\n</code></pre>
\n\n

There are definitely ways to improve on this by using interfaces and maybe packaging everything up better; but this works and sets the stage for how we will be moving forward.

\n\n

FINAL NOTE When we finally pushed this to production, we ended up having to ship the <code>mymodel_model.py</code> file down with everything to build up the graph. So we now enforce a naming convention for all models, and there is also a coding standard for production model runs so we can do this properly.

\n\n

Good Luck!

\n", + "system": "" + }, + { + "instruction": "Keras - class_weight vs sample_weights in the fit_generator", + "input": "", + "output": "
\n

My understanding is that the class_weight regards the weights of all\n classes for the entire dataset whereas the sample_weights regards the\n weights of all classes for each individual chunk created by the\n generator. Is that correct? If not, can someone elaborate on the\n matter?

\n
\n\n

class_weight affects the relative weight of each class in the calculation of the objective function. sample_weights, as the name suggests, allows further control of the relative weight of samples that belong to the same class.

\n\n
\n

Is it necessary to give both the class_weight to the fit_generator and\n then the sample_weights as an output for each chunk? If yes, then why?\n If not then which one is better to give?

\n
\n\n

It depends on your application. Class weights are useful when training on highly skewed data sets; for example, a classifier to detect fraudulent transactions. Sample weights are useful when you don't have equal confidence in the samples in your batch. A common example is performing regression on measurements with variable uncertainty.

\n\n
\n

If I should give the sample_weights for each chunk, how do I map the\n weights if some of the classes are missing from a specific chunk? Let\n me give an example. In my overall dataset, I have 7 possible classes\n (labels). Because these classes are highly imbalanced, when I create\n smaller chunks of data as an output from the fit_generator, some of\n the classes are missing from the specific chunk. How should I create\n the sample_weights for these chunks?

\n
\n\n

This is not an issue. sample_weights is defined on a per-sample basis and is independent from the class. For this reason, the documentation states that (inputs, targets, sample_weights) should be the same length.

\n\n

The function <code>_weighted_masked_objective</code> in <code>engine/training.py</code> has an example of how <code>sample_weights</code> are applied.

\n", + "system": "" + }, + { + "instruction": "Installation of TensorFlow on windows 7 - 'pip3' is not recognized as an internal or external command,", + "input": "", + "output": "

Run the following

\n\n
python -m pip install --upgrade tensorflow\n
\n\n

Assuming python is working, TensorFlow should get installed (at least the \"Validate the installation\" step is green).

\n", + "system": "" + }, + { + "instruction": "Tensorflow OOM on GPU", + "input": "", + "output": "

I resolved this issue by reducing the batch size to <code>batch_size=52</code>.\nThe only way to reduce memory use here is to reduce the batch size.

\n\n
\n

The feasible <code>batch_size</code> depends on your GPU graphics card: the size of its VRAM, cache memory, etc.

\n
\n\n

Please also refer to this other Stack Overflow link.

\n", + "system": "" + }, + { + "instruction": "How to extract bias weights in Keras sequential model?", + "input": "", + "output": "

get_weights() for a Dense layer returns a list of two elements, the first element contains the weights, and the second element contains the biases. So you can simply do:

\n\n
weights = model.layers[0].get_weights()[0]\nbiases = model.layers[0].get_weights()[1]\n
\n\n

Note that weights and biases are already numpy arrays.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - Pad unknown size tensor to a specific size?", + "input": "", + "output": "

Yes. There is. Provided you do not need to change the rank of the tensor, it's very simple.

\n

tf.pad() accepts regular python lists with tensors. The format of the padding is a list of pairs of how much to pad on each side of that dimension.

\n

e.g.

\n
t = tf.constant([[1, 2], [3, 4]])\npaddings = [[0, 0], [0, 4-tf.shape(t)[0]]]\nout = tf.pad(t, paddings, 'CONSTANT', constant_values=-1)\nsess.run(out)\n# gives: \n# array([[ 1,  2, -1, -1],\n#       [ 3,  4, -1, -1]], dtype=int32)\n
\n
\n

If you want to generalise this to a useful function, you could do something like:

\n
def pad_up_to(t, max_in_dims, constant_values):\n    diff = max_in_dims - tf.shape(t)\n    paddings = tf.pad(diff[:, None], [[0, 0], [1, 0]])\n    return tf.pad(t, paddings, 'CONSTANT', constant_values=constant_values)\n# (note: see edits for the solution referred to by other answers on this question)\n
\n

where max_in_dims is essentially the desired shape of the output. Note: this function will fail if you provide a shape that is strictly smaller than t in any dimension.

\n

You can use it like:

\n
t = tf.constant([[1, 2], [3, 4]]) # shape = [2, 2]\nt_padded = pad_up_to(t, [2, 4], -1) # shape = [2, 4], padded with -1s\n
\n

or

\n
t = tf.placeholder(tf.float32, [None, None]) # shape = [?, ?]\nt_padded = pad_up_to(t, [5,5], -1) # shape = [5, 5], padded with -1s\nt_np = np.random.uniform(0, 1, [3,4]) # shape = [3,4], no padding\nt_padded_out = sess.run(t_padded, {t: t_np})\nt_np2 = np.random.uniform(0, 1, [2,1]) # shape = [2,1], no padding\nt_padded_out2 = sess.run(t_padded, {t: t_np2})\n
\n

Although the dimension sizes are calculated dynamically, the number of dimensions is not, so make sure that max_in_dims has the same number of elements as t.shape.

\n", + "system": "" + }, + { + "instruction": "Distributed tensorflow: the difference between In-graph replication and Between-graph replication", + "input": "", + "output": "

First of all, for some historical context, \"in-graph replication\" is the first approach that we tried in TensorFlow, and it did not achieve the performance that many users required, so the more complicated \"between-graph\" approach is the current recommended way to perform distributed training. Higher-level libraries such as tf.learn use the \"between-graph\" approach for distributed training.

\n\n

To answer your specific questions:

\n\n
    \n
  1. \n

    Does this mean there are multiple tf.Graphs in the between-graph\n replication approach? If yes, where are the corresponding codes in the provided examples?

    \n
    \n\n

    Yes. The typical between-graph replication setup will use a separate TensorFlow process for each worker replica, and each of these will build a separate <code>tf.Graph</code> for the model. Usually each process uses the global default graph (accessible through <code>tf.get_default_graph()</code>) and it is not created explicitly.

    \n\n

    (In principle, you could use a single TensorFlow process with the same tf.Graph and multiple tf.Session objects that share the same underlying graph, as long as you configured the tf.ConfigProto.device_filters option for each session differently, but this is an uncommon setup.)

  2. \n
  3. \n

    While there is already a between-graph replication example in above link, could anyone provide an in-graph replication implementation (pseudocode is fine) and highlight its main differences from between-graph replication?

    \n
    \n\n

    For historical reasons, there are not many examples of in-graph replication (Yaroslav's gist is one exception). A program using in-graph replication will typically include a loop that creates the same graph structure for each worker (e.g. the loop on line 74 of the gist), and use variable sharing between the workers.

    \n\n

    The one place where in-graph replication persists is for using multiple devices in a single process (e.g. multiple GPUs). The CIFAR-10 example model for multiple GPUs is an example of this pattern (see the loop over GPU devices here).

  4. \n
\n\n

(In my opinion, the inconsistency between how multiple workers and multiple devices in a single worker are treated is unfortunate. In-graph replication is simpler to understand than between-graph replication, because it doesn't rely on implicit sharing between the replicas. Higher-level libraries, such as <code>tf.learn</code> and TF-Slim, hide some of these issues, and offer hope that we can provide a better replication scheme in the future.)

\n\n
    \n
  1. \n

    Why do we say each client builds a similar graph, but not the same graph?

    \n
    \n\n

    Because they aren't required to be identical (and there is no integrity check that enforces this). In particular, each worker might create a graph with different explicit device assignments (\"/job:worker/task:0\", \"/job:worker/task:1\", etc.). The chief worker might create additional operations that are not created on (or used by) the non-chief workers. However, in most cases, the graphs are logically (i.e. modulo device assignments) the same.

    \n\n
    \n

    Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers?

    \n
    \n\n

    Typically, each worker has a separate graph that contains a single copy of the compute-intensive part of the model. The graph for worker i does not contain the nodes for worker j (assuming i ≠ j). (An exception would be the case where you're using between-graph replication for distributed training, and in-graph replication for using multiple GPUs in each worker. In that case, the graph for a worker would typically contain N copies of the compute-intensive part of the graph, where N is the number of GPUs in that worker.)

  2. \n
  3. \n

    Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs?

    \n
    \n\n

    The example code only covers training on multiple machines, and says nothing about how to train on multiple GPUs in each machine. However, the techniques compose easily. In this part of the example:

    \n\n
    # Build model...\nloss = ...\n
    \n\n

    ...you could add a loop over the GPUs in the local machine, to achieve distributed training across multiple workers, each with multiple GPUs.

  4. \n
\n", + "system": "" + }, + { + "instruction": "TensorFlow: how to log GPU memory (VRAM) utilization?", + "input": "", + "output": "

Update: you can use TensorFlow ops to query the allocator:

\n\n
# maximum across all sessions and .run calls so far\nsess.run(tf.contrib.memory_stats.MaxBytesInUse())\n# current usage\nsess.run(tf.contrib.memory_stats.BytesInUse())\n
\n\n

You can also get detailed information about a <code>session.run</code> call, including all memory allocations made during the call, by looking at <code>RunMetadata</code>, i.e. something like this:

\n\n
run_metadata = tf.RunMetadata()\nsess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata)\n
\n\n

Here's an end-to-end example -- take column vector, row vector and add them to get a matrix of additions:

\n\n
import tensorflow as tf\n\nno_opt = tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0,\n                             do_common_subexpression_elimination=False,\n                             do_function_inlining=False,\n                             do_constant_folding=False)\nconfig = tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=no_opt),\n                        log_device_placement=True, allow_soft_placement=False,\n                        device_count={\"CPU\": 3},\n                        inter_op_parallelism_threads=3,\n                        intra_op_parallelism_threads=1)\nsess = tf.Session(config=config)\n\nwith tf.device(\"cpu:0\"):\n    a = tf.ones((13, 1))\nwith tf.device(\"cpu:1\"):\n    b = tf.ones((1, 13))\nwith tf.device(\"cpu:2\"):\n    c = a+b\n\nsess = tf.Session(config=config)\nrun_metadata = tf.RunMetadata()\nsess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata)\nwith open(\"/tmp/run2.txt\", \"w\") as out:\n  out.write(str(run_metadata))\n
\n\n

If you open <code>run2.txt</code> you'll see messages like this:

\n\n
  node_name: \"ones\"\n\n      allocation_description {\n        requested_bytes: 52\n        allocator_name: \"cpu\"\n        ptr: 4322108320\n      }\n  ....\n\n  node_name: \"ones_1\"\n\n      allocation_description {\n        requested_bytes: 52\n        allocator_name: \"cpu\"\n        ptr: 4322092992\n      }\n  ...\n  node_name: \"add\"\n      allocation_description {\n        requested_bytes: 676\n        allocator_name: \"cpu\"\n        ptr: 4492163840\n
\n\n

So here you can see that a and b allocated 52 bytes each (13*4), and the result allocated 676 bytes.

\n", + "system": "" + }, + { + "instruction": "tf.SequenceExample with multidimensional arrays", + "input": "", + "output": "

I had the same problem. I think that it is entirely solvable, but you have to decide on the output format, and then figure out how you're going to use it.

\n\n

First what is your error?

\n\n

The error message is telling you that what you are trying to read doesn't fit into the feature size that you specified. So where did you specify it? Right here:

\n\n
sequence_features = {\n    \"input_characters\": tf.FixedLenSequenceFeature([], dtype=tf.int64),\n    \"output_characters\": tf.FixedLenSequenceFeature([], dtype=tf.int64)\n}\n
\n\n

This says \"my input_characters is a sequence of single values\", but this is not true; what you have is a sequence of sequences of single values and hence an error.

\n\n

Second what can you do?

\n\n

If you instead use:

\n\n
a = [[1,2,3], [2,3,1], [3,2,1]] \nsequence_features = {\n    \"input_characters\": tf.FixedLenSequenceFeature([3], dtype=tf.int64),\n    \"output_characters\": tf.FixedLenSequenceFeature([3], dtype=tf.int64)\n}\n
\n\n

You will not have an error with your code because you have specified that each element of the top level sequence is 3 elements long.

\n\n

Alternatively, if you do not have fixed length sequences, then you're going to have to use a different type of feature.

\n\n
sequence_features = {\n    \"input_characters\": tf.VarLenFeature(tf.int64),\n    \"output_characters\": tf.VarLenFeature(tf.int64)\n}\n
\n\n

The VarLenFeature tells it that the length is unknown before reading. Unfortunately this means that your input_characters can no longer be read as a dense vector in one step. Instead, it will be a SparseTensor by default. You can turn this into a dense tensor with tf.sparse_tensor_to_dense eg:

\n\n
input_densified = tf.sparse_tensor_to_dense(sequence_parsed['input_characters'])\n
\n\n

As mentioned in the article that you've been looking at, if your data does not always have the same length you will have to have a \"not_really_a_word\" word in your vocabulary, which you use as the default index. e.g. let's say you have index 0 mapping to the \"not_really_a_word\" word, then using your

\n\n
a = [[1,2,3],  [2,3],  [3,2,1]]\n
\n\n

python list will end up being a

\n\n
array((1,2,3),  (2,3,0),  (3,2,1))\n
\n\n

tensor.

\n\n

Be warned; I'm not certain that back-propagation \"just works\" for SparseTensors, like it does for dense tensors. The wildml article talks about padding sequences with 0s and masking the loss for the \"not_actually_a_word\" word (see: \"SIDE NOTE: BE CAREFUL WITH 0\u2019S IN YOUR VOCABULARY/CLASSES\" in their article). This seems to suggest that the first method will be easier to implement.

\n\n

Note that this is different to the case described here where each example is a sequence of sequences. To my understanding, the reason this kind of method is not well supported is because it is an abuse of the case that this is meant to support; loading fixed-size embeddings directly.

\n\n
\n\n

I will assume that the very next thing you want to do is to turn those numbers into word embeddings. You can turn a list of indices into a list of embeddings with tf.nn.embedding_lookup

\n", + "system": "" + }, + { + "instruction": "TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape'", + "input": "", + "output": "

UPDATE: Since TensorFlow 1.0, tf.Tensor now has a tf.Tensor.shape property, which returns the same value as tf.Tensor.get_shape().

\n\n
\n\n

Indeed, in versions prior to TensorFlow 1.0 tf.Tensor doesn't have a .shape property. You should use the Tensor.get_shape() method instead:

\n\n
train_data = tf.reshape(train_data, [400, 1])\nprint \"train_data.shape: \" + str(train_data.get_shape())\n
\n\n

Note that in general you might not be able to get the actual shape of the result of a TensorFlow operation. In some cases, the shape will be a computed value that depends on running the computation to find its value; and it may even vary from one run to the next (e.g. the shape of tf.unique()). In that case, the result of get_shape() for some dimensions may be None (or \"?\").

\n", + "system": "" + }, + { + "instruction": "Limit Tensorflow CPU and Memory usage", + "input": "", + "output": "

This will create a session that runs one op at a time, and only one thread per op

\n\n
sess = tf.Session(config=\n    tf.ConfigProto(inter_op_parallelism_threads=1,\n                   intra_op_parallelism_threads=1))\n
\n\n

I'm not sure about limiting memory; it seems to be allocated on demand. I've had TensorFlow freeze my machine when my network wanted 100 GB of RAM, so my solution was to make networks that need less RAM.

\n", + "system": "" + }, + { + "instruction": "speed benchmark for testing tensorflow install", + "input": "", + "output": "

Try tensorflow/tensorflow/models/image/mnist/convolutional.py, that'll print per-step timing.

\n\n

On a Tesla K40c that should take about 16 ms per step, versus about 120 ms per step CPU-only on my 3-year-old machine.

\n\n
\n\n

Edit: This moved to the models repositories: https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py.

\n\n

The convolutional.py file is now at models/tutorials/image/mnist/convolutional.py

\n", + "system": "" + }, + { + "instruction": "Dynamic size for tf.zeros() (for use with placeholders with None dimensions)", + "input": "", + "output": "

The recommended way to make a zero tensor with the same shape as another tensor is to use the tf.zeros_like() op:

\n\n
x = tf.placeholder(tf.float32, shape=[None, 4])\ny = tf.zeros_like(x)\n
\n\n

The resulting tensor y appears to have the shape [None, None] according to Tensor.get_shape(), but at runtime it will expand to the same shape as x:

\n\n
print y.get_shape()\n# ==> TensorShape([Dimension(None), Dimension(None)])\n\nsess = tf.Session()\ny_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)})\n\nprint y_result.shape\n# ==> (4, 4)\n
\n\n

The [None, None] static shape is returned because shape inference hasn't been specialized for tf.zeros_like(). I've filed a GitHub issue for that and it should be fixed soon.

\n\n
\n\n

EDIT: In your comment, you asked how to deal with the case where the zero tensor had a shape based on, but different from the original tensor. This is also possible, using tf.shape() and tf.stack() to build the dimensions, and tf.fill() to produce the zero tensor:

\n\n
x = tf.placeholder(tf.float32, shape=[None, 4])\n\n# Use tf.shape() to get the runtime size of `x` in the 0th dimension.\nzeros_dims = tf.stack([tf.shape(x)[0], 7])\n\ny = tf.fill(zeros_dims, 0.0)\n\nsess = tf.Session()\ny_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)})\nprint y_result.shape\n# ==> (4, 7)\n
\n", + "system": "" + }, + { + "instruction": "Extract features using pre-trained (Tensorflow) CNN", + "input": "", + "output": "

The TensorFlow team recently released a deep CNN trained on the ImageNet dataset. You can download the script that fetches the data (including the model graph and the trained weights) from here. The associated Image Recognition tutorial has more details about the model.

\n\n

While the current model isn't specifically packaged to be used in a subsequent training step, you could explore modifying the script to reuse parts of the model and the trained weights in your own network.

\n", + "system": "" + }, + { + "instruction": "TensorFlow random_shuffle_queue is closed and has insufficient elements", + "input": "", + "output": "

I had a similar problem. Digging around the web, it turned out that if you use some num_epochs argument, you have to initialize all the local variables, so your code should end up looking like:

\n\n
with tf.Session() as sess:\n    sess.run(tf.local_variables_initializer())\n    sess.run(tf.global_variables_initializer())\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(coord=coord)\n\n    # do your stuff here\n\n    coord.request_stop()\n    coord.join(threads)\n
\n\n

If you post some more code, maybe I could take a deeper look into it. In the meantime, HTH.

\n", + "system": "" + }, + { + "instruction": "How can I change the shape of a variable in TensorFlow?", + "input": "", + "output": "

The tf.Variable class is the recommended way to create variables, but it restricts your ability to change the shape of the variable once it has been created.

\n\n

If you need to change the shape of a variable, you can do the following (e.g. for a 32-bit floating point tensor):

\n\n
var = tf.Variable(tf.placeholder(tf.float32))\n# ...\nnew_value = ...  # Tensor or numpy array.\nchange_shape_op = tf.assign(var, new_value, validate_shape=False)\n# ...\nsess.run(change_shape_op)  # Changes the shape of `var` to new_value's shape.\n
\n\n

Note that this feature is not in the documented public API, so it is subject to change. If you do find yourself needing to use this feature, let us know, and we can investigate a way to support it moving forward.

\n", + "system": "" + }, + { + "instruction": "The minimum required Cuda capability is 3.5", + "input": "", + "output": "

There is a section in the official installation page that guides you through enabling Cuda compute capability 3.0, but you need to build Tensorflow from source.

\n\n
$ TF_UNOFFICIAL_SETTING=1 ./configure\n\n# Same as the official settings above\n\nWARNING: You are configuring unofficial settings in TensorFlow. Because some\nexternal libraries are not backward compatible, these settings are largely\nuntested and unsupported.\n\nPlease specify a list of comma-separated Cuda compute capabilities you want to\nbuild with. You can find the compute capability of your device at:\nhttps://developer.nvidia.com/cuda-gpus.\nPlease note that each additional compute capability significantly increases\nyour build time and binary size. [Default is: \"3.5,5.2\"]: 3.0\n\nSetting up Cuda include\nSetting up Cuda lib64\nSetting up Cuda bin\nSetting up Cuda nvvm\nConfiguration finished\n
\n", + "system": "" + }, + { + "instruction": "How to install Python 3.8 along with Python 3.9 in Arch Linux?", + "input": "", + "output": "

Go for package python38 in AUR, if you have an AUR helper like yay just use yay -S python38. Otherwise, just download the PKGBUILD and install manually with makepkg.

\n

You can also update python with pacman -Syu (which is now python3.9). Then the two shall live together, inside /usr/bin/python3.x.

\n

Use virtual environment to manage them if you like, virtualenv --python=/usr/bin/python3.x yourenvname.

\n", + "system": "" + }, + { + "instruction": "Use GPU with opencv-python", + "input": "", + "output": "

The problem here is that the version of <code>opencv</code> distributed with your system (Windows in this case) was not compiled with <code>cuda</code> support. Therefore, you cannot use any cuda-related function with this build.

\n

If you want to have an opencv with cuda support, you will have to either compile it yourself (which may be tedious on windows) or find a prebuilt one somewhere. In case you want to go for the 1st solution, here is a link that may help you with the process: https://programming.vip/docs/compile-opencv-with-cuda-support-on-windows-10.html. Keep in mind that this will require you to install a bunch of SDK in the process.

\n", + "system": "" + }, + { + "instruction": "using cuDNN kernel for LSTM", + "input": "", + "output": "

I ran into the same problem and fixed it by manually setting the options to use the cuDNN-compatible implementation as specified here.

\n

"Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation."

\n

The requirements to use the cuDNN implementation are:

\n
    \n
  1. activation == tanh
  2. \n
  3. recurrent_activation == sigmoid
  4. \n
  5. recurrent_dropout == 0
  6. \n
  7. unroll is False
  8. \n
  9. use_bias is True
  10. \n
  11. Inputs, if masking is used, are strictly right-padded.
  12. \n
  13. Eager execution is enabled in the outermost context.
  14. \n
\n

In particular, I had to specify recurrent_activation == sigmoid. The version of Keras/TF I installed had a default of recurrent_activation == hard_sigmoid.

\n", + "system": "" + }, + { + "instruction": "Deploy python app to Heroku "Slug Size too large"", + "input": "", + "output": "

I have already answered this here.

\n

Turns out the Tensorflow 2.0 module is very large (more than 500MB, the limit for Heroku) because of its GPU support. Since Heroku doesn't support GPU, it doesn't make sense to install the module with GPU support.

\n

Solution:

\n

Simply replace tensorflow with tensorflow-cpu in your requirements.

\n
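For reference, the change in requirements.txt is a one-liner (the version pin below is just illustrative):

```
# before: tensorflow==2.0.0   (ships GPU support, pushes the slug over the 500MB limit)
tensorflow-cpu==2.0.0         # CPU-only build, much smaller
```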

This worked for me, hope it works for you too!

\n", + "system": "" + }, + { + "instruction": "from_logits=True and from_logits=False get different training result for tf.losses.CategoricalCrossentropy for UNet", + "input": "", + "output": "

Pushing the \"softmax\" activation into the cross-entropy loss layer significantly simplifies the loss computation and makes it more numerically stable.
\nIt might be the case that in your example the numerical issues are significant enough to render the training process ineffective for the from_logits=False option.

\n\n
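A framework-free sketch of the numerical issue (plain Python; the helper names here are mine, not TensorFlow's): computing the softmax probabilities first and taking the log afterwards can overflow, while the fused log-sum-exp form that from_logits=True enables stays finite for any logit magnitude:

```python
import math

def xent_from_probs(logits, label):
    # naive route: softmax first, then log -- overflows for large logits
    exps = [math.exp(z) for z in logits]
    p = exps[label] / sum(exps)
    return -math.log(p)

def xent_from_logits(logits, label):
    # fused route: log-sum-exp trick, stable for any logit magnitude
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return lse - logits[label]

moderate = [2.0, 1.0, 0.1]
print(abs(xent_from_probs(moderate, 0) - xent_from_logits(moderate, 0)) < 1e-9)  # True

extreme = [1000.0, 0.0, 0.0]
print(xent_from_logits(extreme, 0))   # 0.0, still finite
try:
    xent_from_probs(extreme, 0)       # math.exp(1000.0) overflows
except OverflowError:
    print("overflow in the naive version")
```

Both routes agree for moderate logits; only the fused one survives extreme ones.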

You can find a derivation of the cross entropy loss (a special case of \"info gain\" loss) in this post. This derivation illustrates the numerical issues that are averted when combining softmax with cross entropy loss.

\n", + "system": "" + }, + { + "instruction": "How to fix \u2018RuntimeError: The Session graph is empty. Add operations to the graph before calling run().\u201d", + "input": "", + "output": "

TF 2.0 supports eager execution which means you don't have to explicitly create a session and run the code in it. So the simplest solution would be:

\n\n
import tensorflow as tf\nprint(tf.__version__)\n\n# Build a dataflow graph.\nc = tf.constant([[1.0, 2.0], [3.0, 4.0]])\nd = tf.constant([[1.0, 1.0], [0.0, 1.0]])\ne = tf.matmul(c, d)\n\nprint(e)\n
\n\n

which outputs

\n\n
2.0.0-beta1\ntf.Tensor(\n[[1. 3.]\n [3. 7.]], shape=(2, 2), dtype=float32)\n
\n\n

But you can use the session if you want:

\n\n
import tensorflow as tf\nprint(tf.__version__)\n\n# Construct a `Session` to execute the graph.\nwith tf.compat.v1.Session() as sess:\n\n  # Build a dataflow graph.\n  c = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n  d = tf.constant([[1.0, 1.0], [0.0, 1.0]])\n  e = tf.matmul(c, d)\n\n  # Execute the graph and store the value that `e` represents in `result`.\n  result = sess.run(e)\n  print(result)\n
\n\n

which gives

\n\n
2.0.0-beta1\n[[1. 3.]\n [3. 7.]]\n
\n", + "system": "" + }, + { + "instruction": "What has to be inside tf.distribute.Strategy.scope()?", + "input": "", + "output": "

According to my experiments, the only thing that needs to be declared inside the scope is model creation. If you use Keras .fit() instead of custom training, then model.compile() has to be inside as well.

\n\n

You can do something like this:

\n\n
def create_model():\n    \"\"\" This can be outside of the scope\n    \"\"\"\n    ...\n    return model\n\nwith strategy.scope():\n    model = create_model()\n
\n\n

If you use tf.train.Checkpoint, then make sure both its instantiation and the call to checkpoint.restore() are inside the scope.

\n", + "system": "" + }, + { + "instruction": "expected ndim=3, found ndim=2", + "input": "", + "output": "

The LSTM layer expects inputs to have a shape of (batch_size, timesteps, input_dim). In Keras you need to pass (timesteps, input_dim) for the input_shape argument, but you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra dimension to input_shape for the time dimension; e.g., adding an extra dimension with value 1 is a simple solution. For this you have to reshape the input dataset (x_train) and the targets accordingly. This might be problematic, though, because the time resolution is 1 and you are feeding sequences of length one. With length-one sequences as input, using an LSTM does not seem like the right option.

\n
x_train = x_train.reshape(-1, 1, 9)\nx_test  = x_test.reshape(-1, 1, 9)\ny_train = y_train.reshape(-1, 1, 5)\ny_test = y_test.reshape(-1, 1, 5)\n\nmodel = Sequential()\nmodel.add(LSTM(100, input_shape=(1, 9), return_sequences=True))\nmodel.add(LSTM(5, return_sequences=True))\nmodel.compile(loss="mean_absolute_error", optimizer="adam", metrics=['accuracy'])\n\nhistory = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))\n
\n", + "system": "" + }, + { + "instruction": "Is there a way to stack two tensorflow datasets?", + "input": "", + "output": "

The tf.data.Dataset.concatenate() method is the closest analog of tf.stack() when working with datasets. If you have two datasets with the same structure (i.e. same types for each component, but possibly different shapes):

\n
dataset_1 = tf.data.Dataset.range(10, 20)\ndataset_2 = tf.data.Dataset.range(60, 70)\n
\n

then you can concatenate them as follows:

\n
combined_dataset = dataset_1.concatenate(dataset_2)\n
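The semantics are the same as chaining two ordinary Python iterables, which can help when sketching a pipeline without TensorFlow at hand (an analogy only, not the tf.data implementation):

```python
from itertools import chain

# two "datasets" with the same element structure
dataset_1 = range(10, 20)
dataset_2 = range(60, 70)

# concatenation yields every element of the first, then every element of the second
combined = list(chain(dataset_1, dataset_2))
print(combined[:3], combined[-3:])  # [10, 11, 12] [67, 68, 69]
print(len(combined))                # 20
```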
\n", + "system": "" + }, + { + "instruction": "How to install TensorFlow-gpu with cuda8.0?", + "input": "", + "output": "

You'll need to install the version 1.4.1 for CUDA-8 as

\n\n
pip install tensorflow-gpu==1.4.1\n
\n\n

The latest (version 1.5) is for CUDA-9

\n", + "system": "" + }, + { + "instruction": "Why training speed does not scale with the batch size?", + "input": "", + "output": "

It's often wrongly claimed that batch learning is as fast as or faster than on-line training. In fact, batch learning changes the weights only once, after the complete set of data (the batch) has been presented to the network. Therefore, the weight update frequency is rather low. This explains why the processing speed in your measurements behaves the way you observed.

\n\n

To get a further understanding for the training techniques, have a look at the 2003 paper The general inefficiency of batch training for gradient descent learning. It deals with the comparison of batch and on-line learning.

\n\n

Edit:

\n\n

Regarding your comment:

\n\n

I don't think any model or data parallelization happens on a single GPU. The GPU parallelizes the vector and matrix operations that are involved in the training algorithm, but the batch learning algorithm is still computed as follows:

\n\n
loop maxEpochs times\n  for each training item\n    compute weights and bias deltas for curr item\n    accumulate the deltas\n  end for\n  adjust weights and bias deltas using accumulated deltas\nend loop\n
\n\n
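The same schedule can be sketched framework-free in plain Python (a toy 1-D linear model with made-up data); note that both schedules compute the same number of per-item gradients and differ only in how often the weight is adjusted:

```python
def grad(w, x, y):
    # gradient of the squared error (w*x - y)**2 with respect to w
    return 2 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
lr = 0.01

# batch learning: accumulate the per-item deltas, adjust the weight once per pass
w_batch, updates_batch = 0.0, 0
accumulated = sum(grad(w_batch, x, y) for x, y in data)
w_batch -= lr * accumulated / len(data)
updates_batch += 1

# on-line learning: adjust the weight after every single item
w_online, updates_online = 0.0, 0
for x, y in data:
    w_online -= lr * grad(w_online, x, y)
    updates_online += 1

# both schedules computed len(data) gradients, but the update counts differ
print(updates_batch, updates_online)  # 1 3
```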

As you can see, although the weight adjustment is applied only once for the whole batch, the weight and bias deltas still have to be computed for every element in the batch. Therefore, there is IMHO no large performance advantage of the batch learning algorithm over the on-line learning algorithm.

\n", + "system": "" + }, + { + "instruction": "Early stopping with tf.estimator, how?", + "input": "", + "output": "

Good news! tf.estimator now has early stopping support on master and it looks like it will be in 1.10.

\n\n
estimator = tf.estimator.Estimator(model_fn, model_dir)\n\nos.makedirs(estimator.eval_dir())  # TODO This should not be expected IMO.\n\nearly_stopping = tf.contrib.estimator.stop_if_no_decrease_hook(\n    estimator,\n    metric_name='loss',\n    max_steps_without_decrease=1000,\n    min_steps=100)\n\ntf.estimator.train_and_evaluate(\n    estimator,\n    train_spec=tf.estimator.TrainSpec(train_input_fn, hooks=[early_stopping]),\n    eval_spec=tf.estimator.EvalSpec(eval_input_fn))\n
\n", + "system": "" + }, + { + "instruction": "Attention Layer throwing TypeError: Permute layer does not support masking in Keras", + "input": "", + "output": "

I am one of the authors of the package.

\n

You should use the latest version.\nThe previous versions had some conceptual problems.

\n", + "system": "" + }, + { + "instruction": "How to optimize for inference a simple, saved TensorFlow 1.0.1 graph?", + "input": "", + "output": "

Here is the detailed guide on how to optimize for inference:

\n\n

The optimize_for_inference module takes a frozen binary GraphDef file as input and outputs an optimized GraphDef file which you can use for inference. To get the frozen binary GraphDef file, you need to use the module freeze_graph, which takes a GraphDef proto, a SaverDef proto and a set of variables stored in a checkpoint file. The steps to achieve that are given below:

\n\n

1. Saving tensorflow graph

\n\n
# make and save a simple graph\nG = tf.Graph()\nwith G.as_default():\n    x = tf.placeholder(dtype=tf.float32, shape=(), name=\"x\")\n    a = tf.Variable(5.0, name=\"a\")\n    y = tf.add(a, x, name=\"y\")\n    saver = tf.train.Saver()\n\nwith tf.Session(graph=G) as sess:\n    sess.run(tf.global_variables_initializer())\n    out = sess.run(fetches=[y], feed_dict={x: 1.0})\n\n    # Save GraphDef\n    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')\n    # Save checkpoint\n    saver.save(sess=sess, save_path=\"test_model\")\n
\n\n

2. Freeze graph

\n\n
python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y\n
\n\n

3. Optimize for inference

\n\n
python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y\n
\n\n

4. Using Optimized graph

\n\n
with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:\n   graph_def_optimized = tf.GraphDef()\n   graph_def_optimized.ParseFromString(f.read())\n\nG = tf.Graph()\n\nwith tf.Session(graph=G) as sess:\n    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])\n    print('Operations in Optimized Graph:')\n    print([op.name for op in G.get_operations()])\n    x = G.get_tensor_by_name('import/x:0')\n    out = sess.run(y, feed_dict={x: 1.0})\n    print(out)\n\n#Output\n#Operations in Optimized Graph:\n#['import/x', 'import/a', 'import/y']\n#6.0\n
\n\n

5. For multiple output names

\n\n

If there are multiple output nodes, then specify output_node_names = 'boxes, scores, classes' and import the graph by:

\n\n
boxes, scores, classes = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])\n
\n", + "system": "" + }, + { + "instruction": "Loading folders of images in tensorflow", + "input": "", + "output": "

The tf.data API (tensorflow 1.4 onwards) is great for things like this. The pipeline will look something like the following:

\n
- create a dataset of the filenames and labels
- shuffle/repeat as required
- map filenames to image data
- batch and prefetch
\n\n

There are a number of ways of creating your initial dataset (see here for a more in-depth answer).

\n\n

TFRecords with Tensorflow Datasets

\n\n

Supported from tensorflow version 1.12 onwards, Tensorflow Datasets provides a relatively straightforward API for creating tfrecord datasets, and also handles data downloading, sharding, statistics generation and other functionality automatically.

\n\n

See e.g. this image classification dataset implementation. There's a lot of bookkeeping stuff in there (download URLs, citations etc.), but the technical part boils down to specifying features and writing a _generate_examples function.

\n\n
features = tfds.features.FeaturesDict({\n            \"image\": tfds.features.Image(shape=(_TILES_SIZE,) * 2 + (3,)),\n            \"label\": tfds.features.ClassLabel(\n                names=_CLASS_NAMES),\n            \"filename\": tfds.features.Text(),\n        })\n\n...\n\ndef _generate_examples(self, root_dir):\n  root_dir = os.path.join(root_dir, _TILES_SUBDIR)\n  for i, class_name in enumerate(_CLASS_NAMES):\n    class_dir = os.path.join(root_dir, _class_subdir(i, class_name))\n    fns = tf.io.gfile.listdir(class_dir)\n\n    for fn in sorted(fns):\n      image = _load_tif(os.path.join(class_dir, fn))\n      yield {\n          \"image\": image,\n          \"label\": class_name,\n          \"filename\": fn,\n      }\n
\n\n
\n\n

You can also generate the tfrecords using lower level operations.

\n\n

Load images via tf.data.Dataset.map and tf.py_func(tion)

\n\n

Alternatively you can load the image files from filenames inside tf.data.Dataset.map as below.

\n\n
image_paths, labels = load_base_data(...)\nepoch_size = len(image_paths)\nimage_paths = tf.convert_to_tensor(image_paths, dtype=tf.string)\nlabels = tf.convert_to_tensor(labels)\n\ndataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))\n\nif mode == 'train':\n    dataset = dataset.repeat().shuffle(epoch_size)\n\n\ndef map_fn(path, label):\n    # path/label represent values for a single example\n    image = tf.image.decode_jpeg(tf.read_file(path))\n\n    # some mapping to constant size - be careful with distorting aspect ratios\n    image = tf.image.resize_images(image, out_shape)\n    # color normalization - just an example\n    image = tf.to_float(image) * (2. / 255) - 1\n    return image, label\n\n\n# num_parallel_calls > 1 induces intra-batch shuffling\ndataset = dataset.map(map_fn, num_parallel_calls=8)\ndataset = dataset.batch(batch_size)\n# try one of the following\ndataset = dataset.prefetch(1)\n# dataset = dataset.apply(\n#            tf.contrib.data.prefetch_to_device('/gpu:0'))\n\nimages, labels = dataset.make_one_shot_iterator().get_next()\n
\n\n

I've never worked in a distributed environment, but I've never noticed a performance hit from using this approach over tfrecords. If you need more custom loading functions, also check out tf.py_func.

\n\n

More general information here, and notes on performance here

\n", + "system": "" + }, + { + "instruction": "Unable to Install Tensorflow (MemoryError)", + "input": "", + "output": "

Try installing without caching: pip install --no-cache-dir tensorflow.

\n", + "system": "" + }, + { + "instruction": "Scalable, Efficient Hierarchical Softmax in Tensorflow?", + "input": "", + "output": "

You mention that you want GPU-class performance:

\n\n
\n

but now keeps everything on the CPU and slows things down quite a bit

\n
\n\n

and wish to use 300-unit hidden size and 10M-word dictionaries.

\n\n

This means that (assuming float32), you'll need 4 * 300 * 10M * 2 bytes = 24 GB just to store the parameters and the gradient for the output layer.

\n\n
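That back-of-the-envelope figure can be checked in a couple of lines (float32 = 4 bytes; the factor 2 covers the parameters plus a same-sized gradient buffer):

```python
bytes_per_float32 = 4
hidden_units = 300
vocab_size = 10_000_000

# output-layer weight matrix plus its gradient
total_bytes = bytes_per_float32 * hidden_units * vocab_size * 2
print(total_bytes / 10**9)  # 24.0 (GB)
```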

Hierarchical Softmax (HSM) doesn't reduce the memory requirements - it just speeds up the training.

\n\n

Realistically, you'll need a lot more GPU memory, because you'll also need to store:

\n
- the other parameters and their gradients
- the optimizer state (e.g. momentum history)
- activations and temporary data for backpropagation
- framework-specific overhead
\n\n

Therefore, if you want to do all computation on GPUs, you'll have no choice but to distribute this layer across multiple high-memory GPUs.

\n\n

However, you now have another problem:

\n\n

To make this concrete, let's suppose you have a 2-level HSM with 3K classes, with 3K words per class (9M words in total). You distribute the 3K classes across 8 GPUs, so that each hosts 384 classes.

\n\n

What if all target words in a batch are from the same 384 classes, i.e. they belong to the same GPU? One GPU will be doing all the work, while the other 7 wait for it.

\n\n

The problem is that even if the target words in a batch belong to different GPUs, you'll still have the same performance as in the worst-case scenario if you want to do this computation in TensorFlow (this is because TensorFlow is a \"specify-and-run\" framework -- the computational graph is the same for the best case and the worst case).

\n\n
\n

What is the best way to do this to both be scalable to large class counts and efficient?

\n
\n\n

The above inefficiency of model parallelism (each GPU must process the whole batch) suggests that one should try to keep everything in one place.

\n\n

Let us suppose that you are either implementing everything on the host, or on 1 humongous GPU.

\n\n
    \n
  1. If you are not modeling sequences, or if you are, but there is only one output for the whole sequence, then the memory overhead from copying the parameters, to which you referred, is negligible compared to the memory requirements described above:

    \n\n

    400 == batch size << number of classes == 3K

    \n\n

    In this case, you could simply use gather or embedding_lookup (Although the copying is inefficient)

  2. \n
  3. However, if you do model sequences of length, say, 100, with output at every time step, then the parameter copying becomes a big issue.

    \n\n

    In this case, I think you'll need to drop down to C++ / CUDA C and implement this whole layer and its gradient as a custom op.

  4. \n
\n", + "system": "" + }, + { + "instruction": "keras - cannot import name Conv2D", + "input": "", + "output": "

Try this: from keras.layers.convolutional import Conv2D

\n\n

Importing changed with the new keras. Are you sure you are not using keras >= 2?

\n\n
\n\n

NOTE:

\n\n

With tensorflow 2.0 keras is included. You can now import the layer with:

\n\n
from tensorflow.keras.layers import Conv2D\n
\n", + "system": "" + }, + { + "instruction": "Given a tensor flow model graph, how to find the input node and output node names", + "input": "", + "output": "

Try this:

\n\n

run python

\n\n
>>> import tensorflow as tf\n>>> gf = tf.GraphDef()\n>>> gf.ParseFromString(open('/your/path/to/graphname.pb','rb').read())\n
\n\n

and then

\n\n
>>> [n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]\n
\n\n

Then you can get a result similar to this:

\n\n
['Mul=>Placeholder', 'final_result=>Softmax']\n
\n\n

But I'm not sure it's a problem of node names, judging from the error messages.\nI guess you provided wrong arguments when loading the graph file, or your generated graph file is somehow wrong.

\n\n

Check this part:

\n\n
E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible \nshapes: [1,224,224,3] vs. [32,1,1,2048]\n
\n\n

UPDATE: Sorry, if you're using a (re)trained graph, then try this:

\n\n
[n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Mul')]\n
\n\n

It seems that a (re)trained graph saves the input/output op names as \"Mul\" and \"Softmax\", while an optimized and/or quantized graph saves them as \"Placeholder\" and \"Softmax\".

\n\n

BTW, using a retrained graph in a mobile environment is not recommended according to Pete Warden's post: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ . It's better to use a quantized or memmapped graph due to performance and file-size issues. I couldn't find out how to load a memmapped graph in android though...:(\n(no problem loading an optimized / quantized graph in android)

\n", + "system": "" + }, + { + "instruction": "ValueError when executing softmax_cross_entropy_with_logits", + "input": "", + "output": "

Change

\n\n
tf.nn.softmax_cross_entropy_with_logits(prediction,y)\n
\n\n

to

\n\n
tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y)\n
\n", + "system": "" + }, + { + "instruction": "Replace nan values in tensorflow tensor", + "input": "", + "output": "

A combination of tf.where and tf.is_nan should work:

\n\n
import tensorflow as tf\nwith tf.Session():\n    has_nans = tf.constant([float('NaN'), 1.])\n    print(tf.where(tf.is_nan(has_nans), tf.zeros_like(has_nans), has_nans).eval())\n
\n\n

Prints (using TensorFlow 0.12.1):

\n\n
[ 0.  1.]\n
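The same select-by-mask idea, sketched in plain Python for intuition (an analogy, not TensorFlow code):

```python
import math

def replace_nan(values, fill=0.0):
    # mirrors tf.where(tf.is_nan(x), tf.zeros_like(x), x):
    # keep each element unless it is NaN, in which case substitute `fill`
    return [fill if math.isnan(v) else v for v in values]

print(replace_nan([float('nan'), 1.0]))  # [0.0, 1.0]
```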
\n", + "system": "" + }, + { + "instruction": "How to run Keras on multiple cores?", + "input": "", + "output": "

Tensorflow automatically runs the computations on as many cores as are available on a single machine.

\n\n

If you have a distributed cluster, be sure you follow the instructions at https://www.tensorflow.org/how_tos/distributed/ to configure the cluster. (e.g. create the tf.ClusterSpec correctly, etc.)

\n\n

To help debug, you can use the log_device_placement configuration options on the session to have Tensorflow print out where the computations are actually placed. (Note: this works for both GPUs as well as distributed Tensorflow.)

\n\n
# Creates a session with log_device_placement set to True.\nsess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n
\n\n

Note that while Tensorflow's computation placement algorithm works fine for small computational graphs, you might be able to get better performance on large computational graphs by manually placing the computations in specific devices. (e.g. using with tf.device(...): blocks.)

\n", + "system": "" + }, + { + "instruction": "Tensorflow AttributeError: 'NoneType' object has no attribute 'TF_DeleteStatus'", + "input": "", + "output": "

You can run import gc; gc.collect() in the end of your code.

\n", + "system": "" + }, + { + "instruction": "Is Intel based graphic card compatible with tensorflow/GPU?", + "input": "", + "output": "

At the moment no. Only Nvidia GPUs and (intel/amd) CPU versions are available.

\n\n

They are working on an OpenCL-compatible version of tensorflow, and you can follow its progress here.

\n", + "system": "" + }, + { + "instruction": "TensorFlow REST Frontend but not TensorFlow Serving", + "input": "", + "output": "

There are different ways to do this. Using tensorflow directly is not very flexible, but relatively straightforward. The downside of this approach is that you have to rebuild the graph and initialize the variables in the code where you restore the model. There is a way shown in tensorflow skflow/contrib learn which is more elegant; however, this doesn't seem to be functional at the moment, and the documentation is out of date.

\n\n

I put a short example together on github here that shows how you would pass named GET or POST parameters to a flask REST-deployed tensorflow model.

\n\n

The main code is then in a function that takes a dictionary based on the POST/GET data:

\n\n
@app.route('/model', methods=['GET', 'POST'])\n@parse_postget\ndef apply_model(d):\n    tf.reset_default_graph()\n    with tf.Session() as session:\n        n = 1\n        x = tf.placeholder(tf.float32, [n], name='x')\n        y = tf.placeholder(tf.float32, [n], name='y')\n        m = tf.Variable([1.0], name='m')\n        b = tf.Variable([1.0], name='b')\n        y = tf.add(tf.mul(m, x), b) # fit y_i = m * x_i + b\n        y_act = tf.placeholder(tf.float32, [n], name='y_')\n        error = tf.sqrt((y - y_act) * (y - y_act))\n        train_step = tf.train.AdamOptimizer(0.05).minimize(error)\n\n        feed_dict = {x: np.array([float(d['x_in'])]), y_act: np.array([float(d['y_star'])])}\n        saver = tf.train.Saver()\n        saver.restore(session, 'linear.chk')\n        y_i, _, _ = session.run([y, m, b], feed_dict)\n    return jsonify(output=float(y_i))\n
\n", + "system": "" + }, + { + "instruction": "How to permutate tranposition in tensorflow?", + "input": "", + "output": "

I think perm is permuting the dimensions. For example perm=[0,2,1] is short for dim_0 -> dim_0, dim_1 -> dim_2, dim_2 -> dim_1. So for a 2D tensor, perm=[1,0] is just matrix transpose. Does this answer your question?
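A plain-Python illustration of both points (hypothetical helper, not the TensorFlow implementation): permuting a shape with perm, and the 2-D special case perm=[1, 0] being an ordinary matrix transpose:

```python
def permute_shape(shape, perm):
    # output dimension i takes its size from input dimension perm[i]
    return [shape[p] for p in perm]

print(permute_shape([2, 3, 5], [0, 2, 1]))  # [2, 5, 3]

# for a 2-D tensor, perm=[1, 0] is just matrix transpose
m = [[1, 2, 3],
     [4, 5, 6]]
print([list(row) for row in zip(*m)])  # [[1, 4], [2, 5], [3, 6]]
```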

\n", + "system": "" + }, + { + "instruction": "Convert Keras model to TensorFlow protobuf", + "input": "", + "output": "

In case you don't need to utilize a GPU in the environment you are deploying to, you could also use my library, called frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep

\n\n

frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.

\n", + "system": "" + }, + { + "instruction": "TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead", + "input": "", + "output": "

Try changing

\n
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)\n
\n

to

\n
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())\n
\n", + "system": "" + }, + { + "instruction": "using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function", + "input": "", + "output": "

I have stumbled over this as well, hence I am leaving my solution to this problem here to help anyone.

\n

There is a catch when you are in eager execution mode since TF upgraded to 2.x: if you are using Keras API losses and metrics, you should instantiate them in order to compile.
\nSee the example below:

\n
model.compile(optimizer="...", \n              loss=keras.losses.AnyLoss, \n              metrics=[keras.metrics.AnyMetric])\n
\n

Above code will give OperatorNotAllowedInGraphError. To overcome do this;

\n
my_loss = keras.losses.AnyLoss(*args, **kwargs)\nmy_metric_1 = keras.metrics.AnyMetric(*args, **kwargs)\n\nmodel.compile(optimizer,\n              loss=my_loss,\n              metrics=[my_metric_1, my_metric_2, ...])\n
\n

That should do the trick

\n", + "system": "" + }, + { + "instruction": "Tensorflow CUDA - CUPTI error: CUPTI could not be loaded or symbol could not be found", + "input": "", + "output": "

Add this in path for Windows:

\n\n
C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\extras\\CUPTI\\libx64\n
\n", + "system": "" + }, + { + "instruction": "module 'tensorflow' has no attribute 'GPUOptions'", + "input": "", + "output": "

Tensorflow 2.x has undergone major changes from 1.x.

\n

As per official communication,

\n
\n

tf.contrib will be removed from the core TensorFlow repository and build process. TensorFlow\u2019s contrib module has grown beyond what can be maintained and supported in a single repository. Larger projects are better maintained separately, while smaller extensions will graduate to the core TensorFlow code. A special interest group (SIG) has been formed to maintain and further develop some of the more important contrib projects going forward. Please engage with this RFC if you are interested in contributing.

\n
\n

If you want to use the tensorflow 1.x functions/methods, there is a compatibility module kept in tensorflow 2.x.

\n
tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.333)\n
\n", + "system": "" + }, + { + "instruction": "tensorflow: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`", + "input": "", + "output": "

This deprecation warning is due to the Dropout layer in tf.keras.layers.Dropout.
To avoid this warning, you need to explicitly specify rate in Dropout, as in Dropout(rate=0.2).\n

The keep_prob argument is now deprecated in favor of rate, where rate = 1 - keep_prob. \n
For more, you can check out this tensorflow documentation.

\n", + "system": "" + }, + { + "instruction": "Neural network for square (x^2) approximation", + "input": "", + "output": "

You are making two very basic mistakes:

\n
- Your model is far too simple for the task.
- Your training data set is far too small.
\n\n

It is certainly understood that neural networks need to be of some complexity if they are to solve problems even as \"simple\" as x*x; and where they really shine is when fed with large training datasets.

\n\n

The methodology when trying to solve such function approximations is not to just list the (few possible) inputs and then feed them to the model, along with the desired outputs; remember, NNs learn through examples, and not through symbolic reasoning. And the more examples the better. What we usually do in similar cases is to generate a large number of examples, which we subsequently feed to the model for training.

\n\n

Having said that, here is a rather simple demonstration of a 3-layer neural network in Keras for approximating the function x*x, using as input 10,000 random numbers generated in [-50, 50]:

\n\n
import numpy as np\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.optimizers import Adam\nfrom keras import regularizers\nimport matplotlib.pyplot as plt\n\nmodel = Sequential()\nmodel.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001), input_shape = (1,)))\nmodel.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001)))\nmodel.add(Dense(1))\n\nmodel.compile(optimizer=Adam(),loss='mse')\n\n# generate 10,000 random numbers in [-50, 50], along with their squares\nx = np.random.random((10000,1))*100-50\ny = x**2\n\n# fit the model, keeping 2,000 samples as validation set\nhist = model.fit(x,y,validation_split=0.2,\n             epochs= 15000,\n             batch_size=256)\n\n# check some predictions:\nprint(model.predict([4, -4, 11, 20, 8, -5]))\n# result:\n[[ 16.633354]\n [ 15.031291]\n [121.26833 ]\n [397.78638 ]\n [ 65.70035 ]\n [ 27.040245]]\n
\n\n

Well, not that bad! Remember that NNs are function approximators: we should expect them neither to exactly reproduce the functional relationship nor to \"know\" that the results for 4 and -4 should be identical.

\n\n

Let's generate some new random data in [-50,50] (remember, for all practical purposes, these are unseen data for the model) and plot them, along with the original ones, to get a more general picture:

\n\n
plt.figure(figsize=(14,5))\nplt.subplot(1,2,1)\np = np.random.random((1000,1))*100-50 # new random data in [-50, 50]\nplt.plot(p,model.predict(p), '.')\nplt.xlabel('x')\nplt.ylabel('prediction')\nplt.title('Predictions on NEW data in [-50,50]')\n\nplt.subplot(1,2,2)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.plot(x,y,'.')\nplt.title('Original data')\n
\n\n

Result:

\n\n

\"enter

\n\n

Well, it arguably does look like a good approximation indeed...

\n\n

You could also take a look at this thread for a sine approximation.

\n\n

The last thing to keep in mind is that, although we did get a decent approximation even with our relatively simple model, what we should not expect is extrapolation, i.e. good performance outside [-50, 50]; for details, see my answer in Is deep learning bad at fitting simple non linear functions outside training scope?

\n", + "system": "" + }, + { + "instruction": "Understanding COCO evaluation "maximum detections"", + "input": "", + "output": "

You can change the maxDets parameter and define a new summarize() instance method.

\n\n

Let's create a COCOeval object:

\n\n
cocoEval = COCOeval(cocoGt,cocoDt,annType)\ncocoEval.params.maxDets = [200]\ncocoEval.params.imgIds  = imgIdsDt\ncocoEval.evaluate()\ncocoEval.accumulate()\ncocoEval.summarize_2() # instead of calling cocoEval.summarize()\n
\n\n

Now, define summarize_2() method in cocoeval.py module in the following way:

\n\n
def summarize_2(self):\n    # Copy everything from `summarize` method here except\n    # the function `_summarizeDets()`.\n    def _summarizeDets():\n        stats = np.zeros((12,))\n        stats[0] = _summarize(1, maxDets=self.params.maxDets[0])\n        stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[0])\n        stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[0])\n        stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[0])\n        stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[0])\n        stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[0])\n        stats[6] = _summarize(0, maxDets=self.params.maxDets[0])\n        stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[0])\n        stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[0])\n        stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[0])\n        return stats\n    # Copy other things which are left from `summarize()` here.\n
\n\n

If you run the above method over your dataset, you will get an output similar to this:

\n\n
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=200 ] = 0.507\n Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=200 ] = 0.699\n Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=200 ] = 0.575\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.586\n Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.519\n Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.501\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=200 ] = 0.598\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.640\n Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.566\n Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.564\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory", + "input": "", + "output": "

You are setting LD_LIBRARY_PATH the wrong way; I would recommend doing it like this (which is the standard approach):

\n\n
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH\nexport LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow install fails with "compiletime version 3.5 of module does not match runtime version 3.6"", + "input": "", + "output": "
\n

RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6

\n
\n\n

This is a known issue, which has been prioritized and is likely to be fixed soon. Right now the workaround is to use python 3.5.

\n\n

UPDATE:

\n\n

The issue has been fixed in the nightly tensorflow builds: \"tf-nightly and tf-nightly-gpu now has a python3.6 binary built from scratch for Linux.\"

\n\n

I.e., the following command should work with python 3.6:

\n\n
# tf-nightly or tf-nightly-gpu\npip3 install tf-nightly\n
\n\n
\n

Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX

\n
\n\n

This warning comes from the fact that the default tensorflow distributions are compiled without CPU extensions support (more on this here). If you want to get a CPU optimized tensorflow package, your only option is to build it yourself. It's a bit tedious, but absolutely doable. The build will produce the wheel file, which you can install with just

\n\n
pip3 install /path/to/the/tensorflow.whl\n
\n\n

But if you just want to suppress the warning, this will do:

\n\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\n
\n", + "system": "" + }, + { + "instruction": "How to get a tensorflow op by name?", + "input": "", + "output": "

You can use the tf.Graph.get_operation_by_name() method to get a tf.Operation by name. For example, to get an operation called \"enqueue\" from the default graph:

\n\n
op = tf.get_default_graph().get_operation_by_name(\"enqueue\")\n
\n", + "system": "" + }, + { + "instruction": "What does `tf.strided_slice()` do?", + "input": "", + "output": "

I experimented a bit with this method, which gave me some insights that I think might be of use. Let's say we have a tensor.

\n\n\n\n
a = np.array([[[1, 1.2, 1.3], [2, 2.2, 2.3], [7, 7.2, 7.3]],\n              [[3, 3.2, 3.3], [4, 4.2, 4.3], [8, 8.2, 8.3]],\n              [[5, 5.2, 5.3], [6, 6.2, 6.3], [9, 9.2, 9.3]]]) \n# a.shape = (3, 3, 3)\n
\n\n

strided_slice() takes four required arguments input_, begin, end, strides, and we pass our a as the input_ argument.\n As with the tf.slice() method, the begin argument is zero-based and the rest of the arguments are shape-based. However, in the docs both begin and end are zero-based.

\n\n

The functionality of method is quite simple:
\nIt works like iterating over a loop, where begin is the position in the tensor at which the loop starts and end is where it stops.

\n\n
tf.strided_slice(a, [0, 0, 0], [3, 3, 3], [1, 1, 1])\n\n# output =  the tensor itself\n\ntf.strided_slice(a, [0, 0, 0], [3, 3, 3], [2, 2, 2])\n\n# output = [[[ 1.   1.3]\n#            [ 7.   7.3]]\n#           [[ 5.   5.3]\n#            [ 9.   9.3]]]\n
\n\n

strides are the steps by which the loop iterates; here [2,2,2] makes the method produce values starting at (0,0,0), (0,0,2), (0,2,0), (0,2,2), (2,0,0), (2,0,2) ..... in the a tensor.

\n\n
tf.strided_slice(a, [1, 1, 0], [2, -1, 3], [1, 1, 1])\n
\n\n

will produce output similar to tf.strided_slice(a, [1, 1, 0], [2, 2, 3], [1, 1, 1]), since the tensor a has shape = (3, 3, 3) and a negative end index counts back from the end of that dimension (so -1 is equivalent to 2 here).
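As a sanity check, tf.strided_slice with begin/end/strides corresponds to NumPy's basic slicing a[begin:end:stride] applied per dimension, so the [2,2,2] example above can be reproduced with plain NumPy:

```python
import numpy as np

a = np.array([[[1, 1.2, 1.3], [2, 2.2, 2.3], [7, 7.2, 7.3]],
              [[3, 3.2, 3.3], [4, 4.2, 4.3], [8, 8.2, 8.3]],
              [[5, 5.2, 5.3], [6, 6.2, 6.3], [9, 9.2, 9.3]]])

# tf.strided_slice(a, [0, 0, 0], [3, 3, 3], [2, 2, 2]) is equivalent to:
sliced = a[0:3:2, 0:3:2, 0:3:2]
print(sliced)
# shape (2, 2, 2): [[[1, 1.3], [7, 7.3]], [[5, 5.3], [9, 9.3]]]
```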

\n", + "system": "" + }, + { + "instruction": "TensorBoard Embedding Example?", + "input": "", + "output": "

I've used FastText's pre-trained word vectors with TensorBoard.

\n\n
import os\nimport tensorflow as tf\nimport numpy as np\nimport fasttext\nfrom tensorflow.contrib.tensorboard.plugins import projector\n\n# load model\nword2vec = fasttext.load_model('wiki.en.bin')\n\n# create a list of vectors\nembedding = np.empty((len(word2vec.words), word2vec.dim), dtype=np.float32)\nfor i, word in enumerate(word2vec.words):\n    embedding[i] = word2vec[word]\n\n# setup a TensorFlow session\ntf.reset_default_graph()\nsess = tf.InteractiveSession()\nX = tf.Variable([0.0], name='embedding')\nplace = tf.placeholder(tf.float32, shape=embedding.shape)\nset_x = tf.assign(X, place, validate_shape=False)\nsess.run(tf.global_variables_initializer())\nsess.run(set_x, feed_dict={place: embedding})\n\n# write labels\nwith open('log/metadata.tsv', 'w') as f:\n    for word in word2vec.words:\n        f.write(word + '\\n')\n\n# create a TensorFlow summary writer\nsummary_writer = tf.summary.FileWriter('log', sess.graph)\nconfig = projector.ProjectorConfig()\nembedding_conf = config.embeddings.add()\nembedding_conf.tensor_name = 'embedding:0'\nembedding_conf.metadata_path = os.path.join('log', 'metadata.tsv')\nprojector.visualize_embeddings(summary_writer, config)\n\n# save the model\nsaver = tf.train.Saver()\nsaver.save(sess, os.path.join('log', \"model.ckpt\"))\n
\n\n

Then run this command in your terminal:

\n\n
tensorboard --logdir=log\n
\n", + "system": "" + }, + { + "instruction": "Non-smooth and non-differentiable customized loss function tensorflow", + "input": "", + "output": "

The problem is not with the loss being piece-wise or non-smooth. The problem is that we need a loss function that can send back a non-zero gradient to the network parameters (dloss/dparameter) when there is an error between the output and the expected output. This applies to almost any function used inside the model (e.g. loss functions, activation functions, attention functions).

\n\n

For example, perceptrons use the unit step H(x) as an activation function (H(x) = 1 if x > 0 else 0). Since the derivative of H(x) is zero everywhere it is defined (and undefined at x=0), no gradient coming from the loss will pass through it back to the weights (chain rule), so no weights before that function in the network can be updated using gradient descent. Based on that, gradient descent can't be used for perceptrons, but it can be used for conventional neurons that use the sigmoid activation function (whose gradient is non-zero for all x).

\n\n

For ReLU, the derivative is 1 for x > 0 and 0 otherwise. While the derivative is undefined at x=0, we can still back-propagate the loss gradient through it when x > 0. That's why it can be used.
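This difference is easy to check numerically. A central-difference estimate of the derivative is zero everywhere for the unit step (away from x=0), while sigmoid passes a usable gradient everywhere and ReLU does so for x > 0:

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def num_grad(f, x, eps=1e-6):
    # central-difference approximation of df/dx
    return (f(x + eps) - f(x - eps)) / (2 * eps)

xs = np.array([-2.0, -0.5, 0.5, 2.0])
print(num_grad(step, xs))     # all zeros: nothing to back-propagate
print(num_grad(sigmoid, xs))  # non-zero everywhere
print(num_grad(relu, xs))     # zero for x < 0, one for x > 0
```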

\n\n

That is why we need a loss function that has a non-zero gradient. Functions like accuracy and F1 have zero gradients everywhere (or undefined at some values of x), so they can't be used, while functions like cross-entropy, L2 and L1 have non-zero gradients, so they can be used. (note that L1 \"absolute difference\" is piece-wise and not smooth at x=0 but still can be used)

\n\n

In case you must use a function that doesn't meet the above criteria, try reinforcement learning methods instead (e.g. Policy gradient).

\n", + "system": "" + }, + { + "instruction": "Where is gen_math_ops script in tensorflow?", + "input": "", + "output": "

It's automatically generated by tf_gen_op_wrapper_* rules here.

\n\n

You can also use ?? in your IPython notebook to find the location:

\n\n

\"example

\n", + "system": "" + }, + { + "instruction": "No variable to save error in Tensorflow", + "input": "", + "output": "

The error here is quite subtle. In In[8] you create a tf.Graph called graph and set it as default for the with graph.as_default(): block. This means that all of the variables are created in graph, and if you print graph.all_variables() you should see a list of your variables.

\n\n

However, you exit the with block before creating (i) the tf.Session, and (ii) the tf.train.Saver. This means that the session and saver are created in a different graph (the global default tf.Graph that is used when you don't explicitly create one and set it as default), which doesn't contain any variables—or any nodes at all.

\n\n

There are at least two solutions:

\n\n
    \n
  1. As Yaroslav suggests, you can write your program without using the with graph.as_default(): block, which avoids the confusion with multiple graphs. However, this can lead to name collisions between different cells in your IPython notebook, which is awkward when using the tf.train.Saver, since it uses the name property of a tf.Variable as the key in the checkpoint file.

  2. \n
  3. You can create the saver inside the with graph.as_default(): block, and create the tf.Session with an explicit graph, as follows:

    \n\n
with graph.as_default():\n    # [Variable and model creation goes here.]\n\n    saver = tf.train.Saver()  # Gets all variables in `graph`.\n\nwith tf.Session(graph=graph) as sess:\n    saver.restore(sess, checkpoint_path)  # checkpoint_path: a previously saved checkpoint.\n    # Do some work with the model....\n
    \n\n

    Alternatively, you can create the tf.Session inside the with graph.as_default(): block, in which case it will use graph for all of its operations.

  4. \n
\n", + "system": "" + }, + { + "instruction": "How do I check Bazel version?", + "input": "", + "output": "

See Bazel users manual

\n\n

From the command line:

\n\n
$ bazel version \nBuild label: 0.1.1\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow for binary classification", + "input": "", + "output": "

The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, etc. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_.

\n

If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op would compute a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples.

\n

There are (at least) two approaches you could try for binary classification:

\n
    \n
  1. The simplest would be to set NLABELS = 2 for the two possible classes, and encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that.

    \n
  2. \n
  3. You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer.

    \n
  4. \n
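For the first approach, the one-hot encoding of binary labels can be produced with a couple of lines of plain NumPy (tf.keras.utils.to_categorical does the equivalent):

```python
import numpy as np

labels = np.array([0, 1, 1, 0])   # integer class labels
one_hot = np.eye(2)[labels]       # [1 0] for class 0, [0 1] for class 1
print(one_hot)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```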
\n", + "system": "" + }, + { + "instruction": "Display image of graph in TensorFlow?", + "input": "", + "output": "

This is exactly what tensorboard was created for. You need to slightly modify your code to store the information about your graph.

\n\n
import tensorflow as tf\nC_1 = tf.constant(5.0)\nC_2 = tf.constant(1.0)\nC_3 = tf.constant(2.0)\n\ngolden_ratio = (tf.sqrt(C_1) + C_2)/C_3\n\nwith tf.Session() as sess:\n    writer = tf.summary.FileWriter('logs', sess.graph)\n    print(sess.run(golden_ratio))\n    writer.close()\n
\n\n

This will create a logs folder with event files in your working directory. After this you should run tensorboard from your command line tensorboard --logdir=\"logs\" and navigate to the url it gives you (http://127.0.0.1:6006). In your browser go to GRAPHS tab and enjoy your graph.

\n\n

You will use TB a lot if you are going to do anything with TF. So it makes sense to learn about it more from official tutorials and from this video.

\n", + "system": "" + }, + { + "instruction": "The meaning of 'Start cannot spawn child process: No such file or directory' upon running Tensorflow", + "input": "", + "output": "

Execute

\n
$ export PATH="${PATH}:/usr/local/nvidia/bin:/usr/local/cuda/bin"\n
\n

before starting your IPython notebook kernel / interpreter.

\n", + "system": "" + }, + { + "instruction": "Colab: (0) UNIMPLEMENTED: DNN library is not found", + "input": "", + "output": "

This worked for me (Colab):

\n
# Check libcudnn8 version\n!apt-cache policy libcudnn8\n\n# Install latest version\n!apt install --allow-change-held-packages libcudnn8=8.4.1.50-1+cuda11.6\n\n# Export env variables\n!export PATH=/usr/local/cuda-11.4/bin${PATH:+:${PATH}}\n!export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH\n!export LD_LIBRARY_PATH=/usr/local/cuda-11.4/include:$LD_LIBRARY_PATH\n!export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64\n\n# Install tensorflow\n!pip install tflite-model-maker==0.4.0\n!pip uninstall -y tensorflow && pip install -q tensorflow==2.9.1\n!pip install pycocotools==2.0.4\n!pip install opencv-python-headless==4.6.0.66\n
\n", + "system": "" + }, + { + "instruction": "Unable to (manually) load cifar10 dataset", + "input": "", + "output": "

I was having a similar CERTIFICATE_VERIFY_FAILED error downloading CIFAR-10. Putting this in my python file worked:

\n
import ssl\nssl._create_default_https_context = ssl._create_unverified_context\n
\n

Reference: https://programmerah.com/python-error-certificate-verify-failed-certificate-has-expired-40374/

\n", + "system": "" + }, + { + "instruction": "What is the proper use of Tensorflow dataset prefetch and cache options?", + "input": "", + "output": "

I found this great explanation from Andrew Ng's CS230 course at Stanford: https://cs230.stanford.edu/blog/datapipeline/#best-practices

\n

"When the GPU is working on forward / backward propagation on the current batch, we want the CPU to process the next batch of data so that it is immediately ready. As the most expensive part of the computer, we want the GPU to be fully used all the time during training. We call this consumer/producer overlap, where the consumer is the GPU and the producer is the CPU.

\n

With tf.data, you can do this with a simple call to dataset.prefetch(1) at the end of the pipeline (after batching). This will always prefetch one batch of data and make sure that there is always one ready.

\n

In some cases, it can be useful to prefetch more than one batch. For instance, if the duration of the preprocessing varies a lot, prefetching 10 batches would average out the processing time over 10 batches, instead of sometimes waiting for longer batches.

\n

To give a concrete example, suppose that 10% of the batches take 10s to compute, and 90% take 1s. If the GPU takes 2s to train on one batch, by prefetching multiple batches you make sure that we never wait for these rare longer batches."

\n

I'm not quite sure how to determine the processing time of each batch, but that's the next step. If your batches take roughly the same amount of time to process, then I believe prefetch(buffer_size=1) should suffice, as your GPU wouldn't be waiting for the CPU to finish processing a computationally expensive batch.
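As a rough illustration of what prefetch does under the hood, here is a toy producer/consumer wrapper. This is a simplification for intuition only, not TensorFlow's actual implementation: a background thread keeps up to buffer_size items ready while the consumer works on the current one.

```python
import queue
import threading

def prefetch(iterable, buffer_size=1):
    """Yield items from `iterable`, producing up to `buffer_size`
    items ahead of the consumer on a background thread."""
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in iterable:
            q.put(item)   # blocks once the buffer is full
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            return
        yield item

print(list(prefetch(range(5), buffer_size=2)))  # [0, 1, 2, 3, 4]
```

The queue preserves order, so the consumer sees exactly the same stream, just with preprocessing overlapped.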

\n", + "system": "" + }, + { + "instruction": "How to add attention layer to a Bi-LSTM", + "input": "", + "output": "

This can be a possible custom solution with a custom layer that computes attention on the positional/temporal dimension

\n
from tensorflow.keras.layers import Layer\nfrom tensorflow.keras import backend as K\n\nclass Attention(Layer):\n    \n    def __init__(self, return_sequences=True):\n        self.return_sequences = return_sequences\n        super(Attention,self).__init__()\n        \n    def build(self, input_shape):\n        \n        self.W=self.add_weight(name="att_weight", shape=(input_shape[-1],1),\n                               initializer="normal")\n        self.b=self.add_weight(name="att_bias", shape=(input_shape[1],1),\n                               initializer="zeros")\n        \n        super(Attention,self).build(input_shape)\n        \n    def call(self, x):\n        \n        e = K.tanh(K.dot(x,self.W)+self.b)\n        a = K.softmax(e, axis=1)\n        output = x*a\n        \n        if self.return_sequences:\n            return output\n        \n        return K.sum(output, axis=1)\n
\n

It's built to receive 3D tensors and output 3D tensors (return_sequences=True) or 2D tensors (return_sequences=False). Below is a dummy example.

\n
# dummy data creation\n\nmax_len = 100\nmax_words = 333\nemb_dim = 126\n\nn_sample = 5\nX = np.random.randint(0,max_words, (n_sample,max_len))\nY = np.random.randint(0,2, n_sample)\n
\n

with return_sequences=True

\n
model = Sequential()\nmodel.add(Embedding(max_words, emb_dim, input_length=max_len))\nmodel.add(Bidirectional(LSTM(32, return_sequences=True)))\nmodel.add(Attention(return_sequences=True)) # receive 3D and output 3D\nmodel.add(LSTM(32))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.summary()\n\nmodel.compile('adam', 'binary_crossentropy')\nmodel.fit(X,Y, epochs=3)\n
\n

with return_sequences=False

\n
model = Sequential()\nmodel.add(Embedding(max_words, emb_dim, input_length=max_len))\nmodel.add(Bidirectional(LSTM(32, return_sequences=True)))\nmodel.add(Attention(return_sequences=False)) # receive 3D and output 2D\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.summary()\n\nmodel.compile('adam', 'binary_crossentropy')\nmodel.fit(X,Y, epochs=3)\n
\n

You can integrate it into your networks easily

\n

here the running notebook

\n", + "system": "" + }, + { + "instruction": "TypeError: '>' not supported between instances of 'NoneType' and 'float'", + "input": "", + "output": "

Tensorflow 2.0

\n\n
DESIRED_ACCURACY = 0.979\n\nclass myCallback(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs={}):\n        if logs.get('acc') is not None and logs.get('acc') >= DESIRED_ACCURACY:\n            print('\\nReached 97.9% accuracy so cancelling training!')\n            self.model.stop_training = True\n\ncallbacks = myCallback()\n
\n", + "system": "" + }, + { + "instruction": "What are symbolic tensors in TensorFlow and Keras?", + "input": "", + "output": "

According to blog.tensorflow.org, a symbolic tensor differs from other tensors in that it does not hold a specific value.

\n\n

Let's consider a simple example.

\n\n
>>> a = tf.Variable(5, name=\"a\")\n>>> b = tf.Variable(7, name=\"b\")\n>>> c = (b**2 - a**3)**5\n>>> print(c)\n
\n\n

The output is as follows:

\n\n
tf.Tensor(1759441920, shape=(), dtype=int32)\n
\n\n

For the above, the values are explicitly defined as tf.Variable objects, and the output is a concrete Tensor holding a value. (As an aside, the printed value reflects int32 overflow: (7&#178; - 5&#179;)&#8309; = (-76)&#8309; = -2535525376, which wraps around to 1759441920 in 32-bit arithmetic.)

\n\n

Symbolic tensors are different in that no explicit values are required to define the tensor, and this has implications in terms of building neural networks with TensorFlow 2.0, which now uses Keras as the default API.

\n\n

Here is an example of a Sequential neural network that is used to build a classification model for predicting hotel cancellation incidences (full Jupyter Notebook here if interested):

\n\n
from tensorflow.keras import models\nfrom tensorflow.keras import layers\n\nmodel = models.Sequential()\nmodel.add(layers.Dense(8, activation='relu', input_shape=(4,)))\nmodel.add(layers.Dense(1, activation='sigmoid'))\n
\n\n

This is a symbolically defined model, as no values are explicitly being defined in the network. Rather, a framework is created for the input variables to be read by the network, and then generate predictions.

\n\n

In this regard, Keras has become quite popular given that it allows for building of graphs using symbolic tensors, while at the same time maintaining an imperative layout.

\n", + "system": "" + }, + { + "instruction": "How to fix "ResourceExhaustedError: OOM when allocating tensor"", + "input": "", + "output": "

OOM stands for "out of memory". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do, such as reducing the batch size or using a smaller model.

\n\n

There is more useful information about this error:

\n
OOM when allocating tensor with shape[800000,32,30,62]\n
\n

This is a weird shape. If you're working with images, you should normally have 3 or 1 channel. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.
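Back-of-the-envelope arithmetic shows why this allocation fails: a float32 tensor of that shape needs roughly 177 GiB, far beyond any single GPU.

```python
elements = 800_000 * 32 * 30 * 62   # number of entries in the tensor
size_gib = elements * 4 / 2**30     # float32 = 4 bytes per entry
print(f"{size_gib:.0f} GiB")        # 177 GiB
```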

\n", + "system": "" + }, + { + "instruction": "ERROR: tensorboard 2.0.2 has requirement setuptools>=41.0.0, but you'll have setuptools 40.6.2 which is incompatible", + "input": "", + "output": "

I just did a pip install setuptools --upgrade

\n\n

then

\n\n

pip install tensorflow

\n", + "system": "" + }, + { + "instruction": "tf.data vs keras.utils.sequence performance", + "input": "", + "output": "

Both approaches overlap input data preprocessing with model training. keras.utils.Sequence does this by running multiple Python processes, while tf.data does this by running multiple C++ threads.

\n\n

If your preprocessing is being done by a non-TensorFlow Python library such as PIL, keras.utils.Sequence may work better for you, since multiple processes are needed to avoid contention on Python's global interpreter lock.

\n\n

If you can express your preprocessing using TensorFlow operations, I would expect tf.data to give better performance.

\n\n

Some other things to consider: tf.data is TensorFlow's native input pipeline API and is generally the recommended approach going forward, while keras.utils.Sequence, being plain Python, can be simpler to debug.

\n\n\n", + "system": "" + }, + { + "instruction": "How exactly does LSTMCell from TensorFlow operates?", + "input": "", + "output": "

TensorFlow uses the glorot_uniform() function to initialize the LSTM kernel, which samples weights from a random uniform distribution. We need to fix a value for the kernel to get reproducible results:

\n\n
import tensorflow as tf\nimport numpy as np\n\nnp.random.seed(0)\ntimesteps = 7\nnum_input = 4\nx_val = np.random.normal(size = (1, timesteps, num_input))\n\nnum_units = 3\n\ndef glorot_uniform(shape):\n    limit = np.sqrt(6.0 / (shape[0] + shape[1]))\n    return np.random.uniform(low=-limit, high=limit, size=shape)\n\nkernel_init = glorot_uniform((num_input + num_units, 4 * num_units))\n
\n\n

My implementation of the LSTMCell (it's essentially TensorFlow's own code, slightly rewritten):

\n\n
def sigmoid(x):\n    return 1. / (1 + np.exp(-x))\n\nclass LSTMCell():\n    \"\"\"Long short-term memory unit (LSTM) recurrent network cell.\n    \"\"\"\n    def __init__(self, num_units, initializer=glorot_uniform,\n               forget_bias=1.0, activation=np.tanh):\n        \"\"\"Initialize the parameters for an LSTM cell.\n        Args:\n          num_units: int, The number of units in the LSTM cell.\n          initializer: The initializer to use for the kernel matrix. Default: glorot_uniform\n          forget_bias: Biases of the forget gate are initialized by default to 1\n            in order to reduce the scale of forgetting at the beginning of\n            the training. \n          activation: Activation function of the inner states.  Default: np.tanh.\n        \"\"\"\n        # Inputs must be 2-dimensional.\n        self._num_units = num_units\n        self._forget_bias = forget_bias\n        self._activation = activation\n        self._initializer = initializer\n\n    def build(self, inputs_shape):\n        input_depth = inputs_shape[-1]\n        h_depth = self._num_units\n        self._kernel = self._initializer(shape=(input_depth + h_depth, 4 * self._num_units))\n        self._bias = np.zeros(shape=(4 * self._num_units))\n\n    def call(self, inputs, state):\n        \"\"\"Run one step of LSTM.\n        Args:\n          inputs: input numpy array, must be 2-D, `[batch, input_size]`.\n          state:  a tuple of numpy arrays, both `2-D`, with column sizes `c_state` and\n            `m_state`.\n        Returns:\n          A tuple containing:\n          - A `2-D, [batch, output_dim]`, numpy array representing the output of the\n            LSTM after reading `inputs` when previous state was `state`.\n            Here output_dim is equal to num_units.\n          - Numpy array(s) representing the new state of LSTM after reading `inputs` when\n            the previous state was `state`.  
Same type and shape(s) as `state`.\n        \"\"\"\n        num_proj = self._num_units\n        (c_prev, m_prev) = state\n\n        input_size = inputs.shape[-1]\n\n        # i = input_gate, j = new_input, f = forget_gate, o = output_gate\n        lstm_matrix = np.hstack([inputs, m_prev]).dot(self._kernel)\n        lstm_matrix += self._bias\n\n        i, j, f, o = np.split(lstm_matrix, indices_or_sections=4, axis=0)\n        # Diagonal connections\n        c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *\n               self._activation(j))\n\n        m = sigmoid(o) * self._activation(c)\n\n        new_state = (c, m)\n        return m, new_state\n\nX = x_val.reshape(x_val.shape[1:])\n\ncell = LSTMCell(num_units, initializer=lambda shape: kernel_init)\ncell.build(X.shape)\n\nstate = (np.zeros(num_units), np.zeros(num_units))\nfor i in range(timesteps):\n    x = X[i,:]\n    output, state = cell.call(x, state)\n    print(output)\n
\n\n

Produces output:

\n\n
[-0.21386017 -0.08401277 -0.25431477]\n[-0.22243588 -0.25817422 -0.1612211 ]\n[-0.2282134  -0.14207162 -0.35017249]\n[-0.23286737 -0.17129192 -0.2706512 ]\n[-0.11768674 -0.20717363 -0.13339118]\n[-0.0599215  -0.17756104 -0.2028935 ]\n[ 0.11437953 -0.19484555  0.05371994]\n
\n\n

While your Tensorflow code, if you replace the second line with

\n\n
lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units, initializer = tf.constant_initializer(kernel_init))\n
\n\n

returns:

\n\n
[[-0.2138602  -0.08401276 -0.25431478]]\n[[-0.22243595 -0.25817424 -0.16122109]]\n[[-0.22821338 -0.1420716  -0.35017252]]\n[[-0.23286738 -0.1712919  -0.27065122]]\n[[-0.1176867  -0.2071736  -0.13339119]]\n[[-0.05992149 -0.177561   -0.2028935 ]]\n[[ 0.11437953 -0.19484554  0.05371996]]\n
\n", + "system": "" + }, + { + "instruction": "How to install Keras with gpu support?", + "input": "", + "output": "

Adding to the answer below, which is correct in recommending the Anaconda package manager but out of date in that there is now a keras-gpu package on Anaconda Cloud.

\n

So once you have Anaconda installed, you simply need to create a new environment where you want to install keras-gpu and execute the command:

\n

conda install -c anaconda keras-gpu

\n

This will install Keras along with both tensorflow and tensorflow-gpu libraries as the backend. (There is also no need to install separately the CUDA runtime and cudnn libraries as they are also included in the package - tested on Windows 10 and working).

\n", + "system": "" + }, + { + "instruction": "WARNING from Tensorflow when creating VGG16", + "input": "", + "output": "

It looks like there's an open git issue to clean this up in the keras code:

\n\n

https://github.com/tensorflow/minigo/issues/740

\n\n

You should be safe to ignore the warning, I don't believe you can change it without modifying the TF repo. You can disable warnings as mentioned here:

\n\n
tf.logging.set_verbosity(tf.logging.ERROR)\n
\n", + "system": "" + }, + { + "instruction": "CuDNNLSTM: Failed to call ThenRnnForward", + "input": "", + "output": "

Probably you are running out of memory on the GPU. Your network is very large, with 11 million trainable parameters. Do you really need a 512*2 output from your recurrent layer?

\n\n

Furthermore, your embedding_dim is quite large, while your vocabulary is quite small at 5k words. I suspect your network is too complex for your problem. I would suggest trying an embedding size of 32 and an LSTM size of 32 as a start. If your accuracy is still bad, you can increase the complexity.

\n\n
EMBEDDING_DIM = 32\nBidirectional(LSTM(32, return_sequences=False))(embedding)\n
\n", + "system": "" + }, + { + "instruction": "Resolving differences between Keras and scikit-learn for simple fully-connected neural network", + "input": "", + "output": "

Importing necessary libraries:

\n
import numpy as np\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.utils import to_categorical\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import accuracy_score\n\n# Load the Iris dataset\ndata = load_iris()\nX = data.data\ny = data.target\n\n# Train/test split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# Standardize the data\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n# Keras model\nmodel_keras = Sequential()\nmodel_keras.add(Dense(10, input_dim=4, activation='relu'))\nmodel_keras.add(Dense(10, activation='relu'))\nmodel_keras.add(Dense(3, activation='softmax'))\nmodel_keras.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nmodel_keras.fit(X_train_scaled, to_categorical(y_train), epochs=50, batch_size=5, verbose=0)\ny_pred_keras_classes = np.argmax(model_keras.predict(X_test_scaled), axis=1)\naccuracy_keras = accuracy_score(y_test, y_pred_keras_classes)\n\n# scikit-learn model\nmodel_sklearn = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=500, random_state=42)\nmodel_sklearn.fit(X_train_scaled, y_train)\naccuracy_sklearn = accuracy_score(y_test, model_sklearn.predict(X_test_scaled))\n\nprint('Keras model achieved an accuracy of', accuracy_keras)\nprint('scikit-learn model achieved an accuracy of', accuracy_sklearn)\n
\n", + "system": "" + }, + { + "instruction": "Does TensorFlow 1.9 support Python 3.7", + "input": "", + "output": "

I was able to install Tensorflow 1.12.0 with Python 3.7 on MacOS, with the following command.

\n\n
sudo python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl\n
\n", + "system": "" + }, + { + "instruction": "Keras model.fit() with tf.dataset API + validation_data", + "input": "", + "output": "

I solved the problem by using fit_generator. I found the solution here. I applied @Dat-Nguyen's solution.

\n\n

You simply need to create two iterators, one for training and one for validation, and then create your own generator that extracts batches from the dataset and yields the data in the form of (batch_data, batch_labels). Finally, pass train_generator and validation_generator to model.fit_generator.
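A minimal sketch of such a generator (the name batch_generator and the shapes are illustrative, not from any particular library): it shuffles the data each pass and yields (batch_data, batch_labels) tuples forever, which is what model.fit_generator expects.

```python
import numpy as np

def batch_generator(features, labels, batch_size=32):
    """Yield (batch_data, batch_labels) tuples indefinitely,
    reshuffling the data on each pass over the dataset."""
    n = len(features)
    while True:
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            yield features[idx], labels[idx]

X = np.zeros((100, 4))
y = np.zeros(100)
batch_x, batch_y = next(batch_generator(X, y, batch_size=32))
print(batch_x.shape, batch_y.shape)  # (32, 4) (32,)
```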

\n", + "system": "" + }, + { + "instruction": "Warning: Please use alternatives such as official/mnist/dataset.py from tensorflow/models", + "input": "", + "output": "

tensorflow.examples.tutorials is now deprecated and it is recommended to use tensorflow.keras.datasets as follows:

\n\n
import tensorflow as tf\nmnist = tf.keras.datasets.mnist\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n
\n\n

https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist/load_data

\n", + "system": "" + }, + { + "instruction": "How do I print inside the loss function during training in Keras?", + "input": "", + "output": "

The only thing you can do is avoid Python's print function and use, for example, TensorFlow's tf.Print function, which is part of the computational graph. The documentation says the operation acts as an identity, but each time it is evaluated it prints a message that you can specify.

\n\n

You just need to be careful to place it correctly in the graph, something like:

\n\n
def loss(y_true, y_pred):\n    d = y_true - y_pred\n    d = tf.Print(d, [d], \"Inside loss function\")\n    return tf.reduce_mean(tf.square(d))\n
\n\n

A better option to look inside what is going on internally is to use the tensorflow debugger.

\n", + "system": "" + }, + { + "instruction": "Keras Binary Classification - Sigmoid activation function", + "input": "", + "output": "

You can assign the threshold explicitly in compile() by using

\n
tf.keras.metrics.BinaryAccuracy(\n    name="binary_accuracy", dtype=None, threshold=0.5\n)\n
\n

like following:

\n
model.compile(optimizer='sgd',\n              loss='mse',\n              metrics=[tf.keras.metrics.BinaryAccuracy()])\n
\n", + "system": "" + }, + { + "instruction": "Generating MNIST numbers using LSTM-CGAN in TensorFlow", + "input": "", + "output": "

There are a few things you can do to improve your network architecture and training phase.

\n\n
    \n
  1. Remove the tf.nn.sigmoid(logit) from both the generator and discriminator. Return just the pred.
  2. \n
  3. Use a numerically stable function to calculate your loss functions and fix the loss functions:

    \n\n

    D_loss = -tf.reduce_mean(tf.log(D_real) + tf.log(1. - D_fake))\nG_loss = -tf.reduce_mean(tf.log(D_fake))

  4. \n
\n\n

should be:

\n\n
D_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_real,\n              labels=tf.ones_like(D_real))\nD_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_fake,\n              labels=tf.zeros_like(D_fake))\n\nD_loss = -tf.reduce_mean(D_loss_real + D_loss_fake)\nG_loss = -tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n              logits=D_real,\n              labels=tf.ones_like(D_real)))\n
\n\n

    Once you have fixed the losses and switched to a numerically stable formulation, things should go better. Also, as a rule of thumb, if there's too much noise in the loss, reduce the learning rate (the default lr of ADAM is usually too high when training GANs).\nHope it helps
    

\n", + "system": "" + }, + { + "instruction": "Parallelism isn't reducing the time in dataset map", + "input": "", + "output": "

The problem here is that the only operation in the Dataset.map() function is a tf.py_func() op. This op calls back into the local Python interpreter to run a function in the same process. Increasing num_parallel_calls will increase the number of TensorFlow threads that attempt to call back into Python concurrently. However, Python has something called the \"Global Interpreter Lock\" that prevents more than one thread from executing code at once. As a result, all but one of these multiple parallel calls will be blocked waiting to acquire the Global Interpreter Lock, and there will be almost no parallel speedup (and perhaps even a slight slowdown).

\n\n

Your code example didn't include the definition of the squarer() function, but it might be possible to replace tf.py_func() with pure TensorFlow ops, which are implemented in C++, and can execute in parallel. For example—and just guessing by the name—you could replace it with an invocation of tf.square(x), and you might then enjoy some parallel speedup.

\n\n

Note however that if the amount of work in the function is small, like squaring a single integer, the speedup might not be very large. Parallel Dataset.map() is more useful for heavier operations, like parsing a TFRecord with tf.parse_single_example() or performing some image distortions as part of a data augmentation pipeline.

\n", + "system": "" + }, + { + "instruction": "Parallel threads with TensorFlow Dataset API and flat_map", + "input": "", + "output": "

To the best of my knowledge, at the moment flat_map does not offer parallelism options.\nGiven that the bulk of the computation is done in pre_processing_func, what you might use as a workaround is a parallel map call followed by some buffering, and then using a flat_map call with an identity lambda function that takes care of flattening the output.

\n\n\n\n

In code:

\n\n
NUM_THREADS = 5\nBUFFER_SIZE = 1000\n\ndef pre_processing_func(data_):\n    # data-augmentation here\n    # generate new samples starting from the sample `data_`\n    artificial_samples = generate_from_sample(data_)\n    return atificial_samples\n\ndataset_source = (tf.data.Dataset.from_tensor_slices(input_tensors).\n                  map(pre_processing_func, num_parallel_calls=NUM_THREADS).\n                  prefetch(BUFFER_SIZE).\n                  flat_map(lambda *x : tf.data.Dataset.from_tensor_slices(x)).\n                  shuffle(BUFFER_SIZE)) # my addition, probably necessary though\n
\n\n

Note (to myself and whoever will try to understand the pipeline):

\n\n

Since pre_processing_func generates an arbitrary number of new samples starting from the initial sample (organised in matrices of shape (?, 512)), the flat_map call is necessary to turn all the generated matrices into Datasets containing single samples (hence the tf.data.Dataset.from_tensor_slices(x) in the lambda) and then flatten all these datasets into one big Dataset containing individual samples.

\n\n

It's probably a good idea to .shuffle() that dataset, or generated samples will be packed together.

\n", + "system": "" + }, + { + "instruction": "How to implement Tensorflow batch normalization in LSTM", + "input": "", + "output": "

If you want to use batch norm for RNN (LSTM or GRU), you can check out this implementation , or read the full description from blog post.

\n\n

    However, layer normalization has advantages over batch norm for sequence data. Specifically, "the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent networks" (from the paper Ba, et al. <em>Layer normalization</em>).
    

\n\n

For layer normalization, it normalizes the summed inputs within each layer. You can check out the implementation of layer-normalization for GRU cell:

\n", + "system": "" + }, + { + "instruction": "TensorFlow: How to handle void labeled data in image segmentation?", + "input": "", + "output": "

I'm not 100% familiar with TF. However, have you considered using the weights parameter of the loss?
    \nLooking at <code>tf.losses.sparse_softmax_cross_entropy</code>, it has a parameter <code>weights</code>
    

\n\n
\n

weights: Coefficients for the loss. This must be scalar or of same rank as labels

\n
\n\n

    You can set the <code>weight</code> of \"void\" pixels to zero, thus making the loss ignore them.
    

\n\n

You can also remove the reduction from tf.nn.sparse_softmax_cross_entropy_with_logits and use tf.losses.compute_weighted_loss to perform the weighting.

\n", + "system": "" + }, + { + "instruction": "How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape", + "input": "", + "output": "

    I managed to solve the issue by using <code>tf.dynamic_partition</code> instead of <code>tf.gather</code>. I replaced the above code like this:
    

\n\n
# Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth\nall_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)\n# Indices to last element of each sequence.\n# Index to first element is the sequence order number times max sequence length.\n# Index to last element is the index to first element plus sequence length.\nrow_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)\n# Creating a vector of 0s and 1s that will specify what timesteps to choose.\npartitions = tf.reduce_sum(tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0)\n# Selecting the elements we want to choose.\nlast_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2)  # (batch_size, n_dim)\nlast_timesteps = last_timesteps[1]\n
\n", + "system": "" + }, + { + "instruction": "why tensorflow just outputs killed", + "input": "", + "output": "

When I run your code I get the same behavior, after typing dmesg you'll see a trace like, which confirms what gdelab was hinting at:

\n\n
[38607.234089] python3 invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_score_adj=0\n[38607.234090] python3 cpuset=/ mems_allowed=0\n[38607.234094] CPU: 3 PID: 1420 Comm: python3 Tainted: G           O    4.9.0-3-amd64 #1 Debian 4.9.30-2+deb9u2\n[38607.234094] Hardware name: Dell Inc. XPS 15 9560/05FFDN, BIOS 1.2.4 03/29/2017\n[38607.234096]  0000000000000000 ffffffffa9f28414 ffffa50090317cf8 ffff940effa5f040\n[38607.234097]  ffffffffa9dfe050 0000000000000000 0000000000000000 0101ffffa9d82dd0\n[38607.234098]  e09c7db7f06d0ac2 00000000ffffffff 0000000000000000 0000000000000000\n[38607.234100] Call Trace:\n[38607.234104]  [<ffffffffa9f28414>] ? dump_stack+0x5c/0x78\n[38607.234106]  [<ffffffffa9dfe050>] ? dump_header+0x78/0x1fd\n[38607.234108]  [<ffffffffa9d8047a>] ? oom_kill_process+0x21a/0x3e0\n[38607.234109]  [<ffffffffa9d800fd>] ? oom_badness+0xed/0x170\n[38607.234110]  [<ffffffffa9d80911>] ? out_of_memory+0x111/0x470\n[38607.234111]  [<ffffffffa9d85b4f>] ? __alloc_pages_slowpath+0xb7f/0xbc0\n[38607.234112]  [<ffffffffa9d85d8e>] ? __alloc_pages_nodemask+0x1fe/0x260\n[38607.234113]  [<ffffffffa9dd7c3e>] ? alloc_pages_vma+0xae/0x260\n[38607.234115]  [<ffffffffa9db39ba>] ? handle_mm_fault+0x111a/0x1350\n[38607.234117]  [<ffffffffa9c5fd84>] ? __do_page_fault+0x2a4/0x510\n[38607.234118]  [<ffffffffaa207658>] ? page_fault+0x28/0x30\n...\n[38607.234158] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name\n...\n[38607.234332] [ 1396]  1000  1396  4810969  3464995    6959      21        0             0 python3\n[38607.234332] Out of memory: Kill process 1396 (python3) score 568 or sacrifice child\n[38607.234357] Killed process 1396 (python3) total-vm:19243876kB, anon-rss:13859980kB, file-rss:0kB, shmem-rss:0kB\n[38607.720757] oom_reaper: reaped process 1396 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB\n
\n\n

    Which basically means Python started to consume too much memory and the kernel decided to kill the process. If you add some prints in your code you'll see that <code>mnist_classifier.train()</code> is the function which is active. However, some quick tests (such as removing the logging and lowering the number of steps) did not seem to help here.
    

\n", + "system": "" + }, + { + "instruction": "how to load and use a saved model on tensorflow?", + "input": "", + "output": "

What was missing was the signature

\n\n
# Saving\nbuilder = tf.saved_model.builder.SavedModelBuilder(export_dir)\nbuilder.add_meta_graph_and_variables(sess, [\"tag\"], signature_def_map= {\n        \"model\": tf.saved_model.signature_def_utils.predict_signature_def(\n            inputs= {\"x\": x},\n            outputs= {\"finalnode\": model})\n        })\nbuilder.save()\n\n# loading\nwith tf.Session(graph=tf.Graph()) as sess:\n    tf.saved_model.loader.load(sess, [\"tag\"], export_dir)\n    graph = tf.get_default_graph()\n    x = graph.get_tensor_by_name(\"x:0\")\n    model = graph.get_tensor_by_name(\"finalnode:0\")\n    print(sess.run(model, {x: [5, 6, 7, 8]}))\n
\n", + "system": "" + }, + { + "instruction": "Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'", + "input": "", + "output": "

By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is \"channels-last\". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format = 'channels_first' when declaring the Convolution2D layer.

\n\n
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))\n
\n", + "system": "" + }, + { + "instruction": "CNN Image Recognition with Regression Output on Tensorflow", + "input": "", + "output": "

Check out the Udacity self-driving-car models which take an input image from a dash cam and predict a steering angle (i.e. continuous scalar) to stay on the road...usually using a regression output after one or more fully connected layers on top of the CNN layers.

\n

https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models

\n

Here is a typical model:

\n

https://github.com/udacity/self-driving-car/tree/master/steering-models/community-models/autumn

\n

...it uses tf.atan() or you can use tf.tanh() or just linear to get your final output y.

\n

Use MSE for your loss function.

\n

Here is another example in keras...

\n
model = models.Sequential()\nmodel.add(convolutional.Convolution2D(16, 3, 3, input_shape=(32, 128, 3), activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(convolutional.Convolution2D(32, 3, 3, activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(convolutional.Convolution2D(64, 3, 3, activation='relu'))\nmodel.add(pooling.MaxPooling2D(pool_size=(2, 2)))\nmodel.add(core.Flatten())\nmodel.add(core.Dense(500, activation='relu'))\nmodel.add(core.Dropout(.5))\nmodel.add(core.Dense(100, activation='relu'))\nmodel.add(core.Dropout(.25))\nmodel.add(core.Dense(20, activation='relu'))\nmodel.add(core.Dense(1))\nmodel.compile(optimizer=optimizers.Adam(lr=1e-04), loss='mean_squared_error')\n
\n

They key difference from the MNIST examples is that instead of funneling down to a N-dim vector of logits into softmax w/ cross entropy loss, for your regression output you take it down to a 1-dim vector w/ MSE loss. (you can also have a mix of multiple classification and regression outputs in the final layer...like in YOLO object detection)

\n", + "system": "" + }, + { + "instruction": "How to get weights in tf.layers.dense?", + "input": "", + "output": "

The weights are added as a variable named kernel, so you could use

\n\n
x = tf.dense(...)\nweights = tf.get_default_graph().get_tensor_by_name(\n  os.path.split(x.name)[0] + '/kernel:0')\n
\n\n

You can obviously replace tf.get_default_graph() by any other graph you are working in.  

\n", + "system": "" + }, + { + "instruction": "libcublas.so.8.0 error with tensorflow", + "input": "", + "output": "

You need to install Cuda 8.0 and configure the environment as below:

\n\n
export PATH=\"$PATH:/usr/local/cuda-8.0/bin\"\nexport LD_LIBRARY_PATH=\"/usr/local/cuda-8.0/lib64\"\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow TypeError: Value passed to parameter input has DataType uint8 not in list of allowed values: float16, float32", + "input": "", + "output": "

The image from your input pipeline is of type 'uint8', you need to type cast it to 'float32', You can do this after the image jpeg decoder:

\n\n
image = tf.image.decode_jpeg(...\nimage = tf.cast(image, tf.float32)\n
\n", + "system": "" + }, + { + "instruction": "Replace Validation Monitors with tf.train.SessionRunHook when using Estimators", + "input": "", + "output": "

There's an undocumented utility called monitors.replace_monitors_with_hooks() which converts monitors to hooks. The method accepts (i) a list which may contain both monitors and hooks and (ii) the Estimator for which the hooks will be used, and then returns a list of hooks by wrapping a SessionRunHook around each Monitor.

\n\n
from tensorflow.contrib.learn.python.learn import monitors as monitor_lib\n\nclf = tf.estimator.Estimator(...)\n\nlist_of_monitors_and_hooks = [tf.contrib.learn.monitors.ValidationMonitor(...)]\nhooks = monitor_lib.replace_monitors_with_hooks(list_of_monitors_and_hooks, clf)\n
\n\n

This isn't really a true solution to the problem of fully replacing the ValidationMonitor—we're just wrapping it up with a non-deprecated function instead. However, I can say this has worked for me so far in that it maintained all the functionality I need from the ValidationMonitor (i.e. evaluating every n steps, early stopping using a metric, etc.)

\n\n

One more thing—to use this hook you'll need to update from a tf.contrib.learn.Estimator (which only accepts monitors) to the more full-fledged and official tf.estimator.Estimator (which only accepts hooks). So, you should instantiate your classifier as a tf.estimator.DNNClassifier, and train using its method train() instead (which is just a re-naming of fit()):

\n\n
clf = tf.estimator.Estimator(...)\n\n...\n\nclf.train(\n    input_fn=...\n    ...\n    hooks=hooks)\n
\n", + "system": "" + }, + { + "instruction": "No Module Named '_pywrap_tensorflow_internal'", + "input": "", + "output": "

I came across the same issue today, please switch to cuDNN v5.1 Library for Windows instead as @mickdelaney suggested and then try to

\n\n
    \n
  1. Check environment settings of CUDA, normally all the settings of CUDA had been added to Windows environment

  2. \n
  3. Copy files in bin, lib and include of cuDNN to bin, lib and include of CUDA respectively. Normally the directory is C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA

  4. \n
\n\n

And then you can import tensorflow and run your code. Good luck!

\n", + "system": "" + }, + { + "instruction": "Keras - is it possible to view the weights and biases of models in Tensorboard", + "input": "", + "output": "

You can get the weights and biases per layer and for the entire model with .get_weights().

\n\n

For example if the first layer of your model is the dense layer for which you would like to have your weights and biases, you can get them with:

\n\n
weights, biases = model.layers[0].get_weights()\n
\n", + "system": "" + }, + { + "instruction": "Looping over a tensor", + "input": "", + "output": "

To loop over a tensor you could try tf.unstack

\n\n
\n

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

\n
\n\n

So adding 1 to each tensor would look something like:

\n\n
import tensorflow as tf\nx = tf.placeholder(tf.float32, shape=(None, 10))\nx_unpacked = tf.unstack(x) # defaults to axis 0, returns a list of tensors\n\nprocessed = [] # this will be the list of processed tensors\nfor t in x_unpacked:\n    # do whatever\n    result_tensor = t + 1\n    processed.append(result_tensor)\n\noutput = tf.concat(processed, 0)\n\nwith tf.Session() as sess:\n    print(sess.run([output], feed_dict={x: np.zeros((5, 10))}))\n
\n\n

Obviously you can further unpack each tensor from the list to process it, down to single elements. To avoid lots of nested unpacking though, you could maybe try flattening x with tf.reshape(x, [-1]) first, and then loop over it like

\n\n
flattened_unpacked = tf.unstack(tf.reshape(x, [-1])\nfor elem in flattened_unpacked:\n    process(elem)\n
\n\n

In this case elem is a scalar.

\n", + "system": "" + }, + { + "instruction": "Keras error "You must feed a value for placeholder tensor 'bidirectional_1/keras_learning_phase' with dtype bool"", + "input": "", + "output": "

Try to import K and set learning phase before your model.

\n\n
from keras import backend as K\n\nK.set_learning_phase(1) #set learning phase\n
\n\n

From this issue

\n", + "system": "" + }, + { + "instruction": "Tensorflow equivalent to numpy.diff", + "input": "", + "output": "

Try this:

\n\n
def tf_diff_axis_0(a):\n    return a[1:]-a[:-1]\n\ndef tf_diff_axis_1(a):\n    return a[:,1:]-a[:,:-1]\n
\n\n

To check:

\n\n
import numpy as np\nimport tensorflow as tf\n\nx0=np.arange(5)+np.zeros((5,5))\nsess = tf.Session()\nnp.diff(x0, axis=0) == sess.run(tf_diff_axis_0(tf.constant(x0)))\nnp.diff(x0, axis=1) == sess.run(tf_diff_axis_1(tf.constant(x0)))\n
\n", + "system": "" + }, + { + "instruction": "How to set the input of a Keras layer with a Tensorflow tensor?", + "input": "", + "output": "

    After you are done with pre-processing, you can add the tensor as an input layer by passing it to the <code>tensor</code> param of <code>Input</code>.
    

\n\n

So in your case:

\n\n
tf_embedding_input = ...    # pre-processing output tensor\n\n# Keras model\nmodel = Sequential()\nmodel.add(Input(tensor=tf_embedding_input)) \nmodel.add(Embedding(max_features, 128, input_length=maxlen))\n
\n", + "system": "" + }, + { + "instruction": "TensorBoard Distributions and Histograms with Keras and fit_generator", + "input": "", + "output": "

There is no easy way to just plug it in with one line of code, you have to write your summaries by hand.

\n

The good news is that it's not difficult and you can use the TensorBoard callback code in Keras as a reference.\n(There is also a version 2 ready for TensorFlow 2.x.)

\n

Basically, write a function e.g. write_summaries(model) and call it whenever you want to write your summaries (e.g. just after your fit_generator())

\n

Inside your write_summaries(model) function use tf.summary, histogram_summary and other summary functions to log data you want to see on tensorboard.

\n

    If you don't know exactly how, check the official tutorial\nand this great example of MNIST with summaries.
    

\n", + "system": "" + }, + { + "instruction": "Loss suddenly increases with Adam Optimizer in Tensorflow", + "input": "", + "output": "

    My experience over the last months is the following:\nAdam is very easy to use because you don't have to play with the initial learning rate very much, and it almost always works. However, when coming to convergence Adam does not really settle on a solution but jiggles around at higher iterations, while SGD gives an almost perfectly shaped loss plot and seems to converge much better in higher iterations. But changing little parts of the setup requires adjusting the SGD parameters, or you will end up with NaNs... For experiments on architectures and general approaches I favor Adam, but if you want to get the best version of one chosen architecture you should use SGD and at least compare the solutions.
    

\n\n

    I also noticed that a good initial SGD setup (learning rate, weight decay etc.) converges as fast as using Adam, at least for my setup.\nHope this may help some of you!
    

\n\n

EDIT: Please note that the effects in my initial question are NOT normal even with Adam. Seems like I had a bug but I can't really remember the issue there.

\n", + "system": "" + }, + { + "instruction": "How do you make TensorFlow + Keras fast with a TFRecord dataset?", + "input": "", + "output": "

I don't use tfrecord dataset format so won't argue on the pros and cons, but I got interested in extending Keras to support the same.

\n\n

github.com/indraforyou/keras_tfrecord is the repository. Will briefly explain the main changes.

\n\n
\n

Dataset creation and loading

\n
\n\n

    <code>data_to_tfrecord</code> and <code>read_and_decode</code> <code>here</code> take care of creating the tfrecord dataset and loading it. Special care must be taken to implement <code>read_and_decode</code>, otherwise you will face cryptic errors during training.
    

\n\n
\n

Initialization and Keras model

\n
\n\n

    Now both <code>tf.train.shuffle_batch</code> and the Keras <code>Input</code> layer return tensors. But the one returned by <code>tf.train.shuffle_batch</code> doesn't have the metadata needed by Keras internally. As it turns out, any tensor can be easily turned into a tensor with Keras metadata by calling the <code>Input</code> layer with the <code>tensor</code> param.
    

\n\n

So this takes care of initialization:

\n\n
x_train_, y_train_ = ktfr.read_and_decode('train.mnist.tfrecord', one_hot=True, n_class=nb_classes, is_train=True)\n\nx_train_batch, y_train_batch = K.tf.train.shuffle_batch([x_train_, y_train_],\n                                                batch_size=batch_size,\n                                                capacity=2000,\n                                                min_after_dequeue=1000,\n                                                num_threads=32) # set the number of threads here\n\nx_train_inp = Input(tensor=x_train_batch)\n
\n\n

Now with x_train_inp any keras model can be developed.

\n\n
\n

Training (simple)

\n
\n\n

Lets say train_out is the output tensor of your keras model. You can easily write a custom training loop on the lines of:

\n\n
loss = tf.reduce_mean(categorical_crossentropy(y_train_batch, train_out))\ntrain_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)\n\n\n# sess.run(tf.global_variables_initializer())\nsess.run(tf.initialize_all_variables())\n\nwith sess.as_default():\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(sess=sess, coord=coord)\n\n    try:\n      step = 0\n      while not coord.should_stop():\n        start_time = time.time()\n\n        _, loss_value = sess.run([train_op, loss], feed_dict={K.learning_phase(): 0})\n\n        duration = time.time() - start_time\n\n        if step % 100 == 0:\n          print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value,\n                                                     duration))\n        step += 1\n    except tf.errors.OutOfRangeError:\n      print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))\n    finally:\n      coord.request_stop()\n\n    coord.join(threads)\n    sess.close()\n
\n\n
\n

Training (keras style)

\n
\n\n

One of the features of keras that makes it so lucrative is its generalized training mechanism with the callback functions.

\n\n

    But to support tfrecords-type training, several changes are needed in the <code>fit</code> function
    

\n\n\n\n

    But all this can be easily supported by another flag parameter. What makes things messy are the Keras features <code>sample_weight</code> and <code>class_weight</code>: they are used to weigh each sample and each class. For this, in <code>compile()</code> Keras creates placeholders (here), and placeholders are also implicitly created for the targets (here), which are not needed in our case, as the labels are already fed in by the tfrecord readers. These placeholders need to be fed in during the session run, which is unnecessary in our case.
    

\n\n

So taking into account these changes, compile_tfrecord(here) and fit_tfrecord(here) are the extension of compile and fit and shares say 95% of the code.

\n\n

They can be used in the following way:

\n\n
import keras_tfrecord as ktfr\n\ntrain_model = Model(input=x_train_inp, output=train_out)\nktfr.compile_tfrecord(train_model, optimizer='rmsprop', loss='categorical_crossentropy', out_tensor_lst=[y_train_batch], metrics=['accuracy'])\n\ntrain_model.summary()\n\nktfr.fit_tfrecord(train_model, X_train.shape[0], batch_size, nb_epoch=3)\ntrain_model.save_weights('saved_wt.h5')\n
\n\n

You are welcome to improve on the code and pull requests.

\n", + "system": "" + }, + { + "instruction": "How does data normalization work in keras during prediction?", + "input": "", + "output": "

    Yes, this is a really big downside of <code>Keras.ImageDataGenerator</code>: you cannot provide the standardization statistics on your own. But there is an easy way to overcome this issue.
    

\n\n

    Assuming that you have a function <code>normalize(x)</code> which normalizes an image batch (remember that the generator provides not a single image but an array of images, i.e. a batch with shape <code>(nr_of_examples_in_batch, image_dims ..)</code>), you could make your own generator with normalization by using:
    

\n\n
def gen_with_norm(gen, normalize):\n    for x, y in gen:\n        yield normalize(x), y\n
\n\n

    Then you might simply use <code>gen_with_norm(datagen.flow(data, labels, batch_size), normalize)</code> instead of <code>datagen.flow(data, labels, batch_size)</code>.
    

\n\n

    Moreover, you might recover the mean and std computed by the <code>fit</code> method by getting them from the appropriate fields of <code>datagen</code> (e.g. <code>datagen.mean</code> and <code>datagen.std</code>).
    

\n", + "system": "" + }, + { + "instruction": "Why was Eigen chosen for TensorFlow?", + "input": "", + "output": "

I think that one of the key feature that drove the use of Eigen in the first place is because Eigen features its own highly optimized matrix product kernels whereas all other competitors have to be linked to some BLAS libraries. Moreover, the code of Eigen's product kernel is C++ with easy access to low-level internal kernels, so it was 'easy' for them to tweak and extend it to match their needs. This way Google has been able to develop the Tensor module with high CPU performance in a pure header-only fashion. The support for CUDA and now OpenCL via SyCL came later, those are not intrinsic features of Eigen that drove the initial choice.

\n", + "system": "" + }, + { + "instruction": "Scipy sparse CSR matrix to TensorFlow SparseTensor - Mini-Batch gradient descent", + "input": "", + "output": "

I can answer the first part of your question.

\n\n
def convert_sparse_matrix_to_sparse_tensor(X):\n    coo = X.tocoo()\n    indices = np.mat([coo.row, coo.col]).transpose()\n    return tf.SparseTensor(indices, coo.data, coo.shape)\n
\n\n

First you convert the matrix to COO format. Then you extract the indices, values, and shape and pass those directly to the SparseTensor constructor.

\n", + "system": "" + }, + { + "instruction": "EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set", + "input": "", + "output": "

This is because for some reason you have an incomplete download for the MNIST dataset.

\n\n

You will have to manually delete the downloaded folder which usually resides in ~/.keras/datasets or any path specified by you relative to this path, in your case MNIST_data.

\n\n

Perform the following steps in the terminal (ctrl + alt + t):

\n\n
    \n
  1. cd ~/.keras/datasets/
  2. \n
  3. rm -rf \"dataset name\"
  4. \n
\n\n

You should be good to go!

\n", + "system": "" + }, + { + "instruction": "What's the difference between Variable and ResourceVariable in Tensorflow", + "input": "", + "output": "

ResourceVariable is the replacement for Variable, that aims to clean up some of the messier aspects of the semantics of Variable.

\n

ResourceVariable is the default in TF 2.0 and you very likely don't care about the differences between the two unless you are working on details deep inside the Tensorflow implementation. When eager execution is enabled tf.Variable also creates resource variables.

\n

So just use tf.Variable for now, it's almost certainly what you want; if you experience issues which look like race conditions or bugs from inconsistent values of variables in code you can try enabling resource variables (by either passing use_resource=True to your variable-creating code or calling tf.enable_resource_variables() in TF 1.x).

\n", + "system": "" + }, + { + "instruction": "Tensorflow. Converting unknown dimension size of a tensor to int", + "input": "", + "output": "

You have to use a Graph operation:

\n\n
a = tf.placeholder(tf.float32, shape=(None, 3072))\nb = tf.shape(a)[0]\n
\n\n

returns

\n\n
<tf.Tensor 'strided_slice:0' shape=() dtype=int32>\n
\n\n

while b = a.get_shape()[0]\nreturns

\n\n
Dimension(None)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: why 'pip uninstall tensorflow' cannot find tensorflow", + "input": "", + "output": "

It could be because you didn't install Tensorflow using pip, but using python setup.py develop instead as your link shows.

\n\n

pip uninstall is likely to fail if the package is installed using python setup.py install as they do not leave behind metadata to determine what files were installed.

\n\n

    Therefore, you should be able to uninstall Tensorflow with the <code>-u</code> or <code>--uninstall</code> option of <code>develop</code>
    

\n\n
cd /home/AIJ/tensorflow/_python_build\npython setup.py develop --uninstall\n
\n\n

    To answer the second (interesting) question about the two dist-packages created under <code>/usr/lib/python2.7</code> and <code>/usr/local/lib/python2.7</code>, there already exists a great Stack Overflow answer on the topic.
    

\n\n

    PS: Tensorflow is a good library, you should consider not uninstalling it :)
    

\n", + "system": "" + }, + { + "instruction": "Configuring Tensorflow to use all CPU's", + "input": "", + "output": "

CPUs are used via a \"device\" which is just a threadpool. You can control the number of threads if you feel like you need more:

\n\n
sess = tf.Session(config=tf.ConfigProto(\n  intra_op_parallelism_threads=NUM_THREADS))\n
\n", + "system": "" + }, + { + "instruction": "what does x = tf.placeholder(tf.float32, [None, 784]) means?", + "input": "", + "output": "

From the tutorial: Deep MNIST for Experts\n

\n\n
\n

Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size.

\n
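A quick numpy sketch of where the 784 comes from and what shapes the placeholder will accept:

```python
import numpy as np

# One 28x28 MNIST image flattened into a 784-dimensional vector.
image = np.zeros((28, 28), dtype=np.float32)
flat = image.reshape(-1)

# A batch of any size then has shape (batch_size, 784),
# which is exactly what the [None, 784] placeholder accepts.
batch = np.stack([flat] * 32)
print(flat.shape, batch.shape)  # (784,) (32, 784)
```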
\n", + "system": "" + }, + { + "instruction": "TensorFlow strings: what they are and how to work with them", + "input": "", + "output": "

Unlike Python, where a string can be treated as a list of characters for the purposes of slicing and such, TensorFlow's tf.strings are indivisible values. For instance, x below is a Tensor with shape (2,), each element of which is a variable-length string.

\n\n
x = tf.constant([\"This is a string\", \"This is another string\"])\n
\n\n

However, to achieve what you want, TensorFlow provides the tf.decode_raw operator. It takes a tf.string tensor as input, and can decode the string into any other primitive data type. For instance, to interpret the string as a tensor of bytes (character codes), you can do the following:

\n\n
x = tf.constant(\"This is string\")\nx = tf.decode_raw(x, tf.uint8)\ny = x[:4]\nsess = tf.InteractiveSession()\nprint(y.eval())\n# prints [ 84 104 105 115]\n
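The printed numbers are simply the ASCII byte values of the characters; you can verify them in plain Python without TensorFlow:

```python
# tf.decode_raw with tf.uint8 yields the raw byte values of the string.
# 'T', 'h', 'i', 's' have ASCII codes 84, 104, 105, 115.
raw = list('This is string'.encode('ascii'))
print(raw[:4])  # [84, 104, 105, 115]
```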
\n", + "system": "" + }, + { + "instruction": "What is tf.nn.max_pool's ksize parameter used for?", + "input": "", + "output": "

The documentation states:

\n\n
\n

ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.

\n
\n\n

In general for images, your input is of shape [batch_size, 64, 64, 3] for an RGB image of 64x64 pixels.

\n\n

The kernel size ksize will typically be [1, 2, 2, 1] if you have a 2x2 window over which you take the maximum. On the batch size dimension and the channels dimension, ksize is 1 because we don't want to take the maximum over multiple examples or over multiple channels.

\n", + "system": "" + }, + { + "instruction": "What is a good explanation of how to read the histogram feature of TensorBoard?", + "input": "", + "output": "

The lines they are talking about are the percentile lines drawn over the histogram (the original answer marked them in a screenshot).

\n\n

As for the meaning of percentile, check out the Wikipedia article.\nBasically, the 93rd percentile means that 93% of the values are situated below the 93rd percentile line.
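As a concrete sanity check of that definition, take 100 sorted values: the value at the 93rd position has 93% of the data below it (real percentile definitions interpolate between positions, but the idea is the same):

```python
# 100 values, 0..99, already sorted.
values = sorted(range(100))

line = values[93]                      # a simple '93rd percentile line'
below = sum(v < line for v in values)  # how many values fall below it

print(line, below)  # 93 93 -> 93 of the 100 values lie below the line
```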

\n", + "system": "" + }, + { + "instruction": "Keras : How should I prepare input data for RNN?", + "input": "", + "output": "

If you only want to predict the output using the most recent 5 inputs, there is no need to ever provide the full 600 time steps of any training sample. My suggestion would be to pass the training data in the following manner:

\n\n
             t=0  t=1  t=2  t=3  t=4  t=5  ...  t=598  t=599\nsample0      |---------------------|\nsample0           |---------------------|\nsample0                |-----------------\n...\nsample0                                         ----|\nsample0                                         ----------|\nsample1      |---------------------|\nsample1           |---------------------|\nsample1                |-----------------\n....\n....\nsample6751                                      ----|\nsample6751                                      ----------|\n
\n\n

The total number of training sequences will sum up to

\n\n
(600 - 4) * 6752 = 4024192    # (nb_timesteps - discarded_tailing_timesteps) * nb_samples\n
\n\n

Each training sequence consists of 5 time steps. At each time step of every sequence you pass all 13 elements of the feature vector. Consequently, the shape of the training data will be (4024192, 5, 13).

\n\n

This loop can reshape your data:

\n\n\n\n
input = np.random.rand(6752,600,13)\nnb_timesteps = 5\n\nflag = 0\n\nfor sample in range(input.shape[0]):\n    tmp = np.array([input[sample,i:i+nb_timesteps,:] for i in range(input.shape[1] - nb_timesteps + 1)])\n\n    if flag==0:\n        new_input = tmp\n        flag = 1\n\n    else:\n        new_input = np.concatenate((new_input,tmp))\n
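On a tiny input you can check that this loop produces nb_samples * (total_timesteps - nb_timesteps + 1) windows, the same arithmetic as the (600 - 4) * 6752 count above (a simplified rewrite, not the exact loop):

```python
import numpy as np

# Tiny stand-in for the (6752, 600, 13) array: 2 samples, 8 timesteps, 3 features.
data = np.random.rand(2, 8, 3)
nb_timesteps = 5

windows = []
for sample in range(data.shape[0]):
    for i in range(data.shape[1] - nb_timesteps + 1):
        windows.append(data[sample, i:i + nb_timesteps, :])
new_input = np.stack(windows)

# (8 - 5 + 1) * 2 = 8 sequences of 5 timesteps and 3 features each.
print(new_input.shape)  # (8, 5, 3)
```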
\n", + "system": "" + }, + { + "instruction": "How to train a RNN with LSTM cells for time series prediction", + "input": "", + "output": "

I'm just starting to learn LSTMs in TensorFlow myself, and tried to implement an example which (luckily) tries to predict some time series / number series generated by a simple math function.

\n\n

But I'm using a different way to structure the data for training, motivated by Unsupervised Learning of Video Representations using LSTMs:

\n\n

LSTM Future Predictor Model

\n\n

Option 5:

\n\n
input data               label     \n1,2,3,4                  5,6,7,8\n2,3,4,5                  6,7,8,9\n3,4,5,6                  7,8,9,10\n...\n
\n\n

Besides this paper, I tried to take inspiration from the given TensorFlow RNN examples. My current complete solution looks like this:

\n\n
import math\nimport random\nimport numpy as np\nimport tensorflow as tf\n\nLSTM_SIZE = 64\nLSTM_LAYERS = 2\nBATCH_SIZE = 16\nNUM_T_STEPS = 4\nMAX_STEPS = 1000\nLAMBDA_REG = 5e-4\n\n\ndef ground_truth_func(i, j, t):\n    return i * math.pow(t, 2) + j\n\n\ndef get_batch(batch_size):\n    seq = np.zeros([batch_size, NUM_T_STEPS, 1], dtype=np.float32)\n    tgt = np.zeros([batch_size, NUM_T_STEPS], dtype=np.float32)\n\n    for b in xrange(batch_size):\n        i = float(random.randint(-25, 25))\n        j = float(random.randint(-100, 100))\n        for t in xrange(NUM_T_STEPS):\n            value = ground_truth_func(i, j, t)\n            seq[b, t, 0] = value\n\n        for t in xrange(NUM_T_STEPS):\n            tgt[b, t] = ground_truth_func(i, j, t + NUM_T_STEPS)\n    return seq, tgt\n\n\n# Placeholder for the inputs in a given iteration\nsequence = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS, 1])\ntarget = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS])\n\nfc1_weight = tf.get_variable('w1', [LSTM_SIZE, 1], initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0))\nfc1_bias = tf.get_variable('b1', [1], initializer=tf.constant_initializer(0.1))\n\n# ENCODER\nwith tf.variable_scope('ENC_LSTM'):\n    lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)\n    multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)\n    initial_state = multi_lstm.zero_state(BATCH_SIZE, tf.float32)\n    state = initial_state\n    for t_step in xrange(NUM_T_STEPS):\n        if t_step > 0:\n            tf.get_variable_scope().reuse_variables()\n\n        # state value is updated after processing each batch of sequences\n        output, state = multi_lstm(sequence[:, t_step, :], state)\n\nlearned_representation = state\n\n# DECODER\nwith tf.variable_scope('DEC_LSTM'):\n    lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE)\n    multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS)\n    state = learned_representation\n    logits_stacked = None\n    loss = 0.0\n    for t_step in 
xrange(NUM_T_STEPS):\n        if t_step > 0:\n            tf.get_variable_scope().reuse_variables()\n\n        # state value is updated after processing each batch of sequences\n        output, state = multi_lstm(sequence[:, t_step, :], state)\n        # output can be used to make next number prediction\n        logits = tf.matmul(output, fc1_weight) + fc1_bias\n\n        if logits_stacked is None:\n            logits_stacked = logits\n        else:\n            logits_stacked = tf.concat(1, [logits_stacked, logits])\n\n        loss += tf.reduce_sum(tf.square(logits - target[:, t_step])) / BATCH_SIZE\n\nreg_loss = loss + LAMBDA_REG * (tf.nn.l2_loss(fc1_weight) + tf.nn.l2_loss(fc1_bias))\n\ntrain = tf.train.AdamOptimizer().minimize(reg_loss)\n\nwith tf.Session() as sess:\n    sess.run(tf.initialize_all_variables())\n\n    total_loss = 0.0\n    for step in xrange(MAX_STEPS):\n        seq_batch, target_batch = get_batch(BATCH_SIZE)\n\n        feed = {sequence: seq_batch, target: target_batch}\n        _, current_loss = sess.run([train, reg_loss], feed)\n        if step % 10 == 0:\n            print(\"@{}: {}\".format(step, current_loss))\n        total_loss += current_loss\n\n    print('Total loss:', total_loss)\n\n    print('### SIMPLE EVAL: ###')\n    seq_batch, target_batch = get_batch(BATCH_SIZE)\n    feed = {sequence: seq_batch, target: target_batch}\n    prediction = sess.run([logits_stacked], feed)\n    for b in xrange(BATCH_SIZE):\n        print(\"{} -> {})\".format(str(seq_batch[b, :, 0]), target_batch[b, :]))\n        print(\" `-> Prediction: {}\".format(prediction[0][b]))\n
\n\n

Sample output of this looks like this:

\n\n
### SIMPLE EVAL: ###\n# [input seq] -> [target prediction]\n#  `-> Prediction: [model prediction]  \n[  33.   53.  113.  213.] -> [  353.   533.   753.  1013.])\n `-> Prediction: [ 19.74548721  28.3149128   33.11489105  35.06603241]\n[ -17.  -32.  -77. -152.] -> [-257. -392. -557. -752.])\n `-> Prediction: [-16.38951683 -24.3657589  -29.49801064 -31.58583832]\n[ -7.  -4.   5.  20.] -> [  41.   68.  101.  140.])\n `-> Prediction: [ 14.14126873  22.74848557  31.29668617  36.73633194]\n...\n
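As a sanity check, the input/target pairs in that output follow directly from ground_truth_func; the first row, for instance, corresponds to i=20, j=33:

```python
def ground_truth_func(i, j, t):
    # Same function as in the training script: i * t^2 + j.
    return i * t ** 2 + j

# First evaluation row: the inputs are t = 0..3, the targets t = 4..7.
inputs = [ground_truth_func(20, 33, t) for t in range(4)]
targets = [ground_truth_func(20, 33, t) for t in range(4, 8)]

print(inputs)   # [33, 53, 113, 213]
print(targets)  # [353, 533, 753, 1013]
```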
\n\n

The model is an LSTM autoencoder whose encoder and decoder each have 2 layers.

\n\n

Unfortunately, as you can see in the results, this model does not learn the sequence properly. It might be the case that I'm simply making a silly mistake somewhere, or that 1000-10000 training steps is just way too few for an LSTM. As I said, I'm also just starting to understand/use LSTMs properly.\nBut hopefully this can give you some inspiration regarding the implementation.

\n", + "system": "" + }, + { + "instruction": "TensorFlow: cast a float64 tensor to float32", + "input": "", + "output": "

The short answer is that you can convert a tensor from tf.float64 to tf.float32 using the tf.cast() op:

\n\n
loss = tf.cast(loss, tf.float32)\n
\n\n

The longer answer is that this will not solve all of your problems with the optimizers. (The lack of support for tf.float64 is a known issue.) The optimizers require that all of the tf.Variable objects that you are trying to optimize must also have type tf.float32.

\n", + "system": "" + }, + { + "instruction": "Initializing tensorflow Variable with an array larger than 2GB", + "input": "", + "output": "

\nIt seems like the only option is to use a placeholder. The cleanest way I can find is to initialize the variable from a placeholder directly:

\n\n
X_init = tf.placeholder(tf.float32, shape=(3000000, 300))\nX = tf.Variable(X_init)\n# The rest of the setup...\nsess.run(tf.initialize_all_variables(), feed_dict={X_init: model.syn0})\n
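The reason the placeholder is needed at all: a constant initializer gets serialized into the GraphDef, which is a protocol buffer capped at roughly 2 GB, and a (3000000, 300) float32 array is well past that:

```python
# float32 is 4 bytes per element.
nbytes = 3_000_000 * 300 * 4   # size of the (3000000, 300) float32 array
proto_limit = 2 ** 31          # ~2 GB protobuf serialization limit

print(nbytes)                  # 3600000000, i.e. ~3.6 GB
print(nbytes > proto_limit)    # True
```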
\n", + "system": "" + }, + { + "instruction": "Running TensorFlow on a Slurm Cluster?", + "input": "", + "output": "

It's relatively simple.

\n\n

Under the simplifying assumption that you request one process per host, slurm will provide you with all the information you need in environment variables, specifically SLURM_PROCID, SLURM_NPROCS and SLURM_NODELIST.

\n\n

For example, you can initialize your task index, the number of tasks and the nodelist as follows:

\n\n
from hostlist import expand_hostlist\ntask_index  = int( os.environ['SLURM_PROCID'] )\nn_tasks     = int( os.environ['SLURM_NPROCS'] )\ntf_hostlist = [ (\"%s:22222\" % host) for host in\n                expand_hostlist( os.environ['SLURM_NODELIST']) ]  \n
\n\n

Note that slurm gives you the host list in its compressed format (e.g., \"myhost[11-99]\"), which you need to expand. I do that with the hostlist module by Kent Engstr\u00f6m, available at https://pypi.python.org/pypi/python-hostlist

\n\n
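If you only need the simple \"prefix[a-b]\" case and don't want the dependency, a minimal (and deliberately incomplete) expansion sketch might look like this; the real hostlist package also handles comma lists, zero padding, and nested ranges:

```python
def expand_simple(hosts):
    # Expand only the simple 'prefix[a-b]' pattern; plain names pass through.
    # No zero padding, comma lists, or nested brackets -- use the hostlist
    # package for real SLURM_NODELIST values.
    if '[' not in hosts:
        return [hosts]
    prefix, rng = hosts.rstrip(']').split('[')
    lo, hi = rng.split('-')
    return [prefix + str(n) for n in range(int(lo), int(hi) + 1)]

print(expand_simple('myhost[11-13]'))  # ['myhost11', 'myhost12', 'myhost13']
print(expand_simple('single'))         # ['single']
```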

At that point, you can go right ahead and create your TensorFlow cluster specification and server with the information you have available, e.g.:

\n\n
cluster = tf.train.ClusterSpec( {\"your_taskname\" : tf_hostlist } )\nserver  = tf.train.Server( cluster.as_cluster_def(),\n                           job_name   = \"your_taskname\",\n                           task_index = task_index )\n
\n\n

And you're set! You can now perform TensorFlow node placement on a specific host of your allocation with the usual syntax:

\n\n
for idx in range(n_tasks):\n   with tf.device(\"/job:your_taskname/task:%d\" % idx ):\n       ...\n
\n\n

A flaw in the code above is that all your jobs will instruct Tensorflow to start servers listening on the fixed port 22222. If multiple such jobs happen to be scheduled to the same node, the second one will fail to bind to port 22222.

\n\n

A better solution is to let slurm reserve ports for each job. You need to bring your slurm administrator on board and ask them to configure slurm so that it allows you to request ports with the --resv-ports option. In practice, this requires asking them to add a line like the following to their slurm.conf:

\n\n
MpiParams=ports=15000-19999\n
\n\n

Before you bug your slurm admin, check what options are already configured, e.g., with:

\n\n
scontrol show config | grep MpiParams\n
\n\n

If your site already uses an old version of OpenMPI, there's a chance an option like this is already in place.

\n\n

Then, amend my first snippet of code as follows:

\n\n
from hostlist import expand_hostlist\ntask_index  = int( os.environ['SLURM_PROCID'] )\nn_tasks     = int( os.environ['SLURM_NPROCS'] )\nport        = int( os.environ['SLURM_STEP_RESV_PORTS'].split('-')[0] )\ntf_hostlist = [ (\"%s:%s\" % (host,port)) for host in\n                expand_hostlist( os.environ['SLURM_NODELIST']) ]  \n
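The port parsing and host:port formatting in that snippet are plain string operations; here is a small check with stand-in values for the environment variables (illustrative only):

```python
# Stand-ins for the SLURM environment variables (illustrative values).
resv_ports = '15000-19999'        # SLURM_STEP_RESV_PORTS
hosts = ['myhost11', 'myhost12']  # expanded SLURM_NODELIST

# Take the first port of the reserved range, as in the snippet above.
port = int(resv_ports.split('-')[0])
tf_hostlist = ['%s:%s' % (host, port) for host in hosts]

print(port)         # 15000
print(tf_hostlist)  # ['myhost11:15000', 'myhost12:15000']
```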
\n\n

Good luck!

\n", + "system": "" } ] \ No newline at end of file