diff --git "a/data/dataset.json" "b/data/dataset.json" --- "a/data/dataset.json" +++ "b/data/dataset.json" @@ -37582,5 +37582,1757 @@ "input": "", "output": "

Depending on the format returned by decrypt_func, you can use h5py to load the decrypted stream and then use keras.models.load_model to load the model; it accepts an h5py.File object as input in addition to the string path you mentioned.

\n\n
import io\nimport h5py\nimport keras\n\nwith open('model.hdf5', 'rb') as f_hdl:\n    # decrypt_func is assumed to return the raw bytes of the HDF5 file;\n    # wrap them in a file-like object for h5py (supported in h5py >= 2.9)\n    decrypted = decrypt_func(f_hdl.read())\n    with h5py.File(io.BytesIO(decrypted), 'r') as h5_file:\n        model = keras.models.load_model(h5_file)\n
\n", "system": "" + }, + { + "instruction": "Extracting the top-k value-indices from a 1-D Tensor", + "input": "", + "output": "

As of pull request #496 Torch now includes a built-in API named torch.topk. Example:

\n\n
> t = torch.Tensor{9, 1, 8, 2, 7, 3, 6, 4, 5}\n\n-- obtain the 3 smallest elements\n> res = t:topk(3)\n> print(res)\n 1\n 2\n 3\n[torch.DoubleTensor of size 3]\n\n-- you can also get the indices in addition\n> res, ind = t:topk(3)\n> print(ind)\n 2\n 4\n 6\n[torch.LongTensor of size 3]\n\n-- alternatively you can obtain the k largest elements as follow\n-- (see the API documentation for more details)\n> res = t:topk(3, true)\n> print(res)\n 9\n 8\n 7\n[torch.DoubleTensor of size 3]\n
\n\n

At the time of writing, the CPU implementation follows a sort-and-narrow approach (there are plans to improve it in the future). That said, an optimized GPU implementation for cutorch is currently under review.

\n", + "system": "" + }, + { + "instruction": "itorch creates a python console, not a torch console", + "input": "", + "output": "

iTorch requires IPython v2.3 or above; please see the required dependencies.\n You seem to have IPython v0.1.2, which may be why you see this behavior.

\n", + "system": "" + }, + { + "instruction": "Trying to get simple Keras neural net example to work", + "input": "", + "output": "

The reason for the huge error is that your labels are very large and not binary, while the output of softmax lies between 0 and 1. For instance, if the label is 10000 but you can only predict something between 0 and 1, the error will be huge regardless of what you predict. Did you mean activation='linear' in the last layer, to do a regression? Or did you want to put your labels through a softmax at the end of get_data()?

\n", + "system": "" + }, + { + "instruction": "Add AUC as loss function for keras", + "input": "", + "output": "

AUC is not differentiable, so you can't use it as a loss function without some modification. There's been some work on algorithms to maximize AUC, but I'd recommend just using the regular cross-entropy / log likelihood loss.

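Since AUC itself can't be the training loss, a common pattern is to train on cross-entropy and track AUC separately during evaluation. A minimal rank-based AUC in NumPy (the helper name is mine, not a Keras API):

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC: probability a random positive scores above a random negative."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    # ranks start at 1; averaged ranks would be needed for exact tie handling
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```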
\n", + "system": "" + }, + { + "instruction": "Cannot train a neural network solving XOR mapping", + "input": "", + "output": "

In your example, you have a Dense layer with a single unit and a softmax activation. The value of such a unit will always be 1.0, so no information can flow from your inputs to your outputs, and the network won't do anything. Softmax is only really useful when you need to predict a probability distribution among n classes, where n is greater than 2.

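To see why, note that softmax over a single unit always returns 1.0 regardless of the logit; a quick NumPy check:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

# softmax over a one-element vector is 1.0 no matter what the logit is
for logit in (-100.0, 0.0, 42.0):
    print(softmax(np.array([logit])))  # [1.]
```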
\n\n

The other answers suggest changes to the code to make it work. Just removing activation='softmax' may be enough.

\n\n

Keras does generally work.

\n", + "system": "" + }, + { + "instruction": "How to use keras for XOR", + "input": "", + "output": "

If I increase the number of epochs in your code to 50000 it does often converge to the right answer for me, just takes a little while :)

\n\n

It does often get stuck, though. I get better convergence properties if I change your loss function to 'mean_squared_error', which is a smoother function.

\n\n

I get still faster convergence if I use the Adam or RMSProp optimizers. My final compile line, which works:

\n\n
model.compile(loss='mse', optimizer='adam')\n...\nmodel.fit(train_data, label, nb_epoch = 10000,batch_size = 4,verbose = 1,shuffle=True,show_accuracy = True)\n
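As a cross-check that XOR is learnable with a smooth MSE loss, here is a pure-NumPy sketch of a small network (not the Keras code above; the layer sizes, seed and learning rate are my own choices):

```python
import numpy as np

# Tiny 2-8-1 network (tanh hidden, sigmoid output) trained on XOR with MSE.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    d_out = (out - y) * out * (1 - out)         # dMSE/dlogits (up to a constant)
    d_h = (d_out @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel()))  # ideally [0. 1. 1. 0.]
```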
\n", + "system": "" + }, + { + "instruction": "Recurrent neural layers in Keras", + "input": "", + "output": "
    \n
  1. Timesteps are a rather bothersome aspect of Keras. Because the data you provide as input to your LSTM must be a numpy array, it needs (at least for Keras versions <= 0.3.3) a fully specified shape - even for the \"time\" dimension. You can only feed sequences of a fixed length as input - so if your inputs vary in length, you should either \"fill\" your sequences with artificial data or use the \"stateful\" mode (please read the Keras documentation carefully to understand what this approach means). Both solutions might be unpleasant - but that's the cost you pay for Keras being so simple :) I hope that in version 1.0.0 they will do something about it.

  2. \n
  3. There are two ways to apply non-recurrent layers after LSTM ones:

    \n\n
  4. \n
  5. https://stats.stackexchange.com/questions/182775/what-is-an-embedding-layer-in-a-neural-network :)

  6. \n
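Concretely, point 1 means shaping everything into a fixed (samples, timesteps, features) array and zero-padding the shorter sequences; a NumPy sketch (the helper name is mine):

```python
import numpy as np

def pad_to_3d(sequences, maxlen, n_features):
    """Zero-pad variable-length sequences into a (samples, timesteps, features) array."""
    out = np.zeros((len(sequences), maxlen, n_features))
    for i, seq in enumerate(sequences):
        seq = np.asarray(seq, dtype=float)
        out[i, :len(seq), :] = seq  # remaining timesteps stay zero (padding)
    return out

seqs = [[[1, 2]], [[1, 2], [3, 4], [5, 6]]]  # lengths 1 and 3, 2 features each
X = pad_to_3d(seqs, maxlen=3, n_features=2)
print(X.shape)  # (2, 3, 2)
```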
\n", + "system": "" + }, + { + "instruction": "keras autoencoder not converging", + "input": "", + "output": "

I think Keras's Autoencoder implementation ties the weights of the encoder and decoder, whereas in your implementation the encoder and decoder have separate weights. If your implementation leads to much better performance on the test data, it may indicate that un-tied weights are needed for your problem.

\n", + "system": "" + }, + { + "instruction": "What is data type for Python Keras deep learning package?", + "input": "", + "output": "

Keras uses numpy arrays containing the theano.config.floatX floating point type. This can be configured in your .theanorc file.

\n\n

Typically, it will be float64 for CPU computations and float32 for GPU computations, although you can also set it to float32 when working on the CPU if you prefer.

\n\n

You can create a zero-filled array of the proper type by the command

\n\n
X = numpy.zeros((4,3), dtype=theano.config.floatX)\n
\n", + "system": "" + }, + { + "instruction": "the loss of mse always be 0 when keras for topic predict", + "input": "", + "output": "

First, is your output a one-hot vector of predicted classes? That is: class one is [1, 0, 0, ...] and class two is [0, 1, 0, 0, ...].

\n\n

If so, then using a softmax activation at the output layer is acceptable and you are doing a classification problem. For a classification problem (one-hot output) you cannot use MSE as the loss; use categorical cross-entropy instead.

\n\n

Softmax scales the output so that the number given is a predicted probability of a certain class. Wikipedia here: https://en.wikipedia.org/wiki/Softmax_function

\n\n

If you are expecting the output vector to be real numbers then you need to use linear activation on your output neurons.

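To make the one-hot / loss pairing concrete, a small NumPy sketch (the function names are mine):

```python
import numpy as np

def to_one_hot(labels, n_classes):
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def categorical_crossentropy(y_true, y_pred):
    # mean negative log-likelihood of the true class
    return -np.mean(np.sum(y_true * np.log(y_pred + 1e-12), axis=1))

y = to_one_hot([0, 2, 1], 3)
print(y)  # [[1,0,0],[0,0,1],[0,1,0]]
preds = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.1, 0.8],
                  [0.2, 0.7, 0.1]])
print(categorical_crossentropy(y, preds))
```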
\n", + "system": "" + }, + { + "instruction": "Simplest Lstm training with Keras io", + "input": "", + "output": "

The first problem I see is the use of a Pandas DataFrame; you should use a numpy array here. The second problem is the X matrix: it should be a 3D array. For example, if I try with

\n\n
X_train = np.random.randn(6,2,2)\n
\n\n

then it will work.

\n", + "system": "" + }, + { + "instruction": "Gradient from Theano expression for filter visualization in Keras", + "input": "", + "output": "

You can print out the gradient as described here and hand-code it into Scipy. You can also do the optimization in Theano - see this question.

\n\n

However, probably the most straight-forward approach is to create a function get_gradients() that uses theano.grad() to return the gradients of the filters with respect to an input, then call scipy.optimize.minimize with jac=get_gradients. According to the documentation:

\n\n
\n

jac : bool or callable, optional Jacobian (gradient) of objective\n function. [...] jac can also be a callable returning the gradient of\n the objective. In this case, it must accept the same arguments as fun.

\n
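As an illustration of the jac=callable pattern (with a toy quadratic standing in for the filter loss, and an analytic gradient standing in for theano.grad()):

```python
import numpy as np
from scipy.optimize import minimize

# toy stand-in for the filter loss: f(x) = sum((x - 3)^2)
def loss(x):
    return np.sum((x - 3.0) ** 2)

def get_gradients(x):
    # analytic gradient, playing the role of theano.grad()
    return 2.0 * (x - 3.0)

res = minimize(loss, x0=np.zeros(4), jac=get_gradients, method='L-BFGS-B')
print(res.x)  # ≈ [3. 3. 3. 3.]
```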
\n", + "system": "" + }, + { + "instruction": "Error importing Keras layer", + "input": "", + "output": "

Make sure that pip is setup properly for the version of python which you are using.

\n\n

You can do for example

\n\n
curl -O https://bootstrap.pypa.io/get-pip.py\npython2.7 get-pip.py\n
\n\n

to re-install pip.

\n\n

and then:

\n\n
pip-2.7 install --upgrade git+git://github.com/fchollet/keras.git\n
\n", + "system": "" + }, + { + "instruction": "Scipy output error :undefined symbol: sgegv_", + "input": "", + "output": "

No need to downgrade your lapack. It is better to upgrade your scipy:

\n\n
pip install git+https://github.com/scipy/scipy.git\n
\n", + "system": "" + }, + { + "instruction": "Trying Kaggle Titanic with keras .. getting loss and valid_loss -0.0000", + "input": "", + "output": "

Old post, but answering anyway in case someone else attempts Titanic with Keras.

\n\n

Your network may have too many parameters and too little regularization (e.g. dropout).

\n\n

Call model.summary() right before model.compile and it will show you how many parameters your network has. Just between your two Dense layers you have 512 x 512 = 262,144 parameters. That's a lot for 762 examples.

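Counting the biases too, the exact number for a 512-to-512 Dense layer is slightly higher; a quick helper (the naming is mine):

```python
def dense_params(n_in, n_out):
    # weights plus biases for a fully connected (Dense) layer
    return n_in * n_out + n_out

print(dense_params(512, 512))  # 262656: 262,144 weights + 512 biases
```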
\n\n

Also, you may want to use a sigmoid activation on the last layer and a binary_crossentropy loss, as you only have two output classes.

\n", + "system": "" + }, + { + "instruction": "Python: keras shape mismatch error", + "input": "", + "output": "

I had the same problem and then found this thread;

\n\n

https://github.com/fchollet/keras/issues/68

\n\n

It appears that whether your final output layer has 2 units or any other number of categories, the labels need to be categorical: essentially a binary vector for each observation, e.g. a 3-class label vector [0,2,1,0,1,0] becomes [[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,1,0],[1,0,0]].

\n\n

The np_utils.to_categorical function solved this for me;

\n\n
from keras.utils import np_utils, generic_utils\n\ny_train, y_test = [np_utils.to_categorical(x) for x in (y_train, y_test)]\n
\n\n", + "system": "" + }, + { + "instruction": "keras example Type Error", + "input": "", + "output": "

The examples http://keras.io/examples/ have not been updated after the recent API changes https://groups.google.com/forum/#!topic/keras-users/iWVrWpR_eaQ.

\n\n

Make sure you have the latest version of Keras installed with:

\n\n
sudo pip install git+git://github.com/fchollet/keras.git --upgrade\n
\n\n

and use updated examples from the same repository https://github.com/fchollet/keras/tree/master/examples

\n", + "system": "" + }, + { + "instruction": "Keras Convolution Neural Network", + "input": "", + "output": "

You are correct that the code is expecting a tensor4. The conventional structure is (batch, channel, width, height). In this case the images are monochrome, so channel=1. It looks like you're using a batch size of 10, and the MNIST images are 28 pixels in width and 28 pixels in height.

\n\n

You can simply reshape the data into the format required. If x is of shape (10, 784) then x.reshape(10, 1, 28, 28) will have the required format.

\n", + "system": "" + }, + { + "instruction": "keras dense input layer", + "input": "", + "output": "

Just found the solution...

\n\n
model = Sequential()\nmodel.add(Flatten(input_shape=(img_channels, img_rows, img_cols)))\nmodel.add(Dense(1000))\nmodel.add(Activation('relu'))\n
\n\n

It was obvious though. :(

\n", + "system": "" + }, + { + "instruction": "Auto-encoder to reduce input data size", + "input": "", + "output": "

Autoencoders/variational autoencoders do not learn about sequences; they learn to \"map\" the input data to a latent space that has fewer dimensions. For example, if the image is 64x64x3 you could map it to a 32-dim tensor/array.

\n\n

For learning a sequence of images, you would need to connect the output of the autoencoder's encoder part to an RNN (LSTM/GRU), which could learn about the sequence of the encoded frames (consecutive frames in latent space). After that, the output of the RNN could connect to the decoder part of the autoencoder so you could see the reconstructed frames.

\n\n

Here you can find a GitHub project which tries to encode video frames and then predict sequences.

\n", + "system": "" + }, + { + "instruction": "Python keras neural network (Theano) package returns an error about data dimensions", + "input": "", + "output": "

You specified the wrong output dimensions for your internal layers. See for instance this example from the Keras documentation:

\n\n
model = Sequential()\nmodel.add(Dense(20, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, 2, init='uniform'))\nmodel.add(Activation('softmax'))\n
\n\n

Note how the output size of one layer matches the input size of the next one:

\n\n
20x64 -> 64x64 -> 64x2\n
\n\n

The first number is always the input size (number of neurons on the previous layer), the second number the output size (number of neurons on the next layer). So in this example you have four layers:

\n\n\n\n

The only hard restriction you have is that the first (input) layer needs to have as many neurons as you have features, and the last (output) layer needs to have as many neurons as you need for your task.

\n\n

For your example, since you have three features, you need to change the input layer size to 3, and you can keep the two output neurons from this example to do binary classification (or use one, as you did, with logistic loss).

\n", + "system": "" + }, + { + "instruction": "Keras LSTM predicting only 1 category, in multi-category classification - how to fix?", + "input": "", + "output": "

You'll need to replace the final layer with

\n\n
model.add(Dense(nb_classes))\n
\n\n

where nb_classes corresponds to the number of categorical classes.

\n", + "system": "" + }, + { + "instruction": "Testing the Keras sentiment classification with model.predict", + "input": "", + "output": "

So what you basically need to do is as follows:

\n\n
    \n
  1. Tokenize sequences: convert each string into words (features). For example: \"hello my name is georgio\" becomes [\"hello\", \"my\", \"name\", \"is\", \"georgio\"].
  2. \n
  3. Next, you want to remove stop words (check Google for what stop words are).
  4. \n
  5. This stage is optional and may lead to faulty results, but I think it's worth a try: stem your words (features). That way you'll reduce the number of features, which will lead to a faster run. Again, that's optional and might cause some failures; for example, if you stem the word 'parking' you get 'park', which has a different meaning.
  6. \n
  7. Next thing is to create a dictionary (check Google for that). Each word gets a unique number and from this point we will use this number only.
  8. \n
  9. Computers understand numbers only so we need to talk in their language. We'll take the dictionary from stage 4 and replace each word in our corpus with its matching number.
  10. \n
  11. Now we need to split our data set into two groups: training and testing sets. One (training) will train our NN model and the second (testing) will help us figure out how good our NN is. You can use Keras' cross-validation function.
  12. \n
  13. The next thing is defining the maximum number of features our NN can accept as input. Keras calls this parameter 'maxlen'. But you don't really have to do this manually; Keras can do it automatically just by searching for the longest sentence in your corpus.
  14. \n
  15. Next, let's say Keras found that the longest sentence in your corpus has 20 words (features), and one of your sentences is the example from the first stage, whose length is 5 (it will be even shorter if we remove stop words). In that case we'll need to add zeros - 15 zeros, actually. This is called padding the sequences; we do it so that every input sequence has the same length.
  16. \n
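The steps above (tokenize, build a dictionary, index, pad) can be sketched in plain Python, leaving out stop-word removal and stemming (the helper names are mine):

```python
def build_vocab(sentences):
    vocab = {}
    for s in sentences:
        for w in s.lower().split():
            vocab.setdefault(w, len(vocab) + 1)  # 0 is reserved for padding
    return vocab

def encode_and_pad(sentences, vocab, maxlen):
    rows = []
    for s in sentences:
        ids = [vocab[w] for w in s.lower().split()]
        rows.append(ids + [0] * (maxlen - len(ids)))  # pad with zeros to maxlen
    return rows

corpus = ["hello my name is georgio", "hello again"]
vocab = build_vocab(corpus)
maxlen = max(len(s.split()) for s in corpus)
print(encode_and_pad(corpus, vocab, maxlen))
# [[1, 2, 3, 4, 5], [1, 6, 0, 0, 0]]
```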
\n", + "system": "" + }, + { + "instruction": "Keras/Theano - how to multiply by vector in Lambda layer", + "input": "", + "output": "

I think you are over-complicating things. If you want to multiply by a matrix filled with a constant, you can simply divide by a scalar, which is then broadcast over your input matrix, e.g.

\n\n
Lambda(lambda x: x / M)\n
\n\n

where M can be defined as

\n\n
from keras import backend as K\nM = K.shape(x)[0]\n
\n\n

giving

\n\n
Lambda(lambda x: x / K.shape(x)[0])\n
\n", + "system": "" + }, + { + "instruction": "Image plotting - after processing", + "input": "", + "output": "

You can save the generated images to the disk by giving save_to_dir='path_to_dir' to the flow() function of the data generator.

\n", + "system": "" + }, + { + "instruction": "Doesn't work example with Keras framework", + "input": "", + "output": "

Check this line in your code

\n\n
model.add(Dense(64, input_dim=20, init='uniform'))\n
\n\n

Why 20 input dimensions? MNIST has 28x28 images, i.e. an input dimension of 784. The error message confirms this as well:

\n\n
ValueError: ('shapes (9,784) and (20,64) not aligned: 784 (dim 1) != 20 (dim 0)', (9L, 784L), (20L, 64L))\n
\n\n

You can further verify the size of your input

\n\n
print \"Size of X_train\", x_train.shape\nprint \"Size of X_test\", x_test.shape \n
\n\n

And accordingly change the line above to:

\n\n
model.add(Dense(64, input_dim=784, init='uniform'))\n
\n", + "system": "" + }, + { + "instruction": "Linux error when installing Keras", + "input": "", + "output": "

You need to install the hdf5 package to get the headers you need.

\n", + "system": "" + }, + { + "instruction": "Keras / Theano: How to add Convolution2D Layers?", + "input": "", + "output": "

You need to correct the output shape for the convolutional layer. Output of a CNN layer depends on many factors such as input size, number of kernels, stride and padding. Generally for an input of size BxCxW1xH1, the output would be BxFxW2xH2 where B is the batch size, C is the input channels, F is the number of output features, W1xH1 is the input size and you can compute the value of W2 and H2 using W1, H1, stride and padding. It is illustrated very well in this tutorial from Stanford: http://cs231n.github.io/convolutional-networks/#comp

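The standard formula for the spatial output size is W2 = (W1 - F + 2P) / S + 1; a small helper (the naming is mine):

```python
def conv_output_size(w_in, kernel, stride=1, padding=0):
    # W2 = (W1 - F + 2P) / S + 1, per the CS231n notes linked above
    return (w_in - kernel + 2 * padding) // stride + 1

print(conv_output_size(28, 5))             # 24: a 5x5 kernel on 28x28, no padding
print(conv_output_size(28, 5, padding=2))  # 28: \"same\"-style padding preserves size
```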
\n\n

Hope it helps!

\n", + "system": "" + }, + { + "instruction": "Error - Theano+Keras", + "input": "", + "output": "

When you change your border_mode from \"same\" to \"valid\", the dimension of your feature map will gradually shrink. At some point, the dimension of a layer becomes so small (looks like it became 1x1 already in your case) that you can't add extra layers on it anymore.

\n\n

If you are specifying certain strides of a layer, the size of the filter and the size of the input also need to be compatible with your stride length.

\n", + "system": "" + }, + { + "instruction": "Using screen session with Theano - race conditions", + "input": "", + "output": "

My guess is that your home directory is on a networked filesystem of some kind (e.g. AFS). If so, as soon as you end the session the filesystem security credentials are invalidated and the process, though it continues to run inside the screen, no longer has permission to work with files in the Theano cache directory ~/.theano. If this guess is correct then the problem is not a race condition.

\n\n

If the problem relates to AFS credential expiry then a solution is to use a credential cache with the kinit command (see the -c option in http://web.mit.edu/kerberos/krb5-1.12/doc/user/user_commands/kinit.html).

\n", + "system": "" + }, + { + "instruction": "Python neural network accuracy - correct implementation?", + "input": "", + "output": "

I don't know that library, so I can't tell you if this is correctly implemented, but it looks legit.

\n\n

I think your problem lies with the activation function - tanh(500)=1 and tanh(1)=0.76, and that difference seems too small to be useful. Try using -1 instead of 500 for testing purposes and normalize your real data to roughly [-2, 2]. If you need the full range of real numbers, try using a linear activation function. If you only care about the positive half of the real numbers, I propose softplus or ReLU. I've checked, and all of those functions are provided with Keras.

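You can verify the saturation directly:

```python
import numpy as np

# tanh saturates quickly: large inputs are squashed to +/-1
print(np.tanh(500.0))          # 1.0 (to machine precision)
print(round(np.tanh(1.0), 2))  # 0.76
```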
\n\n

You can try thresholding your output too - answers of 0.75 when expecting 1 and 0.25 when expecting 0 are valid, but may impact your accuracy.

\n\n

Also, try tweaking your parameters. Based on my own experience, I'd propose that you use:

\n\n\n\n

I'd say that learning rate, number of epochs, momentum and lambda are the most important factors here - in order from most to least important.

\n\n

PS. I've just spotted that you're initializing your weights uniformly (is that even a word? I'm not a native speaker...). I can't tell you why, but my intuition tells me that this is a bad idea. I'd go with random initial weights.

\n", + "system": "" + }, + { + "instruction": "Neural network dimension mis-match", + "input": "", + "output": "

I've been training 2 class classification models for so long that I'm used to dealing with labels that are just single values. For this problem (classifying more than 1 outcome) I just had to change the labels to be vectors themselves.

\n\n

This solved my problem:

\n\n
from keras.utils.np_utils import to_categorical\n\nlabels_train = to_categorical(labels_train)\n
\n", + "system": "" + }, + { + "instruction": "Error - Keras+Theano - loss function", + "input": "", + "output": "

Somehow you're passing a symbolic value for the border_mode parameter. If this works fine on CPU but not on GPU then, for some reason, the CPU version of the code supports symbolic border modes but the GPU version does not.

\n\n

If you can, change the border_mode parameter value to be a Python literal instead of a Theano symbolic variable.

\n", + "system": "" + }, + { + "instruction": "Tensorflow image reading & display", + "input": "", + "output": "

Just to give a complete answer:

\n\n
import numpy as np\nimport tensorflow as tf\nfrom PIL import Image\n\nfilename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) #  list of files to read\n\nreader = tf.WholeFileReader()\nkey, value = reader.read(filename_queue)\n\nmy_img = tf.image.decode_png(value) # use png or jpg decoder based on your files.\n\ninit_op = tf.global_variables_initializer()\nwith tf.Session() as sess:\n  sess.run(init_op)\n\n  # Start populating the filename queue.\n\n  coord = tf.train.Coordinator()\n  threads = tf.train.start_queue_runners(coord=coord)\n\n  for i in range(1): #length of your filename list\n    image = my_img.eval() #here is your image Tensor :) \n\n  print(image.shape)\n  Image.fromarray(np.asarray(image)).show()\n\n  coord.request_stop()\n  coord.join(threads)\n
\n\n

Or if you have a directory of images you can add them all via this Github source file

\n\n

@mttk and @salvador-dali: I hope it is what you need

\n", + "system": "" + }, + { + "instruction": "Why is this TensorFlow implementation vastly less successful than Matlab's NN?", + "input": "", + "output": "

I tried training for 50000 iterations and it got to an error of 0.00012. It takes about 180 seconds on a Tesla K40.

\n\n
&#13;
(plot of the fitted result omitted)
&#13;
\n\n

It seems that for this kind of problem, first order gradient descent is not a good fit (pun intended), and you need Levenberg\u2013Marquardt or l-BFGS. I don't think anyone implemented them in TensorFlow yet.

\n\n

Edit\nUse tf.train.AdamOptimizer(0.1) for this problem. It gets to 3.13729e-05 after 4000 iterations. Also, a GPU with the default strategy seems like a bad idea for this problem: there are many small operations, and the overhead makes the GPU version run 3x slower than the CPU on my machine.

\n", + "system": "" + }, + { + "instruction": "Tensorflow causes logging messages to double", + "input": "", + "output": "

I get this output:

\n\n
test\nWARNING:TEST:test\n
\n\n

Tensorflow is also using the logging framework and has set up its own handlers, so when you log, by default, it propagates up to the parent logging handlers inside tensorflow. You can change this behavior by setting:

\n\n
logger.propagate = False\n
\n\n

See also duplicate output in simple python logging configuration

\n\n

Followup: This was an unintended side-effect of the way tensorflow was using the logging package. I've changed it at HEAD to scope its internal loggers under the name \"tensorflow\" to avoid this pollution. Should be in the github head within a day or so. In the meantime, the logger.propagate solution will work and won't break once that fix is in, so you should be safe to go. Thanks again for spotting this!

\n\n

Followup-Followup:\nStarting with TensorFlow 1.14 exposes the logger directly:

\n\n
import tensorflow as tf\n\nlogger = tf.get_logger()\n
\n", + "system": "" + }, + { + "instruction": "Sentiment Analysis using tensorflow", + "input": "", + "output": "

A commonly used approach would be using a Convolutional Neural Network (CNN) to do sentiment analysis. You can find a great explanation/tutorial in this WildML blogpost. The accompanying TensorFlow code can be found here.

\n\n

Another approach would be using an LSTM (or related network), you can find example implementations online, a good starting point is this blogpost.

\n", + "system": "" + }, + { + "instruction": "Tensorflow (python): "ValueError: setting an array element with a sequence" in train_step.run(...)", + "input": "", + "output": "

This particular error is coming out of numpy. Calling np.array on a sequence with inconsistent dimensions can throw it.

\n\n
>>> np.array([1,2,3,[4,5,6]])\n\nValueError: setting an array element with a sequence.\n
\n\n

It looks like it's failing at the point where tf ensures that all the elements of the feed_dict are numpy.arrays.

\n\n

Check your feed_dict.

\n", + "system": "" + }, + { + "instruction": "Is TensorFlow suitable for Recommendation Systems", + "input": "", + "output": "

TensorFlow is great for deep learning, i.e. training large neural nets. Although it can be used for several other mathematical applications such as PDEs, various classifiers, and recommendation systems, there doesn't seem to be a lot of support for them as yet.

\n\n

This reddit thread might be a good place to start for searching libraries which are centred around recommendation systems.

\n", + "system": "" + }, + { + "instruction": "Why does TensorFlow example fail when increasing batch size?", + "input": "", + "output": "

You're using the very basic linear model in the beginners example?

\n\n

Here's a trick to debug it - watch the cross-entropy as you increase the batch size (the first line is from the example, the second I just added):

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(y))\ncross_entropy = tf.Print(cross_entropy, [cross_entropy], \"CrossE\")\n
\n\n

At a batch size of 204, you'll see:

\n\n
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[92.37558]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[90.107414]\n
\n\n

But at 205, you'll see a sequence like this, from the start:

\n\n
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[472.02966]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[475.11697]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[1418.6655]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[1546.3833]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[1684.2932]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[1420.02]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[1796.0872]\nI tensorflow/core/kernels/logging_ops.cc:64] CrossE[nan]\n
\n\n

Ack - NaN's showing up. Basically, the large batch size is creating such a huge gradient that your model is spiraling out of control -- the updates it's applying are too large, and overshooting the direction it should go by a huge margin.

\n\n

In practice, there are a few ways to fix this. You could reduce the learning rate from .01 to, say, .005, which results in a final accuracy of 0.92.

\n\n
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)\n
\n\n

Or you could use a more sophisticated optimization algorithm (Adam, Momentum, etc.) that tries to do more to figure out the direction of the gradient. Or you could use a more complex model that has more free parameters across which to disperse that big gradient.

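One way to see why the bigger batch explodes: the example's loss uses tf.reduce_sum, so the gradient grows linearly with the batch size, while a mean-based loss does not. A NumPy sketch with a toy linear-softmax model (the names are mine):

```python
import numpy as np

def softmax_xent_grad(w, X, y, reduce="sum"):
    # gradient of softmax cross-entropy w.r.t. the weights of a linear model
    z = X @ w
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = X.T @ (p - y)
    return g if reduce == "sum" else g / len(X)

X = np.array([[1.0, 2.0], [0.5, -1.0]])
y = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.zeros((2, 2))

g1 = softmax_xent_grad(w, X, y, "sum")
X4, y4 = np.tile(X, (4, 1)), np.tile(y, (4, 1))  # duplicate the batch 4x
g4 = softmax_xent_grad(w, X4, y4, "sum")
print(np.allclose(g4, 4 * g1))  # True: a summed loss scales with batch size
print(np.allclose(softmax_xent_grad(w, X4, y4, "mean"),
                  softmax_xent_grad(w, X, y, "mean")))  # True: a mean loss doesn't
```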
\n", + "system": "" + }, + { + "instruction": "TensorFlow MNIST example not running with fully_connected_feed.py", + "input": "", + "output": "

This is a Python path issue. Assuming that the directory tensorflow/g3doc/tutorials/mnist is your current working directory (or in your Python path), the easiest way to resolve it is to change the following lines in fully_connected_feed.py from:

\n\n
from tensorflow.g3doc.tutorials.mnist import input_data\nfrom tensorflow.g3doc.tutorials.mnist import mnist\n
\n\n

...to:

\n\n
import input_data\nimport mnist\n
\n", + "system": "" + }, + { + "instruction": "where is the ./configure of TensorFlow and how to enable the GPU support?", + "input": "", + "output": "

This is a bash script which is supposed to be in

\n\n
\n

the root of your source tree

\n
\n\n

when you cloned the repo. Here it is https://github.com/tensorflow/tensorflow/blob/master/configure

\n", + "system": "" + }, + { + "instruction": "Tensorflow slicing based on variable", + "input": "", + "output": "

Slicing based on a placeholder should work just fine. It looks like you are running into a type error, due to some subtle issues of shapes and types. Where you have the following:

\n\n
x = tf.placeholder(\"float\")\ni = tf.placeholder(\"int32\")\ny = tf.slice(x,[i],[1])\n
\n\n

...you should instead have:

\n\n
x = tf.placeholder(\"float\")\ni = tf.placeholder(\"int32\")\ny = tf.slice(x,i,[1])\n
\n\n

...and then you should feed i as [0] in the call to sess.run().

\n\n

To make this a little clearer, I would recommend rewriting the code as follows:

\n\n
import tensorflow as tf\nimport numpy as np\n\nx = tf.placeholder(tf.float32, shape=[None])  # 1-D tensor\ni = tf.placeholder(tf.int32, shape=[1])\ny = tf.slice(x, i, [1])\n\n#initialize\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\n\n#run\nresult = sess.run(y, feed_dict={x: [1, 2, 3, 4, 5], i: [0]})\nprint(result)\n
\n\n

The additional shape arguments to the tf.placeholder op help to ensure that the values you feed have the appropriate shapes, and also that TensorFlow will raise an error if the shapes are not correct.

\n", + "system": "" + }, + { + "instruction": "How do I use distributed DNN training in TensorFlow?", + "input": "", + "output": "

Updated:

\n\n\n\n

The release occurred on 2/26/2016 and was announced by coauthor Derek Murray in the original issue here and uses gRPC for inter-process communication.

\n\n

Previous:

\n\n

Before the update above, a distributed implementation of TensorFlow had not been released yet. Support for a distributed implementation was the topic of this issue where coauthor Vijay Vasudevan wrote:

\n\n
\n

we are working on making a distributed implementation available, it's\n currently not in the initial release

\n
\n\n

and Jeff Dean later provided an update:

\n\n
\n

Our current internal distributed extensions are somewhat entangled\n with Google internal infrastructure, which is why we released the\n single-machine version first. The code is not yet in GitHub, because\n it has dependencies on other parts of the Google code base at the\n moment, most of which have been trimmed, but there are some remaining\n ones.

\n \n

We realize that distributed support is really important, and it's one\n of the top features we're prioritizing at the moment.

\n
\n", + "system": "" + }, + { + "instruction": "Predicting next word using the language model tensorflow example", + "input": "", + "output": "

Your output is a TensorFlow tensor, and you can get its arg max (the most probable predicted class) with a TensorFlow function. This is normally the vector that contains the probabilities for the next word.

\n\n

At \"Evaluate the Model\" from this page, your output list is y in the following example:

\n\n
\n

First we'll figure out where we predicted the correct label. tf.argmax\n is an extremely useful function which gives you the index of the\n highest entry in a tensor along some axis. For example, tf.argmax(y,1)\n is the label our model thinks is most likely for each input, while\n tf.argmax(y_,1) is the true label. We can use tf.equal to check if our\n prediction matches the truth.\n correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))

\n
\n\n

A different approach is to work with pre-vectorized (embedded/encoded) words. You could vectorize your words (therefore embed them) with Word2vec to accelerate learning; you might want to take a look at this. Each word could be represented as a point in a 300-dimensional space of meaning, and you could automatically find the \"N words\" closest to the predicted point in space at the output of the network. In that case, the argmax way of proceeding no longer works; you could instead compare on cosine similarity with the words you truly wanted to predict, although I am not sure whether this could cause numerical instabilities. In that case y will not represent words as features, but word embeddings with a dimensionality of, let's say, 100 to 2000 depending on the model. You could Google something like \"man woman queen word addition word2vec\" to understand the subject of embeddings more.
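As a sketch of the cosine-similarity idea, here is a minimal NumPy version; the vocabulary and embedding vectors below are entirely made up for illustration:

```python
import numpy as np

# Toy vocabulary with made-up 2-D embeddings (real ones would be 100-2000 dims).
vocab = ["king", "queen", "man", "woman"]
emb = np.array([[0.9, 0.1],    # king
                [0.8, 0.3],    # queen
                [0.9, -0.2],   # man
                [0.7, 0.0]])   # woman

def top_n_similar(predicted, embeddings, words, n=2):
    # Cosine similarity = dot product of L2-normalised vectors.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = predicted / np.linalg.norm(predicted)
    sims = norm @ p
    best = np.argsort(-sims)[:n]
    return [words[i] for i in best]

# The N words closest to a hypothetical predicted embedding.
print(top_n_similar(np.array([0.85, 0.2]), emb, vocab))  # ['king', 'queen']
```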

\n\n

Note: when I talk about word2vec here, I mean using an external pre-trained word2vec model so that your training only has pre-embedded inputs and produces embedding outputs. The words corresponding to those outputs can then be recovered with word2vec to find the top similar predicted words.

\n\n

Notice that the approach I suggest is not exact, since it only tells us whether we predicted EXACTLY the word we wanted to predict. For a softer approach, it would be possible to use ROUGE or BLEU metrics to evaluate your model in case you use sentences or something longer than a word.

\n", + "system": "" + }, + { + "instruction": "Tensorflow on Raspberry Pi", + "input": "", + "output": "

The Raspberry Pi has a 32-bit architecture, and TensorFlow was built for 64-bit. \nI am not sure how well a Raspberry Pi can handle it, but you could install a virtual machine and run TensorFlow inside it.

\n\n

Maybe the newer generations of the Raspberry Pi could handle this a bit better, but the original ones will probably struggle to run this setup.

\n\n

You could do the training on a more powerful machine and just run the trained model on the Raspberry Pi; that might be feasible in terms of computing power.

\n", + "system": "" + }, + { + "instruction": "Get the diagonal of a matrix in TensorFlow", + "input": "", + "output": "

With TensorFlow 0.8 it's possible to extract the diagonal elements with tf.diag_part() (see documentation).

\n\n

UPDATE

\n\n

For TensorFlow >= r1.12 it's tf.linalg.tensor_diag_part (see documentation).
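For reference, on a square matrix the operation behaves like NumPy's np.diag, which can serve as a quick sanity check:

```python
import numpy as np

# NumPy equivalent of extracting the main diagonal of a square matrix,
# analogous to tf.diag_part / tf.linalg.tensor_diag_part.
m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
print(np.diag(m))  # [1 5 9]
```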

\n", + "system": "" + }, + { + "instruction": "Tensorflow successfully installs on mac but gets ImportError on copyreg when used", + "input": "", + "output": "

You can upgrade to six-1.10.x using

\n\n
easy_install -U six\n
\n\n

This will upgrade the current version of six from 1.4 to 1.10.x, which is required by TensorFlow.

\n", + "system": "" + }, + { + "instruction": "sum over a list of tensors in tensorflow", + "input": "", + "output": "

The standard way to sum a list of tensors is to use the tf.add_n() operation, which takes a list of tensors (each having the same size and shape) and produces a single tensor containing the sum.

\n\n

For the particular problem that you have, I am assuming that each layers[j].weights could have a different size. Therefore you will need to reduce each element down to a scalar before summing, e.g. using the tf.nn.l2_loss() function itself:

\n\n
weights = [layers[j].weights for j in range(self.n_layers)]\nlosses = [tf.nn.l2_loss(w) for w in weights]\ntotal_loss = tf.add_n(losses)\n
\n\n

(Note however that when the values to be added are large, you may find it more efficient to calculate a sequence of tf.add() operations, since TensorFlow keeps the values of each of the add_n arguments in memory until all of them have been computed. A chain of add ops allows some of the computation to happen earlier.)
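A plain-Python sketch of the two strategies, where sum stands in for tf.add_n (all inputs held at once) and a pairwise fold stands in for a chain of tf.add; the loss values are made up:

```python
from functools import reduce

# Made-up per-layer loss values.
losses = [1.5, 2.25, 0.75, 3.0]

# Analogous to tf.add_n(losses): all inputs are combined in one op.
total_add_n = sum(losses)

# Analogous to a chain of tf.add ops: the list is folded pairwise, so
# earlier partial sums can be computed (and inputs released) sooner.
total_chain = reduce(lambda a, b: a + b, losses)

print(total_add_n, total_chain)  # 7.5 7.5
```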

\n", + "system": "" + }, + { + "instruction": "Why does TensorFlow return [[nan nan]] instead of probabilities from a CSV file?", + "input": "", + "output": "

I don't know the direct answer, but I know how I'd approach debugging it: tf.Print. It's an op that prints the value as tensorflow is executing, and returns the tensor for further computation, so you can just sprinkle them inline in your model.

\n\n

Try throwing in a few of these. Instead of this line:

\n\n
tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)\n
\n\n

Try:

\n\n
tf_bias = tf.Print(tf_bias, [tf_bias], \"Bias: \")\ntf_weight = tf.Print(tf_weight, [tf_weight], \"Weight: \")\ntf_in = tf.Print(tf_in, [tf_in], \"TF_in: \")\nmatmul_result = tf.matmul(tf_in, tf_weight)\nmatmul_result = tf.Print(matmul_result, [matmul_result], \"Matmul: \")\ntf_softmax = tf.nn.softmax(matmul_result + tf_bias)\n
\n\n

to see what Tensorflow thinks the intermediate values are. If the NaNs are showing up earlier in the pipeline, it should give you a better idea of where the problem lies. Good luck! If you get some data out of this, feel free to follow up and we'll see if we can get you further.

\n\n

Updated to add: Here's a stripped-down debugging version to try, where I got rid of the input functions and just generated some random data:

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef dense_to_one_hot(labels_dense, num_classes=10):\n  \"\"\"Convert class labels from scalars to one-hot vectors.\"\"\"\n  num_labels = labels_dense.shape[0]\n  index_offset = np.arange(num_labels) * num_classes\n  labels_one_hot = np.zeros((num_labels, num_classes))\n  labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1\n  return labels_one_hot\n\nx_train=np.random.normal(0, 1, [50,10])\ny_train=np.random.randint(0, 10, [50])\ny_train_onehot = dense_to_one_hot(y_train, 10)\n\nx_test=np.random.normal(0, 1, [50,10])\ny_test=np.random.randint(0, 10, [50])\ny_test_onehot = dense_to_one_hot(y_test, 10)\n\n#  A number of features, 4 in this example\n#  B = 3 species of Iris (setosa, virginica and versicolor)\n\nA=10\nB=10\ntf_in = tf.placeholder(\"float\", [None, A]) # Features\ntf_weight = tf.Variable(tf.zeros([A,B]))\ntf_bias = tf.Variable(tf.zeros([B]))\ntf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)\n\ntf_bias = tf.Print(tf_bias, [tf_bias], \"Bias: \")\ntf_weight = tf.Print(tf_weight, [tf_weight], \"Weight: \")\ntf_in = tf.Print(tf_in, [tf_in], \"TF_in: \")\nmatmul_result = tf.matmul(tf_in, tf_weight)\nmatmul_result = tf.Print(matmul_result, [matmul_result], \"Matmul: \")\ntf_softmax = tf.nn.softmax(matmul_result + tf_bias)\n\n# Training via backpropagation\ntf_softmax_correct = tf.placeholder(\"float\", [None,B])\ntf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))\n\n# Train using tf.train.GradientDescentOptimizer\ntf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)\n\n# Add accuracy checking nodes\ntf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))\ntf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, \"float\"))\n\nprint tf_correct_prediction\nprint tf_accuracy\n\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\n\nfor i in range(1):\n    print \"Running the training 
step\"\n    sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})\n    #print y_train_onehot\n    #saver.save(sess, 'trained_csv_model')\n\n    print \"Running the eval step\"\n    ans = sess.run(tf_softmax, feed_dict={tf_in: x_test})\n    print ans\n
\n\n

You should see lines starting with \"Bias: \", etc.

\n", + "system": "" + }, + { + "instruction": "How do I change the dtype in TensorFlow for a csv file?", + "input": "", + "output": "

The interface to tf.decode_csv() is a little tricky. The dtype of each column is determined by the corresponding element of the record_defaults argument. The value for record_defaults in your code is interpreted as each column having tf.int32 as its type, which leads to an error when it encounters floating-point data.

\n\n

Let's say you have the following CSV data, containing three integer columns, followed by a floating point column:

\n\n
4, 8, 9, 4.5\n2, 5, 1, 3.7\n2, 2, 2, 0.1\n
\n\n

Assuming all of the columns are required, you would build record_defaults as follows:

\n\n
value = ...\n\nrecord_defaults = [tf.constant([], dtype=tf.int32),    # Column 0\n                   tf.constant([], dtype=tf.int32),    # Column 1\n                   tf.constant([], dtype=tf.int32),    # Column 2\n                   tf.constant([], dtype=tf.float32)]  # Column 3\n\ncol0, col1, col2, col3 = tf.decode_csv(value, record_defaults=record_defaults)\n\nassert col0.dtype == tf.int32\nassert col1.dtype == tf.int32\nassert col2.dtype == tf.int32\nassert col3.dtype == tf.float32\n
\n\n

An empty value in record_defaults signifies that the value is required. Alternatively, if (e.g.) column 2 is allowed to have missing values, you would define record_defaults as follows:

\n\n
record_defaults = [tf.constant([], dtype=tf.int32),     # Column 0\n                   tf.constant([], dtype=tf.int32),     # Column 1\n                   tf.constant([0], dtype=tf.int32),    # Column 2\n                   tf.constant([], dtype=tf.float32)]   # Column 3\n
\n\n

The second part of your question concerns how to build and train a model that predicts the value of one of the columns from the input data. Currently, the program doesn't: it simply concatenates the columns into a single tensor, called features. You will need to define and train a model, that interprets that data. One of the simplest such approaches is linear regression, and you might find this tutorial on linear regression in TensorFlow adaptable to your problem.

\n", + "system": "" + }, + { + "instruction": "Getting: tensorflow is not a supported wheel on this platform", + "input": "", + "output": "

I guess pip3 is being used for the installation;
\nit can be solved by using pip2.7 instead.

\n\n

I followed the steps in here

\n\n

hope it helps you:)

\n", + "system": "" + }, + { + "instruction": "Translating a TensorFlow LSTM into synapticjs", + "input": "", + "output": "\n\n

Internally, the LSTMCell class stores the LSTM weights as one big matrix instead of 8 smaller ones, for efficiency. It is quite easy to divide it horizontally and vertically to recover the more conventional representation. However, it might be easier and more efficient if your library performs a similar optimization.

\n\n

Here is the relevant piece of code of the BasicLSTMCell:

\n\n
concat = linear([inputs, h], 4 * self._num_units, True)\n\n# i = input_gate, j = new_input, f = forget_gate, o = output_gate\ni, j, f, o = array_ops.split(1, 4, concat)\n
\n\n

The linear function does the matrix multiplication to transform the concatenated input and the previous h state into 4 matrices of [batch_size, self._num_units] shape. The linear transformation uses a single matrix and bias variables that you're referring to in the question. The result is then split into different gates used by the LSTM transformation.

\n\n

If you'd like to explicitly get the transformations for each gate, you can split that matrix and bias into 4 blocks. It is also quite easy to implement it from scratch using 4 or 8 linear transformations.
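A NumPy sketch of that splitting, with made-up sizes (input_dim and num_units here are purely illustrative):

```python
import numpy as np

# One big weight matrix of shape [input_dim + num_units, 4 * num_units],
# as the LSTM cell stores it (values are just a ramp for illustration).
num_units = 3
input_dim = 2
concat_w = np.arange((input_dim + num_units) * 4 * num_units,
                     dtype=np.float32).reshape(input_dim + num_units,
                                               4 * num_units)

# Split columns into the i, j, f, o gate blocks
# (mirrors array_ops.split(1, 4, concat) in the cell code).
w_i, w_j, w_f, w_o = np.split(concat_w, 4, axis=1)

# Split rows into the input-to-gate and hidden-to-gate parts.
w_xi, w_hi = w_i[:input_dim], w_i[input_dim:]
print(w_i.shape, w_xi.shape, w_hi.shape)  # (5, 3) (2, 3) (3, 3)
```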

\n", + "system": "" + }, + { + "instruction": "Error using dropout in tensorflow", + "input": "", + "output": "

This is a bug in the implementation of tf.nn.dropout that was fixed in a recent commit, and will be included in the next release of TensorFlow. For now, to avoid the issue, either build TensorFlow from source, or modify your program as follows:

\n\n
#dx = tf.nn.dropout(x, keep_prob)\ndx = tf.nn.dropout(tf.identity(x), keep_prob)\n
\n", + "system": "" + }, + { + "instruction": "Tensor Flow Explicit Device Requirement Error", + "input": "", + "output": "

Could not satisfy explicit device specification means you do not have the corresponding device. Do you actually have a CUDA-enabled GPU on your machine?

\n\n

UPDATE: As it turned out in the discussion below, this error is also raised if the particular operation (in this case, RandomShuffleQueue) cannot be executed on the GPU, because it only has a CPU implementation.

\n\n

If you are fine with TensorFlow choosing a device for you (particularly, falling back to CPU when no GPU implementation is available), consider setting allow_soft_placement in your configuration, as per this article:

\n\n
sess = tf.Session(config=tf.ConfigProto(\n    allow_soft_placement=True, log_device_placement=True))\n
\n", + "system": "" + }, + { + "instruction": "Training TensorFlow for Predicting a Column in a csv file", + "input": "", + "output": "

The following reads from a CSV file and builds a TensorFlow program. The example uses the Iris data set, since that may be a more meaningful example. However, it should work for your data as well.

\n\n

Please note, the first column will be [0,1 or 2], since there are 3 species of iris.

\n\n
#!/usr/bin/env python\nimport tensorflow as tf\nimport numpy as np\nfrom numpy import genfromtxt\n\n# Build Example Data is CSV format, but use Iris data\nfrom sklearn import datasets\nfrom sklearn.cross_validation import train_test_split\nimport sklearn\ndef buildDataFromIris():\n    iris = datasets.load_iris()\n    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.33, random_state=42)\n    f=open('cs-training.csv','w')\n    for i,j in enumerate(X_train):\n        k=np.append(np.array(y_train[i]),j   )\n        f.write(\",\".join([str(s) for s in k]) + '\\n')\n    f.close()\n    f=open('cs-testing.csv','w')\n    for i,j in enumerate(X_test):\n        k=np.append(np.array(y_test[i]),j   )\n        f.write(\",\".join([str(s) for s in k]) + '\\n')\n    f.close()\n\n\n# Convert to one hot\ndef convertOneHot(data):\n    y=np.array([int(i[0]) for i in data])\n    y_onehot=[0]*len(y)\n    for i,j in enumerate(y):\n        y_onehot[i]=[0]*(y.max() + 1)\n        y_onehot[i][j]=1\n    return (y,y_onehot)\n\n\nbuildDataFromIris()\n\n\ndata = genfromtxt('cs-training.csv',delimiter=',')  # Training data\ntest_data = genfromtxt('cs-testing.csv',delimiter=',')  # Test data\n\nx_train=np.array([ i[1::] for i in data])\ny_train,y_train_onehot = convertOneHot(data)\n\nx_test=np.array([ i[1::] for i in test_data])\ny_test,y_test_onehot = convertOneHot(test_data)\n\n\n#  A number of features, 4 in this example\n#  B = 3 species of Iris (setosa, virginica and versicolor)\nA=data.shape[1]-1 # Number of features, Note first is y\nB=len(y_train_onehot[0])\ntf_in = tf.placeholder(\"float\", [None, A]) # Features\ntf_weight = tf.Variable(tf.zeros([A,B]))\ntf_bias = tf.Variable(tf.zeros([B]))\ntf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)\n\n# Training via backpropagation\ntf_softmax_correct = tf.placeholder(\"float\", [None,B])\ntf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))\n\n# Train using 
tf.train.GradientDescentOptimizer\ntf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)\n\n# Add accuracy checking nodes\ntf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))\ntf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, \"float\"))\n\n# Initialize and run\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\n\nprint(\"...\")\n# Run the training\nfor i in range(30):\n    sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})\n\n# Print accuracy\n    result = sess.run(tf_accuracy, feed_dict={tf_in: x_test, tf_softmax_correct: y_test_onehot})\n    print \"Run {},{}\".format(i,result)\n\n\n\"\"\"\nBelow is the ouput\n  ...\n  Run 0,0.319999992847\n  Run 1,0.300000011921\n  Run 2,0.379999995232\n  Run 3,0.319999992847\n  Run 4,0.300000011921\n  Run 5,0.699999988079\n  Run 6,0.680000007153\n  Run 7,0.699999988079\n  Run 8,0.680000007153\n  Run 9,0.699999988079\n  Run 10,0.680000007153\n  Run 11,0.680000007153\n  Run 12,0.540000021458\n  Run 13,0.419999986887\n  Run 14,0.680000007153\n  Run 15,0.699999988079\n  Run 16,0.680000007153\n  Run 17,0.699999988079\n  Run 18,0.680000007153\n  Run 19,0.699999988079\n  Run 20,0.699999988079\n  Run 21,0.699999988079\n  Run 22,0.699999988079\n  Run 23,0.699999988079\n  Run 24,0.680000007153\n  Run 25,0.699999988079\n  Run 26,1.0\n  Run 27,0.819999992847\n  ...\n\n Ref:\n https://gist.github.com/mchirico/bcc376fb336b73f24b29#file-tensorflowiriscsv-py\n\"\"\"\n
\n\n

I hope this helps.

\n", + "system": "" + }, + { + "instruction": "How to test tensorflow cifar10 cnn tutorial model?", + "input": "", + "output": "

This isn't 100% the answer to the question, but it's a similar way of solving it, based on a MNIST NN training example suggested in the comments to the question.

\n\n

Based on the TensorFlow beginner MNIST tutorial, and thanks to this tutorial, this is a way of training and using your neural network with custom data.

\n\n

Please note that similar should be done for tutorials such as the CIFAR10, as @Yaroslav Bulatov mentioned in the comments.

\n\n
import input_data\nimport datetime\nimport numpy as np\nimport tensorflow as tf\nimport cv2\nfrom matplotlib import pyplot as plt\nimport matplotlib.image as mpimg\nfrom random import randint\n\n\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\n\nx = tf.placeholder(\"float\", [None, 784])\n\nW = tf.Variable(tf.zeros([784,10]))\nb = tf.Variable(tf.zeros([10]))\n\ny = tf.nn.softmax(tf.matmul(x,W) + b)\ny_ = tf.placeholder(\"float\", [None,10])\n\ncross_entropy = -tf.reduce_sum(y_*tf.log(y))\n\ntrain_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)\n\ninit = tf.initialize_all_variables()\n\nsess = tf.Session()\nsess.run(init)\n\n#Train our model\niter = 1000\nfor i in range(iter):\n  batch_xs, batch_ys = mnist.train.next_batch(100)\n  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n\n#Evaluationg our model:\ncorrect_prediction=tf.equal(tf.argmax(y,1), tf.argmax(y_,1))\n\naccuracy=tf.reduce_mean(tf.cast(correct_prediction,\"float\"))\nprint \"Accuracy: \", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})\n\n#1: Using our model to classify a random MNIST image from the original test set:\nnum = randint(0, mnist.test.images.shape[0])\nimg = mnist.test.images[num]\n\nclassification = sess.run(tf.argmax(y, 1), feed_dict={x: [img]})\n'''\n#Uncomment this part if you want to plot the classified image.\nplt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)\nplt.show()\n'''\nprint 'Neural Network predicted', classification[0]\nprint 'Real label is:', np.argmax(mnist.test.labels[num])\n\n\n#2: Using our model to classify MNIST digit from a custom image:\n\n# create an an array where we can store 1 picture\nimages = np.zeros((1,784))\n# and the correct values\ncorrect_vals = np.zeros((1,10))\n\n# read the image\ngray = cv2.imread(\"my_digit.png\", 0 ) #0=cv2.CV_LOAD_IMAGE_GRAYSCALE #must be .png!\n\n# rescale it\ngray = cv2.resize(255-gray, (28, 28))\n\n# save the processed 
images\ncv2.imwrite(\"my_grayscale_digit.png\", gray)\n\"\"\"\nall images in the training set have an range from 0-1\nand not from 0-255 so we divide our flatten images\n(a one dimensional vector with our 784 pixels)\nto use the same 0-1 based range\n\"\"\"\nflatten = gray.flatten() / 255.0\n\"\"\"\nwe need to store the flatten image and generate\nthe correct_vals array\ncorrect_val for a digit (9) would be\n[0,0,0,0,0,0,0,0,0,1]\n\"\"\"\nimages[0] = flatten\n\n\nmy_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})\n\n\"\"\"\nwe want to run the prediction and the accuracy function\nusing our generated arrays (images and correct_vals)\n\"\"\"\nprint 'Neural Network predicted', my_classification[0], \"for your digit\"\n
\n\n

For further image conditioning (digits should be completely dark on a white background) and better NN training (accuracy > 91%), please check the Advanced MNIST tutorial from TensorFlow or the 2nd tutorial I've mentioned.

\n", + "system": "" + }, + { + "instruction": "Error while importing Tensorflow in python2.7 in Red Hat release 6.6. 'GLIBC_2.17 not found'", + "input": "", + "output": "
\n

This is essentially a repeat of the question asked here.

\n
\n\n

And the same answer should work. What stops you from downloading glibc-2.17 packages, unpacking them, and using the glibc-2.17/ld.so with correct --library-path?

\n\n
\n

Is it possible to install tensorflow locally, without upgrading OS.

\n
\n\n

Yes, using above answer.

\n", + "system": "" + }, + { + "instruction": "Conditional execution in TensorFlow", + "input": "", + "output": "

Hrm. It's possible that what you want is tf.control_flow_ops.cond()\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/control_flow_ops.py#L597

\n\n

But that's not exported into the tf namespace, and I'm answering without checking how guaranteed-stable this interface is, but it's used in released models, so go for it. :)

\n\n

However: Because you actually know in advance what path you want when you construct the feed_dict, you could also take a different approach of invoking a separate path through your model. The standard way to do this is to, e.g., set up code like:

\n\n
def model(input, n_greater_than):\n  ... cleverness ...\n  if n_greater_than:\n     ... other cleverness...\n  return tf.reduce_mean(input)\n\n\nout1 = model(input, True)\nout2 = model(input, False)\n
\n\n

And then pull the out1 or out2 nodes depending upon what you know when you're about to run your computation and set the feed_dict. Remember that by default, if the model references the same variables (create them outside the model() func), then you'll basically have two separate paths through.

\n\n

You can see an example of this in the convolutional mnist example: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py#L165

\n\n

I'm a fan of doing it this way without introducing control flow dependencies if you can.

\n", + "system": "" + }, + { + "instruction": "conditional graph in tensorflow and for loop that accesses tensor size", + "input": "", + "output": "

I recommend posting each question separately. Otherwise it will be closed as too broad.

\n\n

I can answer only your 3rd question: how to programmatically get the shape of a tensor. You are correctly using shape to get the shape of the tensor, but you still cannot get the result before you run the graph (look at my explanation here).

\n\n
a = tf.truncated_normal([2, 3], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)\nb = tf.shape(a)\nsess = tf.Session()\nprint sess.run(b) # will give you [2 3]\n
\n\n

The ugly way that I have found to get the shape of constants without running the graph is to do something like this (I do not really know why you would need it):

\n\n
print a._shape._dims[0]._value\nprint a._shape._dims[1]._value\n
\n\n

To get the shape from a variable, you can do this:

\n\n
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35))\nprint weights.get_shape().as_list()\n
\n\n

Another way to access a shape of a Tensor before the evaluation is: tf.Tensor.get_shape()

\n", + "system": "" + }, + { + "instruction": "tensorflow: saving and restoring session", + "input": "", + "output": "

TL;DR: You should try to rework this class so that self.create_network() is called (i) only once, and (ii) before the tf.train.Saver() is constructed.

\n\n

There are two subtle issues here, which are due to the code structure, and the default behavior of the tf.train.Saver constructor. When you construct a saver with no arguments (as in your code), it collects the current set of variables in your program, and adds ops to the graph for saving and restoring them. In your code, when you call tflasso(), it will construct a saver, and there will be no variables (because create_network() has not yet been called). As a result, the checkpoint should be empty.

\n\n

The second issue is that—by default—the format of a saved checkpoint is a map from the name property of a variable to its current value. If you create two variables with the same name, they will be automatically \"uniquified\" by TensorFlow:

\n\n
v = tf.Variable(..., name=\"weights\")\nassert v.name == \"weights\"\nw = tf.Variable(..., name=\"weights\")\nassert w.name == \"weights_1\"  # The \"_1\" is added by TensorFlow.\n
\n\n

The consequence of this is that, when you call self.create_network() in the second call to tfl.fit(), the variables will all have different names from the names that are stored in the checkpoint—or would have been if the saver had been constructed after the network. (You can avoid this behavior by passing a name-Variable dictionary to the saver constructor, but this is usually quite awkward.)

\n\n

There are two main workarounds:

\n\n
    \n
  1. In each call to tflasso.fit(), create the whole model afresh, by defining a new tf.Graph, then in that graph building the network and creating a tf.train.Saver.

  2. \n
  3. RECOMMENDED Create the network, then the tf.train.Saver in the tflasso constructor, and reuse this graph on each call to tflasso.fit(). Note that you might need to do some more work to reorganize things (in particular, I'm not sure what you do with self.X and self.xlen) but it should be possible to achieve this with placeholders and feeding.

  4. \n
\n", + "system": "" + }, + { + "instruction": "How can I train a simple, non-linear regression model with tensor flow?", + "input": "", + "output": "

The InvalidArgumentError is due to the values that you are feeding (train_X and train_Y) not having the necessary shape to be multiplied by W1.

\n\n

There are a few issues here:

\n\n
    \n
  1. The statement mul = X * W1 should be mul = tf.matmul(X, W1), since * computes an elementwise multiplication, which is not what your equation specifies.

  2. \n
  3. The input data X should be a one-column matrix. To handle scalar and vector data - as you have in your feed calls, you could reshape it as follows:

    \n\n
    X = tf.placeholder(tf.float32)\nreshaped_X = tf.reshape(X, [-1, 1])\n# ...\nmul = reshaped_X * W1\n
  4. \n
  5. When you fetch the final cost, the first argument to sess.run should be l2_loss (and not cost):

    \n\n
    print(\"cost=\", sess.run(l2_loss, feed_dict={X: train_X, Y: train_Y}),\n      \"W1=\", sess.run(W1))\n
  6. \n
\n", + "system": "" + }, + { + "instruction": "Basic neural network in TensorFlow", + "input": "", + "output": "

The main reason that your network isn't training is that the statement:

\n\n
sess.run(train_op, feed_dict={X: trX, Y: trY})\n
\n\n

\u2026only executes once. In TensorFlow, running train_op (or whatever operation is returned from Optimizer.minimize()) will only cause the network to take a single gradient descent step. You should execute it in a loop to perform iterative training, and the weights will eventually converge.

\n\n

Two other tips: (i) you might achieve faster convergence if you feed a subset of your training data in each step, rather than the entire dataset; and (ii) the learning rate of 0.5 is probably too high (although this depends on the data).

\n", + "system": "" + }, + { + "instruction": "Getting good mixing with many input datafiles in tensorflow", + "input": "", + "output": "

Yes - what you want is to use a combination of two things.\n(Note that this answer was written for TensorFlow v1, and some of the functionality has been replaced by the new tf.data pipelines; I've updated the answers to point to the v1 compat versions of things, but if you're coming to this answer for new code, please consult tf.data instead.)

\n

First, randomly shuffle the order in which you input your datafiles, by reading from them using a tf.train.string_input_producer with shuffle=True that feeds into whatever input method you use (if you can put your examples into tf.Example proto format, that's easy to use with parse_example). To be very clear, you put the list of filenames in the string_input_producer and then read them with another method such as read_file, etc.

\n

Second, you need to mix at a finer granularity. You can accomplish this by feeding the input examples into a tf.train.shuffle_batch node with a large capacity and large value of min_after_dequeue. One particularly nice way is to use a shuffle_batch_join that receives input from multiple files, so that you get a lot of mixing. Set the capacity of the batch big enough to mix well without exhausting your RAM. Tens of thousands of examples usually works pretty well.

\n

Keep in mind that the batch functions add a QueueRunner to the QUEUE_RUNNERS collection, so you need to run tf.train.start_queue_runners()
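A rough plain-Python sketch of the kind of mixing a shuffle queue with min_after_dequeue provides; this is an illustration of the idea, not the actual queue implementation:

```python
import random

# Keep a buffer of at least `min_after_dequeue` elements and dequeue a
# random element each step: a larger buffer gives better mixing.
def shuffled(stream, min_after_dequeue=4, seed=0):
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) > min_after_dequeue:
            yield buf.pop(rng.randrange(len(buf)))
    # Drain the remaining buffer once the input stream is exhausted.
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

mixed = list(shuffled(range(10)))
print(mixed)  # a permutation of 0..9
```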

\n", + "system": "" + }, + { + "instruction": "How do I run a python script with Tensorflow running in a Docker on Windows?", + "input": "", + "output": "

If you want your container (which already has TensorFlow preinstalled, since it is running from the TensorFlow image) to access your script, you need to mount that script from your host onto a local path in your container.

\n\n
docker run -v /path/to/your/script:/path/to/script\n
\n\n

See \"Mount a host file as a data volume\".

\n\n
\n

The -v flag can also be used to mount a single file - instead of just directories - from the host machine.

\n
\n\n
$ docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash\n
\n\n

Then, from your container, you will access the same script in /path/to/script.

\n\n

Alex Pryiomka gives an example of running such a script in tensorflow with \"How to run Python Scripts on Mac Terminal using Docker with Tensorflow?\"

\n", + "system": "" + }, + { + "instruction": "Using string labels in Tensorflow", + "input": "", + "output": "

The convert_to_records.py script creates a .tfrecords file in which each record is an Example protocol buffer. That protocol buffer supports string features using the bytes_list kind.

\n\n

The tf.decode_raw op is used to parse binary strings into image data; it is not designed to parse string (textual) labels. Assuming that features['label'] is a tf.string tensor, you can use the tf.string_to_number op to convert it to a number. There is limited other support for string processing inside your TensorFlow program, so if you need to perform some more complicated function to convert the string label to an integer, you should perform this conversion in Python in the modified version of convert_to_records.py.
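For example, a simple Python-side conversion from string labels to integer ids, done before writing the records, could look like this (the label values here are made up):

```python
# Made-up string labels as they might appear in the raw data.
labels = ["cat", "dog", "cat", "bird"]

# Build a stable vocabulary (sorted so ids are reproducible) and map
# each string label to its integer id.
vocab = {name: idx for idx, name in enumerate(sorted(set(labels)))}
int_labels = [vocab[name] for name in labels]

print(vocab)       # {'bird': 0, 'cat': 1, 'dog': 2}
print(int_labels)  # [1, 2, 1, 0]
```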

\n", + "system": "" + }, + { + "instruction": "How to work with Tensorflow on Android platform?", + "input": "", + "output": "

The TensorFlow source repository includes an Android example application, with some documentation.

\n\n

The Android example includes a pre-trained model for image classification, and uses this to classify images captured by the camera. Typically you would build and train a model using the Python API; generate a serialised version of the model as a GraphDef protocol buffer (and possibly a checkpoint of the model parameters); and then load that and run inference steps using the C++ API.

\n", + "system": "" + }, + { + "instruction": "A reusable Tensorflow convolutional Network", + "input": "", + "output": "
tensorflow.python.framework.errors.InvalidArgumentError: Input has 14005248 values, which isn't divisible by 3136\n [[Node: Reshape_4 = Reshape[T=DT_FLOAT, _device=\"/job:localhost/replica:0/task:0/cpu:0\"](MaxPool_5, Reshape_4/shape)]]\n
\n\n

But the way you executed it prevents you from seeing the actual line causing the problem. Save it to a file and run it with python <file>.

\n\n
  File \"<stdin>\", line 1, in <module>\n
\n\n

But the answer is that you haven't changed the size of your convolutional and pooling layers, so when you used to run 28x28 images through, they eventually shrunk down to a 7x7x(convolutional_depth) layer. Now you're running huge images through, so after the first convolutional layer and the 2x2 maxpool, you've got a VERY BIG thing you're trying to feed in, but you're reshaping to:

\n\n
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\nh_pool2 = max_pool_2x2(h_conv2)\n\nW_fc1 = weight_variable([7 * 7 * 64, 1024])\nb_fc1 = bias_variable([1024])\n
\n\n

The output of h_pool2 is much larger with your larger images. You need to shrink them down more - likely with more convolutional and maxpooling layers. You could also try increasing the size of W_fc1 to match the input size that's getting there. It's running through two 2x2 maxpools - each shrinks the size by 2 in the x and y dimensions. 28x28x1 --> 14x14x32 --> 7x7x64. So YOUR images are going from 388 x 191 --> 194 x 95 --> 97 x 47

\n\n

As a warning, a fully connected layer with 97*47 = 4559 inputs is going to be glacially slow.
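The size arithmetic above can be sketched in plain Python (a simplified halving model of the two 2x2 max-pools, ignoring padding details):

```python
# Each 2x2 max-pool halves the height and width (integer division).
def after_pools(height, width, num_pools=2):
    for _ in range(num_pools):
        height, width = height // 2, width // 2
    return height, width

print(after_pools(28, 28))    # (7, 7)   -> matches the 7 * 7 * 64 reshape
print(after_pools(388, 191))  # (97, 47) -> far too big for that reshape
```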

\n", + "system": "" + }, + { + "instruction": "initialising Seq2seq embedding with pretrained word2vec", + "input": "", + "output": "

I think you've gotten your answer in the mailing list but I am putting it here for posterity.

\n\n

https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/bH6S98NpIJE

\n\n
\n

You can initialize it randomly and afterwards do:\n session.run(embedding.assign(my_word2vec_matrix))

\n \n

This will override the init values.

\n
\n\n

This seems to work for me. I believe trainable=False is needed to keep the values fixed?

\n\n
# load word2vec model (say from gensim)\nmodel = load_model(FILENAME, binary=True)\n\n# embedding matrix\nX = model.syn0\nprint(type(X)) # numpy.ndarray\nprint(X.shape) # (vocab_size, embedding_dim)\n\n# start interactive session\nsess = tf.InteractiveSession()\n\n# set embeddings\nembeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False)\n\n# initialize\nsess.run(tf.initialize_all_variables())\n\n# override inits\nsess.run(embeddings.assign(X))\n
\n", + "system": "" + }, + { + "instruction": "matrix determinant differentiation in tensorflow", + "input": "", + "output": "

Please check the \"Implement Gradient in Python\" section here.

\n\n

In particular, you can implement it as follows

\n\n
@ops.RegisterGradient(\"MatrixDeterminant\")\ndef _MatrixDeterminantGrad(op, grad):\n  \"\"\"Gradient for MatrixDeterminant. Use formula from 2.2.4 from\n  An extended collection of matrix derivative results for forward and reverse\n  mode algorithmic differentiation by Mike Giles\n  -- http://eprints.maths.ox.ac.uk/1079/1/NA-08-01.pdf\n\"\"\"\n  A = op.inputs[0]\n  C = op.outputs[0]\n  Ainv = tf.matrix_inverse(A)\n  return grad*C*tf.transpose(Ainv)\n
\n\n

Then a simple training loop to check that it works:

\n\n
a0 = np.array([[1,2],[3,4]]).astype(np.float32)\na = tf.Variable(a0)\nb = tf.square(tf.matrix_determinant(a))\ninit_op = tf.initialize_all_variables()\nsess = tf.InteractiveSession()\ninit_op.run()\n\nminimization_steps = 50\nlearning_rate = 0.001\noptimizer = tf.train.GradientDescentOptimizer(learning_rate)\ntrain_op = optimizer.minimize(b)\n\nlosses = []\nfor i in range(minimization_steps):\n  train_op.run()\n  losses.append(b.eval())\n
\n\n

Then you can visualize your loss over time

\n\n
import matplotlib.pyplot as plt\n\nplt.ylabel(\"Determinant Squared\")\nplt.xlabel(\"Iterations\")\nplt.plot(losses)\n
\n\n

You should see a plot of the loss (the squared determinant) decreasing over the iterations.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - Text Classification using Neural Networks", + "input": "", + "output": "

I've started putting together a set of examples for text classification on DBPedia dataset (predicting class of object from its description) as part of examples for Scikit Flow:\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py

\n\n

I'm going to expand this example and write a blog post once enough different models are showcased. Feel free to suggest other datasets and models you would be interested to see.

\n", + "system": "" + }, + { + "instruction": "How do we deploy a trained tensorflow model on a mobile device?", + "input": "", + "output": "

The TensorFlow repository includes an example Android application that uses the mobile device camera as a data source, and the Inception image classification model for inference. The source can be found here, and the repository includes both the full source code and a link to download a trained model.

\n\n

The model is the Inception model that won Imagenet\u2019s Large Scale Visual Recognition Challenge in 2014.

\n", + "system": "" + }, + { + "instruction": "What is the format for device filters in TensorFlow?", + "input": "", + "output": "

The ConfigProto.device_filters field is currently ignored by TensorFlow, although it is intended to support your use case in future. If you want to achieve the same end of running ops on /gpu:1 and /cpu:0, you can do that as follows, using \"soft placement\":

\n\n
with tf.device(\"/gpu:1\"):\n  # Build your model in this with context. All nodes will get the\n  # device \"/gpu:1\".\n\nwith tf.Session(config=tf.ConfigProto(allow_soft_placement=True)):\n  # Execute your mode in this with context.\n  # Soft placement will use /gpu:1 for GPU-compatible ops, and /cpu:0\n  # for CPU-only ops.\n
\n", + "system": "" + }, + { + "instruction": "Convolutional neural networks and 3D images", + "input": "", + "output": "

TensorFlow now supports 3D convolution and 3D pooling in the master branch.

\n\n

You can use them with 5D tensors as input with shape: [batch_size, depth, height, width, channels].
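To illustrate the shape contract (a pure-Python sketch, not TensorFlow code; all sizes here are made up): with VALID padding and stride 1, each spatial dimension shrinks by kernel_size - 1.

```python
# Hypothetical sizes for a 5D input [batch, depth, height, width, channels]
# and a 3D kernel [kd, kh, kw, in_channels, out_channels].
batch, depth, height, width, in_ch = 2, 8, 16, 16, 3
kd, kh, kw, out_ch = 3, 3, 3, 4

# With VALID padding and stride 1, each spatial dim shrinks by (k - 1).
def conv3d_valid_shape(in_shape, kernel_shape):
    b, d, h, w, _ = in_shape
    k_d, k_h, k_w, _, out_c = kernel_shape
    return (b, d - k_d + 1, h - k_h + 1, w - k_w + 1, out_c)

out_shape = conv3d_valid_shape((batch, depth, height, width, in_ch),
                               (kd, kh, kw, in_ch, out_ch))
print(out_shape)  # (2, 6, 14, 14, 4)
```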

\n", + "system": "" + }, + { + "instruction": "Sequence Labeling in TensorFlow", + "input": "", + "output": "

I suggest you start by reading the RNN tutorial and sequence-to-sequence tutorial. They explain how to build LSTMs in TensorFlow. Once you're comfortable with that, you'll have to find the right embedding Variable and assign it using your pre-trained word2vec model.

\n", + "system": "" + }, + { + "instruction": "Session.run() /Tensor.eval() of Tensorflow run for a crazy long time", + "input": "", + "output": "

1) As a basic sanity check: ls -al /Users/me/Downloads/cifar-10-batches-bin/data_batch_1.bin

\n\n

2) Don't forget to:

\n\n
init = tf.initialize_all_variables()\nsess.run(init)\n
\n\n

3) tf.train.start_queue_runners() (after creating your session)

\n\n

It's probably #3. The string_input_producer adds a queue runner to the QUEUE_RUNNERS collection, which needs to be started.

\n", + "system": "" + }, + { + "instruction": "How can I execute a TensorFlow graph from a protobuf in C++?", + "input": "", + "output": "

The issue here is that you are running the \"train_step\" target, which performs much more work than just inference. In particular, it attempts to update the variables W and b with the result of the gradient descent step. The error message

\n\n
Invalid argument: Input 0 of node train_step/update_W/ApplyGradientDescent was passed float from _recv_W_0:0 incompatible with expected float_ref.\n
\n\n

...means that one of the nodes you attempted to run (\"train_step/update_W/ApplyGradientDescent\") expected a mutable input (with type float_ref) but it got an immutable input (with type float) because the value was fed in.

\n\n

There are (at least) two possible solutions:

\n\n
    \n
  1. If you only want to see predictions for a given input and given weights, fetch \"softmax:0\" instead of \"train_step\" in the call to Session::Run().

  2. \n
  3. If you want to perform training in C++, do not feed W and b, but instead assign values to those variables, then continue to execute \"train_step\". You may find it easier to create a tf.train.Saver when you build the graph in Python, and then invoke the operations that it produces to save and restore values from a checkpoint.

  4. \n
\n", + "system": "" + }, + { + "instruction": "Use of matrix multiplication function in TensorFlow", + "input": "", + "output": "

I think you must be misreading the mechanics 101 tutorial - or could you point to the specific line?

\n\n

In general, for a network layer, I think of the inputs \"flowing through\" the weights. To represent that, I write tf.matmul(Inputs, Weights) to produce the output of that layer. That output may then have a bias b added to it, and the result of that fed into a nonlinear function such as a relu, and then into another tf.matmul as the input for the next layer.

\n\n

Second, remember that the Weights matrix may be sized to produce multiple outputs. That's why it's a matrix, not just a vector. For example, if you wanted two hidden units and you had five input features, you would use a shape [5, 2] weight matrix, like this (shown in numpy for ease of exposition - you can do the same thing in tensorflow):

\n\n
import numpy as np\na = np.array([1, 2, 3, 4, 5])\nW = np.array([[.5, .6], [.7, .8], [.9, .1], [.2, .3], [.4, .5]])\n\n>>> np.dot(a, W)\narray([ 7.4,  6.2])\n
\n\n

This has the nice behavior that if you then add a batch dimension to a, it still works:\n a = np.array([[1, 2, 3, 4, 5],\n [6, 7, 8, 9, 0]])

\n\n
>>> np.dot(a, W)\narray([[  7.4,   6.2],\n       [ 20.9,  17.7]])\n
\n\n

This is exactly what you're doing when you use tf.matmul to go from input features to hidden units, or from one layer of hidden units to another.

\n", + "system": "" + }, + { + "instruction": "How to directly write to summary which mimics scalar_summary?", + "input": "", + "output": "\n\n

Alternatively, if you want to generate a TensorBoard log in pure Python code, you can do the following:

\n\n
summary_writer = tf.train.SummaryWriter(log_dir)\nfor i in range(10000):\n    value = 0.2 * i\n    summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])\n    summary_writer.add_summary(summary, global_step=i)\nsummary_writer.close()\n
\n", + "system": "" + }, + { + "instruction": "Merge string tensors in TensorFlow", + "input": "", + "output": "

Thanks to your question, we prioritized adding support for string concatenation in TensorFlow, and added it in this commit. String concatenation is implemented using the existing tf.add() operator, to match the behavior of NumPy's add operator (including broadcasting).

\n\n

To implement your example, you can write:

\n\n
complete = left + middle + right\n
\n\n

\u2026or, equivalently, if you want to name the resulting tensor:

\n\n
complete = tf.add(tf.add(left, middle), right, name=\"COMPLETE\")\n
\n\n

We have not yet added support for strings in tf.add_n() (or related ops like tf.reduce_sum()) but will consider this if there are use cases for it.

\n\n

NOTE: To use this functionality immediately, you will need to build TensorFlow from source. The new op will be available in the next release of TensorFlow (0.7.0).

\n", + "system": "" + }, + { + "instruction": "skflow regression predict multiple values", + "input": "", + "output": "

I've just added support for multi-output regression into skflow in commit #e443c734, so please reinstall the package and try again. If it doesn't work, please follow up on Github.

\n\n

I also added an example of multi-output regression to the examples folder:

\n\n
# Create random dataset.\nrng = np.random.RandomState(1)\nX = np.sort(200 * rng.rand(100, 1) - 100, axis=0)\ny = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T\n\n# Fit regression DNN model.\nregressor = skflow.TensorFlowDNNRegressor(hidden_units=[5, 5])\nregressor.fit(X, y)\nscore = mean_squared_error(regressor.predict(X), y)\nprint(\"Mean Squared Error: {0:f}\".format(score))\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow not using GPU", + "input": "", + "output": "

The issue looks to be that when you bazel run the translation example, it rebuilds without GPU support. Try adding --config=cuda to the bazel run command, as follows:

\n\n
$ bazel run -c opt --config=cuda //tensorflow/models/rnn/translate:translate\n
\n\n

Without this option, Bazel will recompile the entire TensorFlow runtime without GPU support, and use this version when it runs the example application.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - why doesn't this sofmax regression learn anything?", + "input": "", + "output": "

For starters, try initializing your W matrix with random values, not zeros - you're not giving the optimizer anything to work with when the output is all zeros for all inputs.

\n\n

Instead of:

\n\n
W = tf.Variable(tf.zeros([28*28, num_categories]))\n
\n\n

Try:

\n\n
W = tf.Variable(tf.truncated_normal([28*28, num_categories],\n                                    stddev=0.1))\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow - classification of videos", + "input": "", + "output": "

It would be possible to modify the image models shipped with TensorFlow to train a neural network on such video data.

\n\n

The main challenges would be:

\n\n
    \n
  1. Creating a custom reader that handles the appropriate video file format (or converting your video data to one of the supported formats).

  2. \n
  3. Using the image ops to generate the multi-resolution images required in the Karpathy et al. model.

  4. \n
  5. (Possibly) using queues to perform pipelined video processing and ensure that I/O doesn't become a bottleneck.

  6. \n
\n", + "system": "" + }, + { + "instruction": "Does the output of tf.nn.bias_add(value, bias) ever have a different shape than shape of value", + "input": "", + "output": "

The shape of the result of tf.nn.bias_add(value, bias) is always the same as the shape of value, so these calls to tf.reshape() are unnecessary.

\n\n

Occasionally, calls to tf.reshape() are used to add explicit information about the shape, but the recommended way to do this, per the FAQ, is to use the Tensor.set_shape() method to add shape information without adding a redundant operation to the graph.

\n", + "system": "" + }, + { + "instruction": "TensorFlow initializing Tensor of ones", + "input": "", + "output": "

The way to solve your problem is to use tf.pack operation:

\n\n

o = tf.ones(shape=tf.pack([tf.shape(X)[0], 1]))

\n\n

The reason you had errors is that a TensorFlow shape is expected to be a list of integers or a Tensor. tf.pack makes it easy to convert a list of integers and/or TensorFlow scalars into a Tensor object.
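For intuition, here is the NumPy analogue of the fix (illustrative only; the sizes are made up): the shape of the ones array is built from a dimension that is only known at run time.

```python
import numpy as np

# X stands in for a tensor whose leading (batch) dimension is only known
# at run time; the column of ones is built from that dynamic dimension.
X = np.zeros((5, 3))
o = np.ones((X.shape[0], 1))

print(o.shape)  # (5, 1)
```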

\n", + "system": "" + }, + { + "instruction": "TensorFlow cholesky decomposition", + "input": "", + "output": "

user19..8: The way to do this for now if you want to keep things \"mostly\" in tensorflow would be to do what you and Berci were discussing in the comments: Run the tensorflow graph until the point where you need to solve the linear system, and then feed the results back in with a feed_dict. In pseudocode:

\n\n
saved_tensor1 = tf.Variable(...)\nsaved_tensor2 = tf.Variable(...)\n\nstart_of_model...\ntensor1, tensor2 = various stuff...\ndo_save_tensor1 = saved_tensor1.assign(tensor1)\ndo_save_tensor2 = saved_tensor2.assign(tensor2)\nyour_cholesky = tf.cholesky(your_other_tensor, ...)\n\n## THIS IS THE SPLIT POINT\n# Second half of your model starts here\nsolved_system = tf.placeholder(...)  # You'll feed this in with feed_dict\nfinal_answer = do_something_with(saved_tensor1, saved_tensor2, solved_system)\n
\n\n

Then to run the whole thing, do:

\n\n
_, _, cho = sess.run([do_save_tensor1, do_save_tensor2, your_cholesky])\nsolution = ... solve your linear system with scipy ...\nfeed_dict = {solved_system: solution}\nanswer = sess.run(final_answer, feed_dict=feed_dict)\n
\n\n

The key here is stashing your intermediate results in tf.Variables so that you can resume the computation afterwards.

\n\n

(I'm not promising that what you get out of tf.cholesky is in the right format to feed directly to scipy, or that you shouldn't just pull out the matrix in an earlier step and feed it to scipy, but this overall workflow should work for you).

\n\n

Note that this will create a performance bottleneck if you're doing heavily multicore or GPU operations and then have to serialize on spitting the matrix out to scipy, but it might also be just fine - depends a lot on your setting.

\n", + "system": "" + }, + { + "instruction": "tensorflow loss minimization type error", + "input": "", + "output": "

Currently the tf.train.GradientDescentOptimizer class only supports training on 32-bit floating-point variables and loss values.

\n\n

However, it looks like the kernel is implemented for double-precision values, so it should be possible to train in your scenario.

\n\n

A quick workaround would be to define a subclass that supports tf.float64 values as well:

\n\n
class DoubleGDOptimizer(tf.train.GradientDescentOptimizer):\n  def _valid_dtypes(self):\n    return set([tf.float32, tf.float64])\n
\n\n

...and then use DoubleGDOptimizer in place of tf.train.GradientDescentOptimizer.

\n\n

EDIT: You'll need to pass in the learning rate as tf.constant(learning_rate, tf.float64) to make this work.

\n\n

(N.B. This isn't a supported interface and it may be subject to change in future, but the team is aware of the desire for optimizing double-precision floats, and intends to provide a built-in solution.)

\n", + "system": "" + }, + { + "instruction": "Cannot find any scalar summaries in TensorBoard", + "input": "", + "output": "

Try to use

\n\n

tensorboard --logdir=/home/$USER/tensorflow/tensorflow/g3doc/tutorials/mnist/data

\n\n

or

\n\n

tensorboard --logdir=${PWD} in that directory

\n\n

Because tensorboard checks path existence by using os.path.exists()

\n\n


\n\n

Regarding that, I like to set the alias tensorboard='tensorboard --logdir=${PWD}' for convenience

\n", + "system": "" + }, + { + "instruction": "Port TensorFlow code to Android", + "input": "", + "output": "

The typical way to do this is to build (and train) your model using Python, save the GraphDef proto to a file using tf.train.write_graph(), and then write an app using the JNI to call the C++ TensorFlow API (see a complete example here).

\n\n

When you build your graph in Python, you should take note of the names of the tensors that will represent (i) the input data to be classified, and (ii) the predicted output values. Then you will be able to run a step by feeding a value for (i), and fetching the value for (ii).

\n\n

One final concern is how to represent the model parameters in your exported graph. There are several ways to do this, including shipping a TensorFlow checkpoint (written by a tf.train.Saver) as part of your app, and running the restore ops to reload it. One method, which has been used in the released InceptionV3 model is to rewrite the graph so that the model parameters are replaced with \"Const\" nodes, and the model graph becomes self contained.

\n", + "system": "" + }, + { + "instruction": "Variables on CPU, training/gradients on GPU", + "input": "", + "output": "

Indeed, in cifar10-train the activations and gradients are on GPU, only the parameters are on CPU. You are right that this is not optimal for single-GPU training due to the cost of copying parameters between CPU and GPU. I suspect the reason it is done this way is to have a single library for single-GPU and multi-GPU models, as in the multi-GPU case, it is probably faster to have parameters on CPU. You can test easily what speedup you can get by moving all variables to GPU, just remove the \"with tf.device('/cpu:0')\" in \"_variable_on_cpu\" in cifar10.py.

\n", + "system": "" + }, + { + "instruction": "Restricting number of cores used", + "input": "", + "output": "

use_per_session_threads will only affect the inter_op_parallelism_threads but not the intra_op_parallelism_threads. The intra_op_parallelism_threads will be used for the Eigen thread pool (see here) which is always global, thus subsequent sessions will not influence this anymore.

\n\n

Note that there are other TF functions which can also trigger the initialization of the Eigen thread pool, so it can happen that it's already initialized before you create the first tf.Session. One example is tensorflow.python.client.device_lib.list_local_devices().

\n\n

I solve this by creating a dummy session with the appropriate values very early in my Python script.

\n", + "system": "" + }, + { + "instruction": "Difference between local and dense layers in CNNs", + "input": "", + "output": "

Quoting from cuda-convnet:

\n\n
\n

Locally-connected layer with unshared-weight: This kind of layer is just like a convolutional layer, but without any weight-sharing. That is to say, a different set of filters is applied at every (x, y) location in the input image. Aside from that, it behaves exactly as a convolutional layer.

\n
\n\n

In the TensorFlow CIFAR-10 example, although the two layers are named local3 and local4, they are actually fully-connected layers, not locally-connected layers as specified in cuda-convnet (you can see that the output from pool2 is flattened into the input of the local3 layer).

\n", + "system": "" + }, + { + "instruction": "Overriding device scope in Tensorflow", + "input": "", + "output": "

Mostly, with a few subtle points:

\n\n

(a) \"b\" and \"c\" could be computed in parallel, provided there are no control flow dependencies or data dependencies in what they're doing. But whether they actually are executed truly at the same time is unpredictable from this example. (I assume that was already obvious, but I wanted to be sure it was to others who might read this later.)

\n\n

Note also that as specified, b and c don't explicitly depend on a, so it's possible that all three of them could be executed concurrently. It is not the case that a must be executed first.

\n\n

(b) By default, if you don't supply any configuration options, device placement is \"soft\" -- the runtime can override things if the op can't be executed on the specific device. For example, a CPU-only op could be moved from a GPU back to /cpu:0; or an op pinned to /gpu:1 could be moved to /gpu:0 if the graph was run on a machine that had only a single GPU.

\n\n

You can control the hard-vs-soft placement by supplying a configuration to the tf.Session:

\n\n
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)):\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Tutorial: Duplicated Shuffling in the Input Pipeline", + "input": "", + "output": "

Yes - this is a common pattern, and it's shown in the most general way. The string_input_producer shuffles the order in which the data files are read. Each data file typically contains many examples, for efficiency. (Reading a million small files is very slow; it's better to read 1000 large files with 1000 examples each.)

\n\n

Therefore, the examples from the files are read into a shuffling queue, where they are shuffled at a much finer granularity, so that examples from the same file aren't always trained in the same order, and to get mixing across the input files.

\n\n

For more details, see Getting good mixing with many input datafiles in tensorflow

\n\n

If your files each contain only one input example, you don't need to shuffle multiple times and could get away with only a string_input_producer, but note that you still will likely benefit from having a queue that holds a few images after reading, so that you can overlap the inputting and training of your network. The queue_runner for a batch or shuffle_batch will run in a separate thread, ensuring that the I/O is happening in the background and that images are always available for training. And, of course, it's typically nice for speed to create minibatches to train on.
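The two-level mixing described above can be modelled in plain Python (file names and contents here are made up): shuffle the file order coarsely, then mix examples at a finer granularity through a bounded buffer, the way a shuffling queue does.

```python
import random

# A plain-Python model of the two-level shuffle: shuffle the file order
# (as string_input_producer does), then mix examples through a bounded
# buffer (as a shuffling queue does).
def shuffled_examples(files, read_file, buffer_size, rng):
    rng.shuffle(files)          # coarse shuffle: file order
    buffer = []
    for f in files:
        for example in read_file(f):
            buffer.append(example)
            if len(buffer) >= buffer_size:
                # fine shuffle: emit a random buffered example
                yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:               # drain what is left at the end
        yield buffer.pop(rng.randrange(len(buffer)))

files = ["a", "b", "c"]
read_file = lambda name: [(name, i) for i in range(4)]  # 4 fake examples each
out = list(shuffled_examples(files, read_file, buffer_size=6,
                             rng=random.Random(0)))
print(len(out))  # 12 -- every example appears exactly once, in mixed order
```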

\n", + "system": "" + }, + { + "instruction": "Tensorflow: Multi-GPU single input queue", + "input": "", + "output": "

You're correct that the code for the CIFAR-10 model uses multiple input queues (through multiple calls to cifar10.distorted_inputs() via cifar10.tower_loss()).

\n\n

The easiest way to use a shared queue between the GPUs would be to do the following:

\n\n
    \n
  1. Increase the batch size by a factor of N, where N is the number of GPUs.

  2. \n
  3. Move the call to cifar10.distorted_inputs() out of cifar10.tower_loss() and outside the loop over GPUs.

  4. \n
  5. Split the images and labels tensors that are returned from cifar10.distorted_inputs() along the 0th (batch) dimension:

    \n\n
    images, labels = cifar10.distorted_inputs()\nsplit_images = tf.split(0, FLAGS.num_gpus, images)\nsplit_labels = tf.split(0, FLAGS.num_gpus, labels)\n
  6. \n
  7. Modify cifar10.tower_loss() to take images and labels arguments, and invoke it as follows:

    \n\n
    for i in xrange(FLAGS.num_gpus):\n  with tf.device('/gpu:%d' % i):\n    with tf.name_scope('%s_%d' % (cifar10.TOWER_NAME, i)) as scope:\n\n      loss = tower_loss(scope, split_images[i], split_labels[i])\n
  8. \n
\n", + "system": "" + }, + { + "instruction": "How to keep calculated values in a Tensorflow graph (on the GPU)?", + "input": "", + "output": "

The answer is to store the value in a tf.Variable by assigning to it using the assign operation:

\n\n

working code:

\n\n
import tensorflow as tf\nwith tf.Session() as s:\n    a = tf.Variable(tf.constant(1.),name=\"a\")\n    b = tf.Variable(tf.constant(2.),name=\"b\")\n    result = a + b\n    stored  = tf.Variable(tf.constant(0.),name=\"stored_sum\")\n    assign_op=stored.assign(result)\n    val,_ = s.run([result,assign_op],{a:1.,b:2.})\n    print(val) # 3\n    val=s.run(result,{a:4.,b:5.})\n    print(val) # 9\n    print(stored.eval()) # ok, still 3 \n
\n", + "system": "" + }, + { + "instruction": "Incompatible shapes on tensorflow.equal() op for correct predictions evaluation", + "input": "", + "output": "

Well, after a lot of debugging, I found that my issue was due to a bad instantiation of the labels. Instead of creating arrays full of zeros and replacing one value with a one, I created them with random values! Stupid mistake. In case anyone is wondering what I did wrong and how I fixed it, here is the change I made.

\n\n

Anyway during all the debugging I made, to find this mistake, I found some useful information to debug this kind of problem:

\n\n
    \n
  1. For the cross-entropy declaration, TensorFlow's MNIST tutorial uses a formula that can lead to NaN values
  2. \n
\n\n

This formula is

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))\n
\n\n

Instead of this, I found two ways to declare it in a safer fashion:

\n\n
cross_entropy = -tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y_conv, 1e-10, 1.0)))\n
\n\n

or also:

\n\n
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logit, y_))\n
\n\n
    \n
  1. As mrry says, printing the shapes of the tensors can help to detect shape anomalies.
  2. \n
\n\n

To get the shape of a tensor, just call its get_shape() method like this:

\n\n
print \"W shape:\", W.get_shape()\n
\n\n
    \n
  1. user1111929 in this question uses a debug print that helped me to work out where the problem came from.
  2. \n
\n", + "system": "" + }, + { + "instruction": "Create color histogram of an image using tensorflow", + "input": "", + "output": "

I would use tf.unsorted_segment_sum, where the \"segment IDs\" are computed from the color values and the thing you sum is a tf.ones vector. Note that tf.unsorted_segment_sum is probably better thought of as \"bucket sum\". It implements dest[segment] += thing_to_sum -- exactly the operation you need for a histogram.

\n\n

In rough pseudocode (meaning I haven't run this):

\n\n
binned_values = tf.reshape(tf.floor(img_r * (NUM_BINS-1)), [-1])\nbinned_values = tf.cast(binned_values, tf.int32)\nones = tf.ones_like(binned_values, dtype=tf.int32)\ncounts = tf.unsorted_segment_sum(ones, binned_values, NUM_BINS)\n
\n\n

You could accomplish this in one pass instead of separating out the r, g, and b values with a split if you wanted to cleverly construct your \"ones\" to look like \"100100...\" for red, \"010010\" for green, etc., but I suspect it would be slower overall, and harder to read. I'd just do the split that you proposed above.
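For intuition, here is the same "bucket sum" idea in NumPy (illustrative only; NUM_BINS and the toy channel values are made up), where np.add.at plays the role of the unsorted segment sum:

```python
import numpy as np

# np.add.at performs counts[bin] += 1 for every pixel -- exactly the
# dest[segment] += thing_to_sum operation described above.
NUM_BINS = 4
img_r = np.array([0.0, 0.1, 0.3, 0.5, 0.9, 1.0])  # toy red channel in [0, 1]

binned_values = np.floor(img_r * (NUM_BINS - 1)).astype(np.int64)
counts = np.zeros(NUM_BINS, dtype=np.int64)
np.add.at(counts, binned_values, 1)

print(counts.tolist())  # [3, 1, 1, 1]
```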

\n", + "system": "" + }, + { + "instruction": "Tensorflow Convolution Neural Net - Training with a small dataset, applying random changes to Images", + "input": "", + "output": "

One thing to start with: instead of computing y_conv and then the cross-entropy, use the merged tf.nn.softmax_cross_entropy_with_logits operator. This may not solve your problem, but it's more numerically stable than the naive version in the Red Pill example.

\n\n

Second, try printing out the cross_entropy at every iteration.

\n\n
cross_entropy = .... (previous code here)\ncross_entropy = tf.Print(cross_entropy, [cross_entropy], \"Cross-entropy: \")\n
\n\n

to get an idea if it's going to infinity as the model progresses, or if it just jumps to inf or NaN. If it progressively blows up, then it's probably the learning rate. If it jumps, it could be a numerical boundary condition that could be solved as above. If it's there from the get-go, you may have an error in the way you're applying distortions that ends up feeding in horribly broken data in some way.

\n", + "system": "" + }, + { + "instruction": "TensorFlow checkpoint save and read", + "input": "", + "output": "

There is a subtle issue in your code: each time you call the train() function, more nodes are added to the same TensorFlow graph, for all the model variables and the rest of the neural network. This means that each time you construct a tf.train.Saver(), it includes all of the variables for the previous calls to train(). Each time you recreate the model, the variables are created with an extra _N suffix to give them a unique name:

\n\n
    \n
  1. Saver constructed with variables var_a, var_b.
  2. \n
  3. Saver constructed with variables var_a, var_b, var_a_1, var_b_1.
  4. \n
  5. Saver constructed with variables var_a, var_b, var_a_1, var_b_1, var_a_2, var_b_2.
  6. \n
  7. etc.
  8. \n
\n\n

The default behavior for a tf.train.Saver is to associate each variable with the name of the corresponding op. This means that var_a_1 won't be initialized from var_a, because they end up with different names.
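The naming behavior can be modelled in a few lines of Python (this is not TensorFlow's actual implementation, just an illustration): a second request for "var_a" gets the uniquified name "var_a_1", so a checkpoint entry saved under "var_a" no longer matches it.

```python
# A toy model of op-name uniquification: the first use of a name is kept
# as-is, and later uses get an _N suffix.
class NameScope:
    def __init__(self):
        self.counts = {}

    def unique(self, name):
        n = self.counts.get(name, 0)
        self.counts[name] = n + 1
        return name if n == 0 else "%s_%d" % (name, n)

scope = NameScope()
first_call = [scope.unique("var_a"), scope.unique("var_b")]
second_call = [scope.unique("var_a"), scope.unique("var_b")]
print(first_call)   # ['var_a', 'var_b']
print(second_call)  # ['var_a_1', 'var_b_1']
```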

\n\n

The solution is to create a new graph each time you call train(). The simplest way to fix it is to change your main program to create a new graph for each call to train() as follows:

\n\n
# First train\nwith tf.Graph().as_default():\n    train(False, 1)\n\n# Following train\nfor i in xrange(10):\n    with tf.Graph().as_default():\n        train(True, 10)\n
\n\n

...or, equivalently, you could move the with block inside the train() function.

\n", + "system": "" + }, + { + "instruction": "Caching a computed value as a constant in TensorFlow", + "input": "", + "output": "

Perhaps counter-intuitively, the simplest way to re-use beta_hat as a constant in subsequent steps would be to assign it to a tf.Variable:

\n\n
n, k = 100, 5\nX = tf.placeholder(dtype=tf.float32, shape=[None, k])\ny = tf.placeholder(dtype=tf.float32, shape=[None, 1])\n\nbeta = np.random.normal(size=(k, ))\ndata_X = np.random.normal(size=(n, k))\n\ndata_y = data_X.dot(beta)\ndata_y += np.random.normal(size=data_y.shape) / 3.0\ndata_y = np.atleast_2d(data_y).T\n\n# Convert to 32-bit precision.\ndata_X, data_y = np.float32(data_X), np.float32(data_y)\n\n# Compute the least squares solution.\nbeta_hat = tf.matmul(\n    tf.matmul(tf.matrix_inverse(tf.matmul(tf.transpose(X), X)),\n              tf.transpose(X)), y\n)\n\nbeta_hat_cached = tf.Variable(beta_hat)\n\n# Launch the graph\nsess = tf.Session()\n\nprint \"True beta: {}\".format(beta)\n# Run the initializer, which computes `beta_hat` once:\nsess.run(beta_hat_cached.initializer, feed_dict={X: data_X, y: data_y})\n# To access the value of `beta_hat`, \"run\" the variable to read its contents.\nprint \"Est. beta: {}\".format(sess.run(beta_hat_cached).ravel())\n\n# Use the cached version to compute predictions.\ny_pred = tf.matmul(X, beta_hat_cached)\nprint \"Predictions:\"\nprint sess.run(y_pred, feed_dict={X: data_X})\n
\n", + "system": "" + }, + { + "instruction": "Tensor flow install OSX", + "input": "", + "output": "

Yeah, as Hamed said above, it's better to install the binaries directly.

\n\n

However, if you do need to build from the source, then these steps should work:

\n\n\n", + "system": "" + }, + { + "instruction": "TensorFlow's ReluGrad claims input is not finite", + "input": "", + "output": "

The error is due to 0 * log(0) in the cross-entropy.

\n\n
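The failure mode can be reproduced in plain Python (a sketch, using `math.log` as a stand-in for `tf.log`):

```python
import math

def cross_entropy_term(y, y_conv, eps=1e-9):
    # y * log(y_conv), with a small epsilon so log(0) can never occur
    return -y * math.log(y_conv + eps)

# Without the epsilon, a predicted probability of exactly 0 blows up:
try:
    math.log(0.0)  # raises ValueError here; in TensorFlow it yields -inf/NaN
    blew_up = False
except ValueError:
    blew_up = True

print(blew_up)                       # True
print(cross_entropy_term(0.0, 0.0))  # 0.0: the 0 * log(0) case is now harmless
```

The same small epsilon is what the fix below adds inside the `tf.reduce_sum`.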

This can be avoided by:

\n\n
cross_entropy = -tf.reduce_sum(y * tf.log(yconv + 1e-9))\n
\n", + "system": "" + }, + { + "instruction": "ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory", + "input": "", + "output": "

This error is being raised because the loader cannot find version 7.0 of the CUDA runtime on your system. TensorFlow requires CUDA 7.0.

\n\n

From the path in your question (/usr/local/cuda-5.5/...) it looks like you have CUDA 5.5 installed. The solution is to upgrade your CUDA runtime to version 7.0, which is available from NVIDIA here.

\n", + "system": "" + }, + { + "instruction": "Would TensorFlow utilize GPU on a Mac if installed on a VM?", + "input": "", + "output": "

Probably not. VirtualBox, for example, does not support PCI Passthrough on a MacOS host, only a Linux host (and even then, I'd... uh, not get my hopes up). MacOS ends up so tightly integrated with its GPU(s) that I'd be very dubious that any VM can do it at this point.

\n", + "system": "" + }, + { + "instruction": "Why I don't have permissions to remove six while installing a pip package?", + "input": "", + "output": "

Ensure that your pip belongs to Python 2.x by checking the Location field:

\n\n
$ pip show pip\n---\nMetadata-Version: 2.0\nName: pip\nVersion: 7.1.2\nSummary: The PyPA recommended tool for installing Python packages.\nHome-page: https://pip.pypa.io/\nAuthor: The pip developers\nAuthor-email: python-virtualenv@groups.google.com\nLicense: MIT\nLocation: /usr/local/lib/python2.7/dist-packages\nRequires: \n
\n\n

Then:

\n\n
sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow recommended system specifications?", + "input": "", + "output": "

The TensorFlow requirements are listed here, but these do not recommend a particular operating system or glibc version.

\n\n

The best-supported operating systems are Ubuntu 14.04 64-bit, and Mac OS X 10.10 (Yosemite) and later. The current limiting factor is the set of supported operating systems for Bazel, which we use to make the binary packages. You may be able to install Bazel from source, and then install TensorFlow from source, to get around these issues. Many users find it easier to install TensorFlow in a Docker container to avoid this problem.

\n", + "system": "" + }, + { + "instruction": "When using tensorboard, how to summarize a loss that is computed over several minibatches?", + "input": "", + "output": "

You could add a Variable that is updated on each sess.Run call and have the summary track the value of the Variable.

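Framework aside, the accumulation itself is just a running mean; a minimal sketch (the RunningMean helper is hypothetical - in TensorFlow you would assign its value into the tracked Variable before writing the summary):

```python
class RunningMean:
    """Accumulates per-minibatch losses; read .value once per epoch for the summary."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, loss):  # call once after each training step
        self.total += loss
        self.count += 1

    @property
    def value(self):
        return self.total / self.count

acc = RunningMean()
for minibatch_loss in [0.9, 0.7, 0.5]:  # stand-ins for per-step loss values
    acc.update(minibatch_loss)
print(acc.value)  # 0.7 - the number you'd assign to the summary Variable
```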
\n", + "system": "" + }, + { + "instruction": "Stable results with TensorFlow", + "input": "", + "output": "

The API method tf.set_random_seed() can be used to set a random seed that will be used in all TensorFlow random operations (including the usual random weight initializers and tf.RandomShuffleQueue).

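The effect is the same as seeding any PRNG; an illustration with Python's stdlib random module (not TensorFlow itself):

```python
import random

random.seed(42)  # plays the role tf.set_random_seed() plays for a TF graph
a = [random.random() for _ in range(3)]

random.seed(42)  # re-seeding reproduces the exact same stream
b = [random.random() for _ in range(3)]

print(a == b)  # True: identical "random" draws on every run
```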
\n", + "system": "" + }, + { + "instruction": "stopping gradient optimizer in TensorFlow", + "input": "", + "output": "

That's correct - the TensorFlow tf.train.Optimizer classes expose an operation that you can run to take one (gradient descent-style) step, but they do not monitor the current value of the cost or decide when to stop, so you may see increasing cost once the network begins to overfit.

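Since the optimizer won't monitor the cost for you, the stopping logic has to live in your own training loop; a minimal early-stopping sketch in plain Python (the loss list stands in for what each training step would return):

```python
def train_with_early_stopping(losses, patience=2):
    """Hypothetical loop: stop once the cost fails to improve `patience` times in a row."""
    best = float("inf")
    bad_steps = 0
    taken = 0
    for loss in losses:  # stand-in for running one optimizer step per iteration
        taken += 1
        if loss < best:
            best, bad_steps = loss, 0
        else:
            bad_steps += 1
            if bad_steps >= patience:
                break  # cost stopped improving: likely overfitting
    return taken, best

steps, best = train_with_early_stopping([0.9, 0.5, 0.4, 0.45, 0.47, 0.3])
print(steps, best)  # 5 0.4 - training halts before the later steps run
```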
\n", + "system": "" + }, + { + "instruction": "Initializing a shared variable with unknown shape", + "input": "", + "output": "

In case anyone else stumbles upon this question: based on the docs here, the validate_shape parameter is now supported as desired.

\n", + "system": "" + }, + { + "instruction": "Multivariate time-series RNN using Tensorflow. Is this possible with an LSTM cell or similar?", + "input": "", + "output": "

It depends on exactly what you mean, but yes, it should be possible. If you describe more specifically what your input and target data look like, somebody may be able to help. You can generally have sequential continuous or categorical input data and sequential continuous or categorical output data, or a mix of those. I would suggest you look at the tutorials and try out a few things, then ask again here.

\n", + "system": "" + }, + { + "instruction": "TensorFlow MLP not training XOR", + "input": "", + "output": "

In the meantime, with the help of a colleague, I was able to fix my solution and wanted to post it for completeness. My solution works with cross entropy and without altering the training data. Additionally, it has the desired input shape of (1, 2) and the output is a scalar.

\n\n

It makes use of an AdamOptimizer which decreases the error much faster than a GradientDescentOptimizer. See this post for more information (& questions^^) about the optimizer.

\n\n

In fact, my network produces reasonably good results in only 400-800 learning steps.

\n\n

After 2000 learning steps the output is nearly \"perfect\":

\n\n
step: 2000\nloss: 0.00103311243281\n\ninput: [0.0, 0.0] | output: [[ 0.00019799]]\ninput: [0.0, 1.0] | output: [[ 0.99979786]]\ninput: [1.0, 0.0] | output: [[ 0.99996307]]\ninput: [1.0, 1.0] | output: [[ 0.00033751]]\n
\n\n
\n\n
import tensorflow as tf    \n\n#####################\n# preparation stuff #\n#####################\n\n# define input and output data\ninput_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]  # XOR input\noutput_data = [[0.], [1.], [1.], [0.]]  # XOR output\n\n# create a placeholder for the input\n# None indicates a variable batch size for the input\n# one input's dimension is [1, 2] and output's [1, 1]\nn_input = tf.placeholder(tf.float32, shape=[None, 2], name=\"n_input\")\nn_output = tf.placeholder(tf.float32, shape=[None, 1], name=\"n_output\")\n\n# number of neurons in the hidden layer\nhidden_nodes = 5\n\n\n################\n# hidden layer #\n################\n\n# hidden layer's bias neuron\nb_hidden = tf.Variable(tf.random_normal([hidden_nodes]), name=\"hidden_bias\")\n\n# hidden layer's weight matrix initialized with a uniform distribution\nW_hidden = tf.Variable(tf.random_normal([2, hidden_nodes]), name=\"hidden_weights\")\n\n# calc hidden layer's activation\nhidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)\n\n\n################\n# output layer #\n################\n\nW_output = tf.Variable(tf.random_normal([hidden_nodes, 1]), name=\"output_weights\")  # output layer's weight matrix\noutput = tf.sigmoid(tf.matmul(hidden, W_output))  # calc output layer's activation\n\n\n############\n# learning #\n############\ncross_entropy = -(n_output * tf.log(output) + (1 - n_output) * tf.log(1 - output))\n# cross_entropy = tf.square(n_output - output)  # simpler, but also works\n\nloss = tf.reduce_mean(cross_entropy)  # mean the cross_entropy\noptimizer = tf.train.AdamOptimizer(0.01)  # take a gradient descent for optimizing with a \"stepsize\" of 0.1\ntrain = optimizer.minimize(loss)  # let the optimizer train\n\n\n####################\n# initialize graph #\n####################\ninit = tf.initialize_all_variables()\n\nsess = tf.Session()  # create the session and therefore the graph\nsess.run(init)  # initialize all variables  \n\n#####################\n# 
train the network #\n#####################\nfor epoch in xrange(0, 2001):\n    # run the training operation\n    cvalues = sess.run([train, loss, W_hidden, b_hidden, W_output],\n                       feed_dict={n_input: input_data, n_output: output_data})\n\n    # print some debug stuff\n    if epoch % 200 == 0:\n        print(\"\")\n        print(\"step: {:>3}\".format(epoch))\n        print(\"loss: {}\".format(cvalues[1]))\n        # print(\"b_hidden: {}\".format(cvalues[3]))\n        # print(\"W_hidden: {}\".format(cvalues[2]))\n        # print(\"W_output: {}\".format(cvalues[4]))\n\n\nprint(\"\")\nprint(\"input: {} | output: {}\".format(input_data[0], sess.run(output, feed_dict={n_input: [input_data[0]]})))\nprint(\"input: {} | output: {}\".format(input_data[1], sess.run(output, feed_dict={n_input: [input_data[1]]})))\nprint(\"input: {} | output: {}\".format(input_data[2], sess.run(output, feed_dict={n_input: [input_data[2]]})))\nprint(\"input: {} | output: {}\".format(input_data[3], sess.run(output, feed_dict={n_input: [input_data[3]]})))\n
\n", + "system": "" + }, + { + "instruction": "View Tensorboard via Chorme on Windows", + "input": "", + "output": "

I am having a similar problem. \nView Tensorboard on Docker on Google Cloud

\n\n

I used:

\n\n

docker run -p 127.0.0.1:$HOSTPORT:6006 --name ai-unicorn -t XXXXXXXXXXXX

\n\n

(The image number is replaced by X... here)

\n", + "system": "" + }, + { + "instruction": "Cholesky factor differentiation in TensorFlow", + "input": "", + "output": "

We discussed this a bit in the answers and comments to this question: TensorFlow cholesky decomposition.\nIt might (?) be possible to port the Theano implementation of CholeskyGrad, provided its semantics are actually what you want. Theano's is based upon Smith's \"Differentiation of the Cholesky Algorithm\".

\n\n

If you implement it as a C++ operation that the Python just calls into, you have unrestricted access to all the looping constructs you could desire, and anything Eigen provides. If you wanted to do it in pure tensorflow, you could use the control flow ops, such as tf.control_flow_ops.While to loop.

\n\n

Once you know the actual formula you want to apply, the answer here: matrix determinant differentiation in tensorflow\nshows how to implement and register a gradient for an op in tensorflow.

\n\n

You could also create an issue on github to request this feature, though, of course, you'll probably get it faster if you implement it yourself and then send in a pull request. :)

\n", + "system": "" + }, + { + "instruction": "Tensorflow translate.py import error: No module named translate", + "input": "", + "output": "

The best way to do this is to navigate to the folder containing the translate module and run it from there. You can also download the translate module to any other place and run it. However, don't forget to change the above lines to:

\n\n
from translate import data_utils\nfrom translate import seq2seq_model\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow dependencies needed. How to run TensorFlow on Windows", + "input": "", + "output": "

There are now three main options for building and/or running TensorFlow on Windows:

\n\n\n", + "system": "" + }, + { + "instruction": "writing a custom cost function in tensorflow", + "input": "", + "output": "

At present, tensorflow can't gather on axes other than the first - it's a requested feature.

\n\n

But for what you want to do in this specific situation, you can transpose, then gather 0,2,4, and then transpose back. It won't be crazy fast, but it works:

\n\n
tf.transpose(tf.gather(tf.transpose(y), [0,2,4]))\n
\n\n
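The same transpose - gather - transpose round trip, mirrored with plain Python lists to show why it selects columns:

```python
def transpose(m):
    # swap rows and columns, like tf.transpose on a 2-D tensor
    return [list(col) for col in zip(*m)]

def gather(m, idx):
    # pick rows by index along axis 0, like tf.gather
    return [m[i] for i in idx]

y = [[0, 1, 2, 3, 4],
     [5, 6, 7, 8, 9]]

# gather columns 0, 2, 4 by round-tripping through a transpose
picked = transpose(gather(transpose(y), [0, 2, 4]))
print(picked)  # [[0, 2, 4], [5, 7, 9]]
```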

This is a useful workaround for some of the limitations in the current implementation of gather.

\n\n

But it is also correct that you can't use a numpy slice on a tensorflow node - you can run it and slice the output - and that you need to initialize those variables before you run. :) You're mixing tf and np in a way that doesn't work:

\n\n
x = tf.Something(...)\n
\n\n

is a tensorflow graph object. Numpy has no idea how to cope with such objects.

\n\n
foo = sess.run(x)\n
\n\n

is back to an object python can handle.

\n\n

You typically want to keep your loss calculation in pure tensorflow, so do the cross and other functions in tf. You'll probably have to do the arccos the long way, as tf doesn't have a function for it.

\n", + "system": "" + }, + { + "instruction": "How to run tensor flow seq2seq demo", + "input": "", + "output": "

There are two ways to run the script:

\n\n

1) separate the script arguments with -- as part of bazel run

\n\n
bazel run -c opt //tensorflow/models/rnn/translate:translate -- \\\n--data_dir ./data_dir --train_dir ./checkpoints_directory \\\n--en_vocab_size=40000 --fr_vocab_size=40000\n
\n\n

2) build and then run from ./bazel-bin/:

\n\n
bazel build -c opt //tensorflow/models/rnn/translate:translate\n\n./bazel-bin/tensorflow/models/rnn/translate/translate \\\n--data_dir ./data_dir --train_dir ./checkpoints_directory \\\n--en_vocab_size=40000 --fr_vocab_size=40000\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow MLP not training XOR", + "input": "", + "output": "

In the meantime, with the help of a colleague, I was able to fix my solution and wanted to post it for completeness. My solution works with cross entropy and without altering the training data. Additionally, it has the desired input shape of (1, 2) and the output is a scalar.

\n\n

It makes use of an AdamOptimizer which decreases the error much faster than a GradientDescentOptimizer. See this post for more information (& questions^^) about the optimizer.

\n\n

In fact, my network produces reasonably good results in only 400-800 learning steps.

\n\n

After 2000 learning steps the output is nearly \"perfect\":

\n\n
step: 2000\nloss: 0.00103311243281\n\ninput: [0.0, 0.0] | output: [[ 0.00019799]]\ninput: [0.0, 1.0] | output: [[ 0.99979786]]\ninput: [1.0, 0.0] | output: [[ 0.99996307]]\ninput: [1.0, 1.0] | output: [[ 0.00033751]]\n
\n\n
\n\n
import tensorflow as tf    \n\n#####################\n# preparation stuff #\n#####################\n\n# define input and output data\ninput_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]  # XOR input\noutput_data = [[0.], [1.], [1.], [0.]]  # XOR output\n\n# create a placeholder for the input\n# None indicates a variable batch size for the input\n# one input's dimension is [1, 2] and output's [1, 1]\nn_input = tf.placeholder(tf.float32, shape=[None, 2], name=\"n_input\")\nn_output = tf.placeholder(tf.float32, shape=[None, 1], name=\"n_output\")\n\n# number of neurons in the hidden layer\nhidden_nodes = 5\n\n\n################\n# hidden layer #\n################\n\n# hidden layer's bias neuron\nb_hidden = tf.Variable(tf.random_normal([hidden_nodes]), name=\"hidden_bias\")\n\n# hidden layer's weight matrix initialized with a uniform distribution\nW_hidden = tf.Variable(tf.random_normal([2, hidden_nodes]), name=\"hidden_weights\")\n\n# calc hidden layer's activation\nhidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)\n\n\n################\n# output layer #\n################\n\nW_output = tf.Variable(tf.random_normal([hidden_nodes, 1]), name=\"output_weights\")  # output layer's weight matrix\noutput = tf.sigmoid(tf.matmul(hidden, W_output))  # calc output layer's activation\n\n\n############\n# learning #\n############\ncross_entropy = -(n_output * tf.log(output) + (1 - n_output) * tf.log(1 - output))\n# cross_entropy = tf.square(n_output - output)  # simpler, but also works\n\nloss = tf.reduce_mean(cross_entropy)  # mean the cross_entropy\noptimizer = tf.train.AdamOptimizer(0.01)  # take a gradient descent for optimizing with a \"stepsize\" of 0.1\ntrain = optimizer.minimize(loss)  # let the optimizer train\n\n\n####################\n# initialize graph #\n####################\ninit = tf.initialize_all_variables()\n\nsess = tf.Session()  # create the session and therefore the graph\nsess.run(init)  # initialize all variables  \n\n#####################\n# 
train the network #\n#####################\nfor epoch in xrange(0, 2001):\n    # run the training operation\n    cvalues = sess.run([train, loss, W_hidden, b_hidden, W_output],\n                       feed_dict={n_input: input_data, n_output: output_data})\n\n    # print some debug stuff\n    if epoch % 200 == 0:\n        print(\"\")\n        print(\"step: {:>3}\".format(epoch))\n        print(\"loss: {}\".format(cvalues[1]))\n        # print(\"b_hidden: {}\".format(cvalues[3]))\n        # print(\"W_hidden: {}\".format(cvalues[2]))\n        # print(\"W_output: {}\".format(cvalues[4]))\n\n\nprint(\"\")\nprint(\"input: {} | output: {}\".format(input_data[0], sess.run(output, feed_dict={n_input: [input_data[0]]})))\nprint(\"input: {} | output: {}\".format(input_data[1], sess.run(output, feed_dict={n_input: [input_data[1]]})))\nprint(\"input: {} | output: {}\".format(input_data[2], sess.run(output, feed_dict={n_input: [input_data[2]]})))\nprint(\"input: {} | output: {}\".format(input_data[3], sess.run(output, feed_dict={n_input: [input_data[3]]})))\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow classification with extremely unbalanced dataset", + "input": "", + "output": "

You could try changing the cost function so that a false positive output would be penalized more heavily than a false negative.

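A sketch of such a reweighted cost in plain Python (the pos_weight knob is hypothetical; scale whichever term corresponds to the error type you want to discourage):

```python
import math

def weighted_cross_entropy(y_true, y_pred, pos_weight=10.0, eps=1e-9):
    """Binary cross-entropy with the positive-class term scaled by pos_weight."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        total += -(pos_weight * t * math.log(p + eps)
                   + (1 - t) * math.log(1 - p + eps))
    return total / len(y_true)

# Missing a rare positive now costs ~10x more than raising a false alarm:
miss_positive = weighted_cross_entropy([1.0], [0.01])
false_alarm = weighted_cross_entropy([0.0], [0.99])
print(miss_positive > false_alarm)  # True
```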
\n", + "system": "" + }, + { + "instruction": "How to to find location (ROI) of a recognized object in tensorflow", + "input": "", + "output": "

Unfortunately that placeholder is not populated, and actually determining the region of interest of an object is still a hard research problem. We don't currently have a released example of a model that can do that in TensorFlow, but if you want to look through the research, the term to look for is 'localization'.

\n\n

There's a related question you might find helpful here:

\n\n

Does Convolutional Neural Network possess localization abilities on images?

\n", + "system": "" + }, + { + "instruction": "TensorFlow LSTM Generative Model", + "input": "", + "output": "

The issue here seems to be the m.input_data: x mapping in the feed_dict passed session.run(). In this case, TensorFlow expects that x is a numpy array (or some object that can be implicitly converted to a numpy array), but the value is a TensorFlow Tensor (the result of tf.zeros_like()).

\n\n

Fortunately, the solution is simple. Replace x = tf.zeros_like(m.input_data) with the following:

\n\n
x = tf.zeros_like(m.input_data).eval()\n
\n\n

...which ensures that x is converted to a numpy array.

\n\n

(Note that a more direct way to achieve this would be to construct the initial x as a numpy array of the appropriate size.)

\n", + "system": "" + }, + { + "instruction": "What is a tensorflow float ref?", + "input": "", + "output": "

FYI.\nI got a similar error and mine was:

\n\n
\n

node GradientDescent/update_input/ApplyGradientDescent was passed float from _arg_input_0_1:0 incompatible with expected float_ref.

\n
\n\n

This happened because somewhere in my node-tree I had a tf.Variable instead of a tf.placeholder. After replacing the variable with the placeholder, it worked.

\n", + "system": "" + }, + { + "instruction": "Tensorflow multivariate linear regression not converging", + "input": "", + "output": "

It does seem to be a problem with the learning rate: 0.03 may be too high depending on what your data looks like. Also, you probably want to create your graph separately from the session in a more explicit way, or even use the normal equations to reach the optimal solution without having to iterate, if your dataset has mid/low dimensionality. Here I posted some examples that you may hopefully find helpful! Also, the TF tutorials cover it well (search for \"Complete program\" on that page).

\n\n
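The normal-equations route mentioned above, reduced to a single feature with no bias term so it fits in plain Python (toy numbers, purely illustrative):

```python
# Closed-form least squares for one feature without a bias term:
# theta = sum(x*y) / sum(x*x)  -- no learning rate, no iteration
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]  # roughly y = 2x plus a little noise

theta = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(theta)  # ~2.01, recovered in one shot
```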

But regarding your code, here is a version that worked for me: I changed some deprecated functions, and basically set the learning rate to the much lower value alpha=1e-8, which (on the synthetic dataset also generated in the code) seems to converge:

\n\n
test accuracy 2176.11\ntest accuracy 1898.6\ntest accuracy 1663.69\ntest accuracy 1458.53\ntest accuracy 1287.57\ntest accuracy 1116.9\ntest accuracy 969.474\ntest accuracy 841.028\ntest accuracy 738.592\ntest accuracy 649.891\ntest accuracy 565.188\ntest accuracy 495.33\ntest accuracy 438.351\ntest accuracy 381.161\ntest accuracy 333.213\ntest accuracy 289.575\ntest accuracy 254.394\ntest accuracy 222.836\ntest accuracy 197.36\ntest accuracy 172.788\ntest accuracy 152.251\ntest accuracy 132.664\ntest accuracy 115.982\ntest accuracy 101.021\nfinal test accuracy 90.2555\n
\n\n
\n\n

CODE:

\n\n
import tensorflow as tf\nimport numpy as np\n\n\n# generate some dataset\nDIMENSIONS = 5\nDS_SIZE = 5000\nTRAIN_RATIO = 0.5 # 50% of the dataset isused for training\n_train_size = int(DS_SIZE*TRAIN_RATIO)\n_test_size = DS_SIZE - _train_size\nf = lambda(x): sum(x) # the \"true\" function: f = 0 + 1*x1 + 1*x2 + 1*x3 ...\nnoise = lambda: np.random.normal(0,10) # some noise\n# training globals\nLAMBDA = 1e6 # L2 regularization factor\n# generate the dataset, the labels and split into train/test\nds = [[np.random.rand()*1000 for d in range(DIMENSIONS)] for _ in range(DS_SIZE)]\nds = [([1]+x, [f(x)+noise()]) for x in ds] # add x[0]=1 dimension and labels\nnp.random.shuffle(ds)\ntrain_data, train_labels = zip(*ds[0:_train_size])\ntest_data, test_labels = zip(*ds[_train_size:])\n\n\n\ndef normalize_data(matrix):\n    averages = np.average(matrix,0)\n    mins = np.min(matrix,0)\n    maxes = np.max(matrix,0)\n    ranges = maxes - mins\n    return ((matrix - averages)/ranges)\n\ndef run_regression(X, Y, X_test, Y_test, lambda_value = 0.1, normalize=False, batch_size=10, alpha=1e-8):\n    x_train = normalize_data(X) if normalize else X\n    y_train = Y\n    x_test = X_test\n    y_test = Y_test\n    session = tf.Session()\n    # Calculate number of features for X and Y\n    x_features_length = len(X[0])\n    y_features_length = len(Y[0])\n    # Build Tensorflow graph parts\n    x = tf.placeholder('float', [None, x_features_length], name=\"X\")\n    y = tf.placeholder('float', [None, y_features_length], name=\"Y\")\n    theta = tf.Variable(tf.random_normal([x_features_length, y_features_length], stddev=0.01), name=\"Theta\")\n    lambda_val = tf.constant(lambda_value)\n    # Trying to implement this way http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex5/ex5.html\n    y_predicted = tf.matmul(x, theta, name=\"y_predicted\")\n    #regularization_cost_part = tf.cast(tf.multiply(lambda_val,tf.reduce_sum(tf.pow(theta,2)), 
name=\"regularization_param\"), 'float')\n    #polynomial_cost_part = tf.reduce_sum(tf.pow(tf.subtract(y_predicted, y), 2), name=\"polynomial_sum\")\n    # Set up some summary info to debug\n    with tf.name_scope('cost') as scope:\n        #cost_func = tf.multiply(tf.cast(1/(2*batch_size), 'float'), tf.cast(tf.add(polynomial_cost_part, regularization_cost_part), 'float'))\n        cost_func = (tf.nn.l2_loss(y_predicted - y)+lambda_val*tf.nn.l2_loss(theta))/float(batch_size)\n        #DEPRECATED*** cost_summary = tf.scalar_summary(\"cost\", cost_func)\n        cost_summary = tf.summary.scalar('cost', cost_func)# Add a scalar summary for the snapshot loss.\n    training_func = tf.train.GradientDescentOptimizer(alpha).minimize(cost_func)\n    with tf.name_scope(\"test\") as scope:\n        correct_prediction = tf.subtract(tf.cast(1, 'float'), tf.reduce_mean(tf.subtract(y_predicted, y)))\n        accuracy = tf.cast(correct_prediction, \"float\")\n        #DEPRECATED*** accuracy_summary = tf.scalar_summary(\"accuracy\", accuracy)\n        #accuracy_summary = tf.summary.scalar(\"accuracy\", accuracy)\n    saver = tf.train.Saver()\n    #DEPRECATED*** merged = tf.merge_all_summaries()\n    merged = tf.summary.merge_all()\n    #DEPRECATED*** writer = tf.train.SummaryWriter(\"/tmp/football_logs\", session.graph_def)\n    writer = tf.summary.FileWriter(\"/tmp/football_logs\", session.graph)\n    #DEPRECATED*** init = tf.initialize_all_variables()\n    init = tf.global_variables_initializer()\n    session.run(init)\n    for i in range(1, (len(x_train)/batch_size)):\n        session.run(training_func, feed_dict={x: x_train[i*batch_size:i*batch_size+batch_size], y: y_train[i*batch_size:i*batch_size+batch_size]})\n        if i % batch_size == 0:\n            print \"test accuracy %g\"%session.run(accuracy, feed_dict={x: x_test, y: y_test})\n            #result = session.run([merged, accuracy], feed_dict={x: x_test, y: y_test})\n            # writer.add_summary(result[0], i)\n    
        # print \"step %d, training accuracy %g\"%(i, result[1])\n            #writer.flush()\n    print \"final test accuracy %g\"%session.run(accuracy, feed_dict={x: x_test, y: y_test})\n    # save_path = saver.save(session, \"/tmp/football.ckpt\")\n    # print \"Model saved in file: \", save_path\n    session.close()\n\nrun_regression(train_data, train_labels, test_data, test_labels, normalize=False, alpha=1e-8)\n
\n\n

As I said, you will probably want to change the structure to favor readability and scalability, but hopefully this helps!

\n\n

Cheers,\nAndres

\n", + "system": "" + }, + { + "instruction": "No module named tensorflow.python.platform", + "input": "", + "output": "

This helped me.

\n
PYTHONPATH="${PYTHONPATH}:/usr/local/lib/python2.7/dist-packages/"\nexport PYTHONPATH\n\n# Make sure to check your path with this:\necho $PYTHONPATH\n
\n", + "system": "" + }, + { + "instruction": "How do I define my own operators in TensorFlow", + "input": "", + "output": "

As the comment suggested, there is a how-to guide for adding an op to TensorFlow. This guide covers adding a new op that is implemented in C++. In general, you should do this in the following situations:

\n\n\n", + "system": "" + }, + { + "instruction": "How to filter tensor from queue based on some predicate in tensorflow?", + "input": "", + "output": "

The most straightforward way to do this is to dequeue a batch, run it through the predicate test, use tf.where to produce a dense vector of the indices that match the predicate, use tf.gather to collect the results, and enqueue that batch into a second queue. If you want that to happen automatically, you can start a queue runner on the second queue - the easiest way to do that is to use tf.train.batch:

\n\n

Example:

\n\n
import numpy as np\nimport tensorflow as tf\n\na = tf.constant(np.array([5, 1, 9, 4, 7, 0], dtype=np.int32))\n\nq = tf.FIFOQueue(6, dtypes=[tf.int32], shapes=[])\nenqueue = q.enqueue_many([a])\ndequeue = q.dequeue_many(6)\npredmatch = tf.less(dequeue, [5])\nselected_items = tf.reshape(tf.where(predmatch), [-1])\nfound = tf.gather(dequeue, selected_items)\n\nsecondqueue = tf.FIFOQueue(6, dtypes=[tf.int32], shapes=[])\nenqueue2 = secondqueue.enqueue_many([found])\ndequeue2 = secondqueue.dequeue_many(3) # XXX, hardcoded\n\nwith tf.Session() as sess:\n  sess.run(tf.global_variables_initializer())\n  sess.run(enqueue)  # Fill the first queue\n  sess.run(enqueue2) # Filter, push into queue 2\n  print sess.run(dequeue2) # Pop items off of queue2\n
\n\n

The predicate produces a boolean vector; the tf.where produces a dense vector of the indexes of the true values, and the tf.gather collects items from your original tensor based upon those indexes.

\n\n
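The predicate - where - gather chain, mirrored with stdlib Python on the same numbers to make the data flow explicit:

```python
def where(bools):
    # like tf.where on a 1-D boolean vector: indices of the True entries
    return [i for i, b in enumerate(bools) if b]

def gather(xs, idx):
    # like tf.gather: collect elements at the given indices
    return [xs[i] for i in idx]

batch = [5, 1, 9, 4, 7, 0]
pred = [x < 5 for x in batch]       # the predicate test
kept = gather(batch, where(pred))   # what gets pushed into the second queue
print(kept)  # [1, 4, 0]
```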

A lot of things are hardcoded in this example that you'd need to make not-hardcoded in reality, of course, but hopefully it shows the structure of what you're trying to do (create a filtering pipeline). In practice, you'd want QueueRunners on there to keep things churning automatically. Using tf.train.batch is very useful to handle that automatically -- see Threading and Queues for more detail.

\n", + "system": "" + }, + { + "instruction": "TensorFlow core debug; missing debug symbols", + "input": "", + "output": "

TensorFlow loads a library called _pywrap_tensorflow.so that includes its C API (as defined in tensorflow/tensorflow/core/client/tensor_c_api.cc ).

\n\n

In my case the library loaded during runtime was located in
\n~/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so
\nbut the library that was built from the local source code was located in ~/.cache/bazel/_bazel_<username>/dbb3c677efbf9967e464a5c6a1e69337/tensorflow/bazel-out/local_linux-dbg/bin/tensorflow/python/_pywrap_tensorflow.so.

\n\n

Copying the locally built library over the loaded library, and attaching to the python process as defined in the question solved the problem.

\n", + "system": "" + }, + { + "instruction": "Understanding Variable scope example in Tensorflow", + "input": "", + "output": "

One has to create the set of variables only once for the whole training (and testing) run. The goal of variable scopes is to allow for modularization of subsets of parameters, such as those belonging to layers (e.g. when the architecture of a layer is repeated, the same names can be used within each layer scope).

\n\n

In your example you create parameters only in the model function. You can print out your variable names to see that they are assigned to the specified scope:

\n\n
from __future__ import print_function\n\nX = tf.placeholder(\"float\") # create symbolic variables\nY = tf.placeholder(\"float\")\nprint(\"X:\", X.name)\nprint(\"Y:\", Y.name)\n\ndef model(X):\n    with tf.variable_scope(\"param\"):\n        w = tf.Variable(0.0, name=\"weights\") # create a shared variable (like theano.shared) for the weight matrix\n    print(\"w:\", w.name)\n    return tf.mul(X, w) \n
\n\n

The call to sess.run(train_op, feed_dict={X: x, Y: y}) only evaluates the value of train_op given the provided values of X and Y. No new variables (incl. parameters) are created there; therefore, it has no effect. You can make sure the variable names stay the same by again printing them out:

\n\n
with tf.variable_scope(\"train\"):\n    print(\"X:\", X.name)\n    print(\"Y:\", Y.name)\n    for i in range(100):\n        for (x, y) in zip(trX, trY):\n            sess.run(train_op, feed_dict={X: x, Y: y})\n
\n\n

You will see that variable names stay the same, as they are already initialized.

\n\n

If you'd like to retrieve a variable using its scope, you need to use get_variable within a tf.variable_scope enclosure:

\n\n
with tf.variable_scope(\"param\"):\n    w = tf.get_variable(\"weights\", [1])\nprint(\"w:\", w.name)\n
\n", + "system": "" + }, + { + "instruction": "Tensorboard doesn't display the graph (HTML error)", + "input": "", + "output": "

Tensorboard is only working in Google Chrome as far as I can tell. I tried Safari, Firefox and Chrome on my Mac and it only showed the graph properly in Chrome.

\n", + "system": "" + }, + { + "instruction": "Transfer parameters from training to inference graph", + "input": "", + "output": "

When you construct your inference graph, you should be able to construct a tf.train.Saver() with no arguments, and it will construct the appropriate save and restore ops for you. You should then be able to call saver.restore(sess, filename) to restore the variables from a file.

\n\n

N.B. For the constructor to work with no arguments, (i) the variables in the inference graph (i.e. the result of tf.all_variables()) must be a subset of the variables in the training graph, and (ii) the corresponding variables must have exactly the same names. If either of these conditions doesn't hold, you will need to specify a variable name map to the saver constructor. (However, if self.CreateTrainingGraph() calls self.CreateInferenceGraph() before creating any other variables, and doesn't do anything different with tf.name_scope(), then this should be fine.)

\n\n

(The saver_def argument is infrequently used when you load in a graph—for example using tf.import_graph_def()—that already contains the save and restore ops from a previously created Saver. It will then create a Saver in your Python program that reuses those ops, and you will get a mysterious error if the graph does not contain those ops.)

\n", + "system": "" + }, + { + "instruction": "convert Caffe train.txt to Tensorflow", + "input": "", + "output": "

I'm assuming that you want to obtain a batch of identically-sized images with numeric labels. We'll use tf.decode_csv() to parse the text, tf.read_file() to load the JPEG data as a string, tf.image.decode_jpeg() to parse it into a dense tensor, and finally tf.train.batch() to build the parsed data into a batch of images. Many of these functions have a lot of options to configure, so see the documentation for further customization details.

\n\n
# Set options here for whether to repeat, etc.\nfilename_producer = tf.string_input_producer([\"train.txt\"], ...)\n\n# Read lines from the file, one at a time.\nline_reader = tf.TextLineReader()\n# read() returns a (key, value) pair; the value is the line itself.\n_, next_line = line_reader.read(filename_producer)\n\n# Parse line into a filename and an integer label.\nimage_filename, label = tf.decode_csv(\n    next_line, [tf.constant([], dtype=tf.string), tf.constant([], dtype=tf.int32)],\n    field_delim=\" \")\n\n# Read the image as a string.\nimage_bytes = tf.read_file(image_filename)\n\n# Convert the image into a 3-D tensor (height x width x channels).\nimage_tensor = tf.image.decode_jpeg(image_bytes, ...)\n\n# OPTIONAL: Resize your images to a standard size if they are not already.\nHEIGHT = ...\nWIDTH = ...\nimage_tensor = tf.image.resize_image_with_crop_or_pad(image_tensor, HEIGHT, WIDTH)\n\n# Create a batch of images.\nBATCH_SIZE = 32\nimages, labels = tf.train.batch([image_tensor, label], BATCH_SIZE, ...)\n\n# [...build the rest of your model...]\n</code></pre>
\n\n

This example makes extensive use of TensorFlow prefetching to load the examples. The TensorFlow documentation has a how-to that explains how to use the prefetching feature, but the most important thing to note is that you must call tf.train.start_queue_runners() at the start of your session to begin prefetching:

\n\n
sess = tf.Session()\n\n# You must execute this statement to begin prefetching data.\ntf.train.start_queue_runners(sess)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow - The source code of the tensorflow_inception_graph.pb", + "input": "", + "output": "

This code has not yet been released. As the release announcement for the latest ImageNet model mentions, the team is working on releasing the complete Python training framework for this type of model, including the code for building the graph.

\n", + "system": "" + }, + { + "instruction": "Caching Computations in TensorFlow", + "input": "", + "output": "

The canonical way to share computed values across sess.Run() calls is to use a Variable. In this case, you could set up your graph so that when the Placeholders are fed, they compute a new value of the representation that is saved into a Variable. A separate portion of the graph reads those Variables to compute the loss. This will not work if you need to compute gradients through the part of the graph that computes the representation. Computing those gradients will require recomputing every Op in the encoder.

\n", + "system": "" + }, + { + "instruction": "Virtualenv can not inherit GetSitePackages() Attribute", + "input": "", + "output": "

This seems to be a problem with site.py that dates all the way back to 2012, as mentioned here.

\n\n

Have a stab at creating the virtualenv with a different Python version. For example:

\n\n
virtualenv -p python3 virtualenvname\n
\n\n

It's worth checking which Python version you're running (python --version). This seems to be a problem only with python2.7; earlier versions like python2.6 do not experience it, but they lack many useful packages that were added in python2.7.

\n\n

My recommendation would be to run it under python3 or python3.4.\nTensorFlow seems to support python3 with the 0.6.0 release.

\n\n

Hope this helps!

\n", + "system": "" + }, + { + "instruction": "Feeding parameters into placeholders in tensorflow", + "input": "", + "output": "

The inputs should be numpy arrays.

\n\n

So, instead of tf.Variable(tf.random_normal([K])), simply write np.random.randn(K) and everything should work as expected.

\n\n

EDIT (The question was clarified after my answer):

\n\n

It is possible to use placeholders as parameters but in a slightly different way. For example:

\n\n
lchild = tf.placeholder(tf.float32, shape=(K))\nrchild = tf.placeholder(tf.float32, shape=(K))\nparent = tf.nn.tanh(tf.add(lchild, rchild))\nloss = <some loss that depends on the parent tensor or lchild/rchild>\n# Compute gradients with respect to the input variables\ngrads = tf.gradients(loss, [lchild, rchild])\n\ninputs = [np.random.randn(K), np.random.randn(K)]\nfor i in range(<number of iterations>):\n    np_grads = sess.run(grads, feed_dict={lchild: inputs[0], rchild: inputs[1]})\n    inputs[0] -= 0.1 * np_grads[0]\n    inputs[1] -= 0.1 * np_grads[1]\n</code></pre>
\n\n

It is not however the best or easiest way to do this. The main problem with it is that at every iteration you need to copy numpy arrays in and out of the session (which is running potentially on a different device like GPU).

\n\n

Placeholders generally are used to feed the data external to the model (like texts or images). The way to solve it using tensorflow utilities would be something like:

\n\n
lchild = tf.Variable(tf.random_normal([K]))\nrchild = tf.Variable(tf.random_normal([K]))\nparent = tf.nn.tanh(tf.add(lchild, rchild))\nloss = <some loss that depends on the parent tensor or lchild/rchild>\ntrain_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)\n\nfor i in range(<number of iterations>):\n    sess.run(train_op)\n\n# Retrieve the weights back to numpy:\nnp_lchild = sess.run(lchild)\n</code></pre>
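The update loop in the first snippet is just gradient descent on the inputs themselves. Here is a dependency-free scalar version of the same idea (a toy loss chosen purely for illustration, not the TensorFlow API):

```python
import math

# Toy loss: L(l, r) = (tanh(l + r) - target)^2, mirroring parent = tanh(l + r).
TARGET = 0.5

def loss(l, r):
    return (math.tanh(l + r) - TARGET) ** 2

def grads(l, r):
    # dL/dl = dL/dr = 2 * (tanh(l+r) - target) * (1 - tanh(l+r)^2)
    t = math.tanh(l + r)
    g = 2.0 * (t - TARGET) * (1.0 - t * t)
    return g, g

inputs = [0.9, -0.2]  # stand-ins for lchild, rchild
for _ in range(200):
    gl, gr = grads(*inputs)
    inputs[0] -= 0.1 * gl  # same update rule as in the snippet above
    inputs[1] -= 0.1 * gr

print(loss(*inputs))  # the loss shrinks toward 0
```

The same convergence argument applies per-component when the inputs are vectors.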
\n", + "system": "" + }, + { + "instruction": "How to print flag descriptions in Tensorflow?", + "input": "", + "output": "

The flags module used in TensorFlow is a wrapper around the python-gflags module. To see a list of all flags used in a Python application using python-gflags, you can run it with the -h or --help flag. For example:

\n\n
$ tensorboard -h\nusage: tensorboard [-h] [--logdir LOGDIR] [--debug DEBUG] [--nodebug]\n                   [--host HOST] [--port PORT]\n\noptional arguments:\n  -h, --help       show this help message and exit\n  --logdir LOGDIR  logdir specifies where TensorBoard will look to find\n                   TensorFlow event files that it can display. In the simplest\n                   case, logdir is a directory containing tfevents files.\n                   TensorBoard also supports comparing multiple TensorFlow\n                   executions: to do this, you can use directory whose\n                   subdirectories contain tfevents files, as in the following\n                   example: foo/bar/logdir/\n                   foo/bar/logdir/mnist_1/events.out.tfevents.1444088766\n                   foo/bar/logdir/mnist_2/events.out.tfevents.1444090064 You\n                   may also pass a comma seperated list of log directories,\n                   and you can assign names to individual log directories by\n                   putting a colon between the name and the path, as in\n                   tensorboard\n                   --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2\n  --debug DEBUG    Whether to run the app in debug mode. This increases log\n                   verbosity to DEBUG.\n  --nodebug\n  --host HOST      What host to listen to. Defaults to allowing remote access,\n                   set to 127.0.0.1 to serve only on localhost.\n  --port PORT      What port to serve TensorBoard on.\n
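TensorBoard's actual flags are shown above; for your own scripts, the same --help behaviour can be sketched with the standard argparse module (the flag names below are made up for illustration):

```python
import argparse

# Each flag gets a help string; passing -h/--help prints them all and exits.
parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--logdir", default="",
                    help="Directory of event files to display.")
parser.add_argument("--port", type=int, default=6006,
                    help="Port to serve on.")

args = parser.parse_args(["--port", "8080"])
print(args.port)                           # 8080
print("--logdir" in parser.format_help())  # True
```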
\n", + "system": "" + }, + { + "instruction": "How can I determine several labels in parallel (in a neural network) by using a softmax-output-layer in tensorflow?", + "input": "", + "output": "

You could apply the tf.split function to obtain 91 tensors (one for each class), then apply softmax to each of them.

\n\n
classes_split = tf.split(0, 91, all_in_one)\n\nfor c in classes_split:\n    softmax_class = tf.nn.softmax(c)\n    # use softmax_class to compute some loss, add it to overall loss\n
\n\n

or instead of computing the loss directly, you could also concatenate them together again:

\n\n
classes_split = tf.split(0, 91, all_in_one)\n\n# softmax each split individually\nclasses_split_softmaxed = [tf.nn.softmax(c) for c in classes_split]\n# Concatenate again\nall_in_one_softmaxed = tf.concat(0, classes_split_softmaxed)\n
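What the split-then-softmax achieves can be checked without TensorFlow: each group of logits is normalized independently, so every group sums to 1 on its own (illustrative sizes below, not the 91 classes):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Six logits split into 3 groups of 2, softmaxed independently.
all_in_one = [2.0, 1.0, 0.5, 0.5, -1.0, 3.0]
groups = [all_in_one[i:i + 2] for i in range(0, len(all_in_one), 2)]
softmaxed = [softmax(g) for g in groups]
concatenated = [p for g in softmaxed for p in g]  # analogous to tf.concat
```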
\n", + "system": "" + }, + { + "instruction": "AttributeError: 'module' object has no attribute 'MutableMapping'", + "input": "", + "output": "

This sounds like an incompatibility between TensorFlow and the version of Protocol Buffers that's installed on your machine. The two best options are:

\n\n
    \n
  1. Try to upgrade the Protobuf library in /usr/local/lib/python2.7/dist-packages/google/protobuf/ to version 3.0.0a3 or higher.

  2. \n
  3. Install TensorFlow in a virtualenv, by following the instructions here. This should install the appropriate version of protobuf alongside TensorFlow.

  4. \n
\n", + "system": "" + }, + { + "instruction": "Cannot import tensorflow in python after source build", + "input": "", + "output": "

You need to take a few more steps before you can import TensorFlow in a Python shell: just building //tensorflow/cc:tutorials_example_trainer does not build any of the Python front-end.

\n\n

The easiest way to do this from a source installation is to follow the instructions for building a PIP package from source, and then installing it (either globally on your machine, or in a virtualenv).

\n", + "system": "" + }, + { + "instruction": "does word2vec tutorial example imply potential sub-optimal implementation?", + "input": "", + "output": "

Have you ever done any internal profiling to figure out the performance bottleneck of the standard version?\nA: Yes, we did. That profiling led to our decision to write the optimized version.

\n\n

Would the performance bug be a threat to the training performance of other neural networks in general?\nA: It's a complex question and the answer depends on the scenario, so I'd not make such a generalization. In many other models (at least some that I played with), the computation is dominated by \"heavy operations\" like matmul, convolution, etc., and computing the loss and its gradient is relatively cheap. Word2vec, on the other hand, is special in that a training step is basically embedding lookup + loss gradient + gradient application; because these operations are not \"fused\", executing them incurs much higher memory-bandwidth cost. TF plans to develop compiler optimization techniques that fuse such operations automatically, which will avoid, to some degree, the need to fuse operations manually for performance.

\n", + "system": "" + }, + { + "instruction": "How to Update Tensorflow from source", + "input": "", + "output": "

To install the TensorFlow library from source, you need to build a PIP package and install it. The steps are as follows:

\n\n
$ git pull\n$ ./configure\n\n$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package\n# ...or, with GPU support\n$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package\n\n$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n\n# The name of the .whl file will depend on your platform.\n$ pip install /tmp/tensorflow_pkg/tensorflow-0.6.0-cp27-none-linux_x86_64.whl\n
\n", + "system": "" + }, + { + "instruction": "Why my VAE for toy dateset doesn't learn?", + "input": "", + "output": "

One very common problem that occur in VAEs is 'Posterior Collapse' (see here: https://datascience.stackexchange.com/questions/48962/what-is-posterior-collapse-phenomenon).

\n\n

If this is indeed the case, I suggest looking at the solution here:\nhttps://www.quora.com/How-do-you-fix-a-Variational-Autoencoder-VAE-that-suffers-from-mode-collapse

\n\n

It worked out well for me.

\n", + "system": "" + }, + { + "instruction": "Apply 1 channel mask to 3 channel Tensor in tensorflow", + "input": "", + "output": "

The tf.mul() operator supports numpy-style broadcasting, which would allow you to simplify and optimize the code slightly.

\n

Let's say that zero_one_mask is an m x n tensor, and output_img is a b x m x n x 3 (where b is the batch size - I'm inferring this from the fact that you split output_img on dimension 3)*. You can use tf.expand_dims() to make zero_one_mask broadcastable to channels, by reshaping it to be an m x n x 1 tensor:

\n
with tf.variable_scope('apply_mask') as scope:\n  # Output mask is in range [-1, 1], bring to range [0, 1] first\n  # NOTE: Assumes `output_mask` is a 2-D `m x n` tensor.\n  zero_one_mask = tf.expand_dims((output_mask + 1) / 2, 2)\n  # Apply mask to all channels.\n  # NOTE: Assumes `output_img` is a 4-D `b x m x n x c` tensor.\n  output_img = tf.mul(output_img, zero_one_mask)\n
\n

(* This would work equally if output_img were a 4-D b x m x n x c (for any number of channels c) or 3-D m x n x c tensor, due to the way broadcasting works.)

\n", + "system": "" + }, + { + "instruction": "Can't optimize multivariate linear regression in Tensorflow", + "input": "", + "output": "

Your learning rate is too high, so the solution is jumping back and forth, farther and farther.

\n\n

It's generally good practice for a problem like this to normalize your input ranges, for example, so they have mean(0) and var(1).
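Both effects are easy to see on a toy one-dimensional problem (a pure-Python illustration, nothing TensorFlow-specific): with a large learning rate the iterate jumps back and forth with growing magnitude, while a small one converges.

```python
# Gradient descent on f(x) = x^2, whose gradient is f'(x) = 2x.
def descend(lr, steps, x=1.0):
    for _ in range(steps):
        x -= lr * 2.0 * x
    return x

diverged = descend(lr=1.1, steps=50)     # each step maps x to -1.2 * x: blow-up
converged = descend(lr=0.01, steps=500)  # small steps shrink x toward 0
```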

\n", + "system": "" + }, + { + "instruction": "Using RNN tensorflow language model to predict the probabilities of test sentences", + "input": "", + "output": "

I had the same question and I think I found a way around it but I am not an expert so comments are welcomed!

\n\n

In the PTBModel class, you need to add this line:

\n\n
    self._proba = tf.nn.softmax(logits)\n
\n\n

before this early-return check:

\n\n
    if not is_training:\n        return\n
\n\n

and also add this property:

\n\n
      @property\n      def proba(self):\n          return self._proba\n
\n\n

Now in the run_epoch function you can get the probabilities using something like:

\n\n
    cost, state, proba, _ = session.run([m.cost, m.final_state, m.proba, eval_op],...\n
\n\n

From here you have access to all the probabilities via proba. There may be a better way, but I hope this helps!
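Once you have proba, scoring a test sentence is straightforward: multiply the probability the model assigns to each observed word, or equivalently sum the logs (made-up numbers below, independent of the PTB code):

```python
import math

# proba[t][w] = model's probability of word-id w at time step t.
proba = [
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
]
sentence = [0, 1, 2]  # word ids actually observed at each step

log_prob = sum(math.log(proba[t][w]) for t, w in enumerate(sentence))
prob = math.exp(log_prob)  # equals 0.7 * 0.8 * 0.4
```

Working in log space avoids underflow on long sentences.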

\n", + "system": "" + }, + { + "instruction": "How do we recognize text on real image after training MNIST with TensorFlow?", + "input": "", + "output": "

It depends on your specific task. The MNIST model classifies handwritten digits, so that is the kind of data you need to feed it.

\n\n

If you insist on using the MNIST model (RNNs, specifically LSTMs, are a much better option, and are what most OCR systems use), one approach would be to run a sliding window over your handwritten text image and write out the character predicted by your model at each position. But that presents its own set of challenges, like novelty detection and choosing the window size. It's overkill.
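The sliding-window part is simple in itself; over a 1-D signal it looks like this (an illustrative sketch; a real OCR window would slide over 2-D image columns):

```python
def sliding_windows(seq, width, stride):
    """Return consecutive fixed-width slices of seq."""
    return [seq[i:i + width] for i in range(0, len(seq) - width + 1, stride)]

# Each window would be fed to the classifier as one candidate character.
pixels = list(range(10))
windows = sliding_windows(pixels, width=4, stride=2)
print(windows)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```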

\n", + "system": "" + }, + { + "instruction": "tf.matmul doesn't works as expected", + "input": "", + "output": "

As with standard matrix multiplication, if A has shape [m, k], and B has shape [k, n], then tf.matmul(A, B) has shape [m, n] (m rows, n columns in the order TensorFlow uses).

\n\n

In your program, you are computing tf.matmul(X, W). X is defined to be a placeholder with shape [2, 1]; W is defined to be variable initialized to a [1, 2] matrix of zeros. As a result, mulres = tf.matmul(X, W) will have shape [2, 2], which is what is printed (result: ...) when I run your code locally.
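The shape rule is easy to verify with a tiny pure-Python matrix multiply (just an illustration of the [m, k] x [k, n] -> [m, n] rule, not TensorFlow code):

```python
def matmul(a, b):
    # a: m x k, b: k x n  ->  result: m x n
    assert len(a[0]) == len(b), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

X = [[1.0], [2.0]]     # shape [2, 1], like the placeholder
W = [[0.0, 0.0]]       # shape [1, 2], like the original weights
mulres = matmul(X, W)  # shape [2, 2], not a single output
```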

\n\n

If you want to define a hidden layer with a single output, the change is simple:

\n\n
W = tf.Variable(tf.zeros([1,2]), name=\"weight\")\n
\n\n

...should be replaced with:

\n\n
W = tf.Variable(tf.zeros([2, 1]), name=\"weight\")\n
\n\n

(Indeed, initializing your weights to tf.zeros will prevent it from training, because all of the input elements will get the same gradient in backpropagation. Instead, you should initialize them randomly, for example using:

\n\n
W = tf.Variable(tf.truncated_normal([2, 1], stddev=0.5), name=\"weight\")\n
\n\n

This will enable the network to learn different values for each component of the weight.)

\n", + "system": "" + }, + { + "instruction": "Retrieving data from TensorFlow object - list of booleans from correct_prediction", + "input": "", + "output": "

The tensor variables actually describe the computations that must be performed in order to obtain the values you are interested in.

\n\n

In other words, the tensor defined with correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) doesn't contain the list of booleans, it contains the instructions for computing it in a tensorflow graph. In order to get the actual values, you need to tell tensorflow to compute it in a graph.

\n\n

First, you need a tf.Session variable. A simple way to get one for testing in an interactive shell is sess = tf.InteractiveSession(), followed by variable initialization: sess.run(tf.initialize_all_variables()).

\n\n

Then, you can call sess.run(tensor_variable) to compute the value of a given tensor (or a list of them). If your tensors include placeholders in their computations (which they usually do), you'll also have to provide a feed dictionary. This is exemplified in the tutorial.

\n\n

Instead of session.run(), you can also call the .eval() method on tensors. This also requires that a default session exist.

\n", + "system": "" + }, + { + "instruction": "TensorFlow, how to look inside 'blob', the response in through CNN", + "input": "", + "output": "

You can use session.run to get the current values of elements in your computation graph.

\n\n
layer7_values = session.run(layer7_tf, feed_dict={<your inputs>})\n
\n\n

In this example session is a tf.Session() object. layer7_tf is a reference to the Tensor output of a layer in the TensorFlow model, and layer7_values will contain the layer values for the given input as a numpy array.

\n\n

To get a handle to layer7_tf, you have a couple of options. You can either modify TensorFace/vggface/init.py to return a reference to the appropriate layer; or you can explore the session.graph_def structure to find the name of the node that corresponds to that tensor, and pass the string name of the tensor (e.g. layer7_tf/foo/bar:0, where the :0 corresponds to the 0th output of the op called layer7_tf/foo/bar) to session.run() instead.

\n", + "system": "" + }, + { + "instruction": "See the contents of Checkpoint files?", + "input": "", + "output": "

A checkpoint file is an sstable. The value for each record is a serialized\nSavedTensorSlices message. (resource here)

\n\n

To see the content of the serialized SavedTensorSlices message, we just deserialize the content into a SavedTensorSlices object. Something like below:

\n\n
SavedTensorSlices message;\nmessage.ParseFromString(value);\ncout << message.DebugString();\n
\n", + "system": "" + }, + { + "instruction": "undefined symbol: clock_gettime with tensorflow on ubuntu14.04", + "input": "", + "output": "

Try adding the rt library during the linking step, like in this diff:

\n\n
--- a/tensorflow/tensorflow.bzl\n+++ b/tensorflow/tensorflow.bzl\n@@ -284,7 +284,7 @@ _py_wrap_cc = rule(attrs={\n\n def tf_extension_linkopts():\n-  return []  # No extension link opts\n+  return [\"-lrt\"]\n
\n\n

See also this github issue.

\n", + "system": "" + }, + { + "instruction": "Recurrent Mixture Density Networks for Tensorflow?", + "input": "", + "output": "

It looks like someone has made a version of the handwriting generation code in Tensorflow. It includes code for a 2D Recurrent Mixture Density Network.

\n", + "system": "" + }, + { + "instruction": "Installing tensor flow on mac", + "input": "", + "output": "

This may not be exactly the solution you're looking for, but I personally spent the last two hours diagnosing a similar issue where TensorFlow wouldn't work after correctly installing and having its packages satisfied.

\n\n

It was throwing one of those strange _IO module import errors even though Python appeared to be installed correctly, and I ended up tracing it all the way back to the System Integrity Protection feature introduced in El Capitan.

\n\n

See here for easy fix that will let pip work correctly again.

\n\n
\n

For those that are curious, Apple introduced SIP to make sure no user accidentally hurts system files, but in the process it makes it trickier for users to run commands as root

\n
\n\n

Also, remember to run

\n\n
sudo easy_install --upgrade six\n
\n\n

and

\n\n
sudo pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n
\n\n

After disabling SIP to make sure you patch up any requirements. You may need to uninstall some pip packages if this fails for some reason.

\n\n

As a final note, you should be off the ground if, in the python shell, you can

\n\n
import tensorflow as tf\n
\n\n

and it doesn't throw any errors.

\n\n


\n", + "system": "" + }, + { + "instruction": "Using Sparse Tensors to feed a placeholder for a softmax layer in TensorFlow", + "input": "", + "output": "

I have found a way of putting sparse images into tensorflow including batch processing if that is of any help.

\n\n

I create a 4-d sparse matrix in a dictionary where the dimensions are batchSize, xLen, ylen, zLen (where zLen is 3 for colour for example). The following pseudo code is for a batch of 50 32x96 pixel 3-color images. Values are the intensity of each pixel. In the snippet below I show the first 2 pixels of the first batch being initialised...

\n\n
shape = [50, 32, 96, 3]\nindices = [[0, 20, 31, 0],[0, 22, 33, 1], etc...]\nvalues = [12, 24, etc...]\nbatch = {\"indices\": indices, \"values\": values, \"shape\": shape}\n
\n\n

When setting up the computational graph I create a sparse-placeholder of the correct dimensions

\n\n
images = tf.sparse_placeholder(tf.float32, shape=[None, 32, 96, 3])\n
\n\n

'None' is used so I can vary the batch size.

\n\n

When I first want to use the images, e.g. to feed into a batch convolution, I convert them back to a dense tensor:

\n\n
images = tf.sparse_tensor_to_dense(images)\n</code></pre>
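The densification step can be sketched in plain Python to show what it computes: every coordinate listed in indices receives its value, and everything else stays zero (an illustration of the semantics, not the TensorFlow implementation):

```python
def zeros(shape):
    """Build a nested list of zeros with the given shape."""
    if len(shape) == 1:
        return [0.0] * shape[0]
    return [zeros(shape[1:]) for _ in range(shape[0])]

def sparse_to_dense(indices, values, shape):
    dense = zeros(shape)
    for idx, v in zip(indices, values):
        target = dense
        for i in idx[:-1]:   # walk down to the innermost list
            target = target[i]
        target[idx[-1]] = v
    return dense

# Two pixels of a tiny 1 x 2 x 2 x 3 "batch", as in the dictionary above.
dense = sparse_to_dense([[0, 1, 0, 2], [0, 0, 1, 1]], [12.0, 24.0],
                        [1, 2, 2, 3])
```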
\n\n

Then when I am ready to run a session, e.g. for training, I pass the 3 components of the batch into the dictionary so that they will be picked up by the sparse_placeholder:

\n\n
train_dict = {images: (batch['indices'], batch['values'], batch['shape']), etc...}\nsess.run(train_step, feed_dict=train_dict)                \n
\n\n

If you don't need batch processing, just leave off the first dimension and remove 'None' from the placeholder shape.

\n\n

I couldn't find any way of passing the images across in batch as an array of sparse matrices. It only worked if I created the 4th dimension. I'd be interested to know of alternatives.

\n\n

Whilst this doesn't give an exact answer to your question I hope it is of use as I have been struggling with similar issues.

\n", + "system": "" + }, + { + "instruction": "Bazel: Use swig from non-standard location", + "input": "", + "output": "

That isn't Bazel-specific, it's part of the tensorflow code:

\n\n

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/swig/swig.sh

\n\n

You can just modify that file before you build.

\n\n

The specific build rule that uses that file is also defined by tensorflow, since it's not a rule included with the core Bazel distribution:

\n\n

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorflow.bzl#L275

\n", + "system": "" + }, + { + "instruction": "Problems installing TensorFlow on Mac", + "input": "", + "output": "

I do not want to use virtualenv, since anaconda already comes with its own environment management conda. When installing the newest version 0.6.0 directly with pip install, I had a similar error. It seemed to not resolve the dependencies correctly.

\n\n

Here is what you can try:

\n\n
    \n
  1. Install anaconda
  2. \n
  3. Create a new conda workspace
  4. \n
  5. Download the specific protobuf version that tensorflow needs: https://pypi.python.org/pypi/protobuf/3.0.0a3
  6. \n
  7. Install it via sudo easy_install ~/Downloads/protobuf-3.0.0a3-py2.7.egg
  8. \n
  9. Install a numpy version greater than 1.08.x via conda install numpy
  10. \n
  11. Download the 0.6.0 version of tensorflow: https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py2-none-any.whl
  12. \n
  13. Install via pip install ~/Downloads/tensorflow-0.6.0-py2-none-any.whl
  14. \n
\n\n

When you install tensorflow from the whl file directly, it should tell you when dependencies are not there. It seems not to be able to resolve these conflicts independently. My setup had issues with protobuf and numpy. After installing them manually everything worked fine.

\n\n

I hope this helps!

\n", + "system": "" + }, + { + "instruction": "Using squared difference of two images as loss function in tensorflow", + "input": "", + "output": "

There are a lot of ways you can end up with non-finite results.

\n\n

But optimizers, especially simple ones like gradient descent, can diverge if the learning rate is 'too high'.

\n\n

Have you tried simply dividing your learning rate by 10/100/1000? Or normalizing by pixels*batch_size to get the average error per pixel?

\n\n

Or one of the more advanced optimizers? For example tf.train.AdamOptimizer() with default options.

\n", + "system": "" + }, + { + "instruction": "Using tensorflow for sequence tagging : Synced sequence input and output", + "input": "", + "output": "

If your input and output sequences are the same length, you probably want something simpler than a seq2seq model (since handling different sequence lengths is one of its strengths).

\n\n

Have you tried just training (word -> tag)?

\n\n

[diagram: tokens -> tags RNN]

\n\n

Note that for something like POS tagging, where there is a clear signal from the tokens on either side, you'll definitely get a benefit from a bidirectional net.

\n\n

[diagram: bidirectional RNN tagger]

\n\n

If you want to go all crazy there would be some fun character level variants too where you only emit the tag at the token boundary (the rationale being that pos tagging benefits from character level features; e.g. things like out of vocab names). So many variants to try! :D

\n\n

[diagram: character-level tagger emitting tags at token boundaries]

\n", + "system": "" + }, + { + "instruction": "MNIST For ML Beginners - Why is "one-hot" vector length 11?", + "input": "", + "output": "

This is just a typo, do not worry about it (actually this is not the only typo there). If you check the shape of the data, mnist.train.labels.shape, you will get (55000, 10), which is not the same as they claim (60000, 10).

\n\n

This shape of the data also shows you that the length of a one-hot vector is 10, not 11.
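For reference, building such a length-10 one-hot vector is a one-liner (plain Python, equivalent to what the tutorial's label vectors contain):

```python
def one_hot(label, num_classes=10):
    """Return a vector of num_classes zeros with a 1.0 at position label."""
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

print(one_hot(3))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```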

\n", + "system": "" + }, + { + "instruction": "Using Queues to uniformly sample from multiple input files", + "input": "", + "output": "

The correct answer to this question will depend on how many files you have, how large they are, and how their sizes are distributed.

\n\n

The immediate problem with your example is that rq only gets one element for each filename in filenames, then the queue is closed. I'm presuming that there are 10 filenames, since rq.dequeue() will consume one element of rq each time you call sess.run([label, image]). Since the queue is closed, no more elements can be added, and the 11th activation of the rq.dequeue() operation fails.

\n\n

The general solution is that you have to create additional threads to keep running rq.enqueue([out_string]) in a loop. TensorFlow includes a QueueRunner class that is designed to simplify this, and some other functions that handle common cases. The documentation for threading and queues explains how they are used, and there is also some good information on using queues to read from files.

\n\n

As to your particular problem, one way you could handle this would be to create N readers (for each of N files). You could then tf.pack() N elements (one from each reader) into a batch, and use enqueue_many to add a batch at a time into a tf.RandomShuffleQueue with a sufficiently large capacity and min_after_dequeue to ensure that there is sufficient mixing between the classes. Calling dequeue_many(k) on the RandomShuffleQueue would give you a batch of k elements sampled from each file with equal probability.
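The effect of packing one element per file and then shuffling can be mimicked in plain Python (a conceptual sketch of the sampling behaviour, not the queue-runner machinery):

```python
import random
from collections import Counter

# N "files", each holding examples of a single class.
files = [["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]]

rng = random.Random(0)
buffer = []
for round_items in zip(*files):   # one element from each file per round
    buffer.extend(round_items)
rng.shuffle(buffer)               # plays the role of the RandomShuffleQueue

batch = buffer[:3]                # analogous to dequeue_many(3)
counts = Counter(item[0] for item in buffer)  # class balance in the buffer
```

Because each round contributes exactly one element per file, every class is equally represented in the shuffled buffer.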

\n", + "system": "" + }, + { + "instruction": "Can someone help me with TensorFlow?", + "input": "", + "output": "

The best solution I have found is:

\n\n

https://github.com/google/skflow

\n\n

Charles

\n", + "system": "" + }, + { + "instruction": "Tensorflow - no module named 'embeddings' in tensorflow.models.embeddings", + "input": "", + "output": "

For now, the more full-featured versions of word2vec such as word2vec.py require building from source.

\n", + "system": "" + }, + { + "instruction": "ImportError: No module named core.framework.graph_pb2", + "input": "", + "output": "

I have met exactly the same issue. Once you have installed TensorFlow successfully, it's no longer a library dependency problem.

\n\n

If you executed convolution.py exactly as the manual says and get an exception like the one below,

\n\n
ImportError: No module named core.framework.graph_pb2\n</code></pre>
\n\n

this means you are executing the Python script from inside the cloned project root directory; let's say the root is named \"src\".

\n\n
src$python tensorflow/models/image/mnist/convolutional.py\n
\n\n

Try executing the script from the parent directory of the cloned root instead. For example, if you cloned tensorflow under the src dir, go to its parent dir (say xxx) and run it again.

\n\n
xxx$python src/tensorflow/models/image/mnist/convolutional.py\n
\n\n

bingo, it works like a charm!

\n", + "system": "" + }, + { + "instruction": "How to deal with inputs outside 0-1 range in tensorflow?", + "input": "", + "output": "

The issue is that the example uses a very aggressive learning rate:

\n\n
optimizer = tf.train.GradientDescentOptimizer(0.5)\n
\n\n

This makes learning faster, but stops working if you change the problem a bit. A learning rate of 0.01 would be more typical:

\n\n
optimizer = tf.train.GradientDescentOptimizer(0.01)\n
\n\n

Now your modification works fine. :)

\n", + "system": "" + }, + { + "instruction": "Selectively registering the backward pass of a set of ops on the GPU", + "input": "", + "output": "\n\n

Currently there's no good way to customize the device assignment for ops in the (automatically generated) gradient computation. However, one thing you can do is register a \"device function\" using with tf.device(device_function):. A \"device function\" is a function that takes a newly-constructed tf.Operation and returns a device name, and TensorFlow assigns the operation to that device. This enables you to do the following:

\n\n
# These are almost certainly faster on GPU, but are just shown as an example.\nOPS_ON_CPU = set([\"AvgPool\", \"AvgPoolGrad\"])\n\ndef _device_function(op):\n  if op.type in OPS_ON_CPU:\n    return \"/cpu:0\"\n  else:\n    # Other ops will be placed on GPU if available, otherwise CPU.\n    return \"\"\n\nwith tf.device(_device_function):\n  # Build model in here.\n  # ...\n  loss = ...\n  train_op = tf.train.AdamOptimizer(0.01).minimize(loss) \n
\n\n

...which will place all ops with type \"AvgPool\" or \"AvgPoolGrad\" on the CPU.
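Stripped of TensorFlow, the dispatch logic of such a device function is just a type-to-device lookup (the Op stand-in below is hypothetical, mimicking only the type attribute of tf.Operation):

```python
from collections import namedtuple

# Minimal stand-in for tf.Operation: only the `type` attribute matters here.
Op = namedtuple("Op", ["type"])

OPS_ON_CPU = {"AvgPool", "AvgPoolGrad"}

def device_function(op):
    if op.type in OPS_ON_CPU:
        return "/cpu:0"
    return ""  # empty string: let the runtime pick (GPU if available)

print(device_function(Op("AvgPool")))  # /cpu:0
print(device_function(Op("MatMul")))   # empty string
```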

\n", + "system": "" + }, + { + "instruction": "Tensorflow Setup for Distributed Computing", + "input": "", + "output": "

The open-source version (currently 0.6.0) of TensorFlow supports single-process execution only: in particular, the only valid target in the tensorflow::SessionOptions is the empty string, which means \"current process.\"

\n\n

The TensorFlow whitepaper describes the structure of the distributed implementation (see Figure 3) that we use inside Google. The basic idea is that the Session interface can be implemented using RPC to a master; and the master can partition the computation across a set of devices in multiple worker processes, which also communicate using RPC. Alas, the current version depends heavily on Google-internal technologies (like Borg), so a lot of work remains to make it ready for external consumption. We are currently working on this, and you can follow the progress on this GitHub issue.

\n\n

EDIT on 2/26/2016: Today we released an initial version of the distributed runtime to GitHub. It supports multiple machines and multiple GPUs.

\n", + "system": "" + }, + { + "instruction": "How can I create/run benchmarks for custom kernels in tensorflow?", + "input": "", + "output": "

To invoke the benchmarks, run the following command (passing --benchmarks=all as the final argument):

\n\n
$ bazel run -c opt //tensorflow/core:kernels_adjust_contrast_op_benchmark_test \\\n      --test_output=all --cache_test_results=no -- --benchmarks=all\n
\n\n

To run GPU benchmarks, you must pass --config=cuda to bazel and append _gpu to the name of the test target. For example:

\n\n
$ bazel run -c opt --config=cuda \\ \n     //tensorflow/core:kernels_adjust_contrast_op_benchmark_test_gpu \\\n     --test_output=all --cache_test_results=no -- --benchmarks=all\n
\n", + "system": "" + }, + { + "instruction": "In Tensorflow's C++ TensorShape API, what is the equivalent of Python's None?", + "input": "", + "output": "

Update: There is now a tensorflow::PartialTensorShape class that can represent shapes with unknown dimensions or an unknown rank. The value -1 is used to represent an unknown value (i.e. what None represents in Python). It is used in the C++ shape inference code, and can be specified in a shape-typed attr or a tf.TensorShape proto.

\n\n
\n\n

TL;DR: There is no equivalent in C++ because the C++ part of TensorFlow only checks shapes at runtime, when they are fully defined; whereas the Python part checks shapes at graph-construction time, when they might not be fully defined.

\n\n

There is no equivalent of tf.Dimension(None) (i.e., an unknown dimension) in the C++ tensorflow::TensorShape class. This is because the (C++) tensorflow::TensorShape class describes the shape of a (C++) tensorflow::Tensor, which represents a concrete value for a tensor, and therefore must have a fully defined shape. The Python tf.Tensor class represents a symbolic tensor—representing the output of an operation that has yet to be run—and so it can have a shape that is unknown in one or more dimensions.

\n\n

If you are using the C++ API to feed a placeholder, you should simply create a new tensorflow::Tensor with a fully defined shape for each different value that you feed to the placeholder (in a Session::Run() call). Note however that the C++ API does not check shapes of placeholders, so you should manually ensure that the shape matches the expected shape for the placeholder.

\n\n

If you are building a graph using the C++ API, and you want to define a placeholder with an unknown size in one or more dimensions, you should define a placeholder node with its shape attr set to tensorflow::TensorShape({}). Although this is equivalent to a scalar, for historical reasons this is treated as the shape being completely unconstrained.

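To illustrate the semantics of an unknown dimension (None in Python, -1 in the C++ PartialTensorShape), here is a small pure-Python sketch of how a partial shape merges with a concrete one; the helper name merge_dim is made up for illustration:

```python
def merge_dim(a, b):
    # None stands for an unknown dimension (like -1 in PartialTensorShape).
    if a is None:
        return b
    if b is None:
        return a
    assert a == b, "incompatible dimensions"
    return a

partial = [None, 28, 28]    # batch size unknown at graph-construction time
concrete = [32, 28, 28]     # shape of an actual fed tensor
print([merge_dim(a, b) for a, b in zip(partial, concrete)])  # [32, 28, 28]
```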
\n", + "system": "" + }, + { + "instruction": "adding one hot encoding throws error in previously working code in Tensorflow", + "input": "", + "output": "

Neural networks can only be trained if all the operations they perform are differentiable. The \"one-hot\" step you apply is not differentiable, and hence such a neural network cannot be trained using any gradient descent-based optimizer (i.e. any optimizer that TensorFlow implements).

\n\n

The general approach is to use softmax (which is differentiable) during training to approximate one-hot encoding (and your model already applies softmax after computing the logits, so commenting out the \"one-hot\" step is actually all you need to do).

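To illustrate why softmax works as a differentiable stand-in for one-hot encoding, here is a small pure-Python sketch (not TensorFlow); the temperature parameter is added only for illustration and is not part of the model above:

```python
import math

def softmax(logits, temperature=1.0):
    # Lower temperature sharpens the distribution toward a hard one-hot.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax(logits))        # smooth and differentiable, roughly [0.66, 0.24, 0.10]
print(softmax(logits, 0.05))  # nearly the hard one-hot [1, 0, 0]
```

The smooth version has useful gradients everywhere, which is exactly what the hard one-hot step lacks.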
\n", + "system": "" + }, + { + "instruction": "TensorFlow example for text classification - how to evaluate your own text?", + "input": "", + "output": "

Here is an example: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/text_classification.py

\n\n

You can set the flag test_with_fake_data to use the fake data in text_train.csv (training samples) and text_test.csv (testing samples) here. Next, you can modify these two files to include whatever data you'd like to have. You will need to do some preprocessing if your existing text files are in a different format.

\n", + "system": "" + }, + { + "instruction": "Tensorflow backprop through rnn ValueError", + "input": "", + "output": "

The 0.6.0 (and earlier) release of TensorFlow had a bug in the implementation of gradients for tf.nn.embedding_lookup() and tf.gather() when the indices argument (self._input_data in your code) had more than one dimension.

\n\n

Upgrading to the latest source release should fix this error. Otherwise, this commit has the relevant change (to array_grad.py) that will enable your program to work.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: No shape function registered for standard op: ExtractGlimpse. Where do I add my code for the shape function?", + "input": "", + "output": "

This looks like a bug in TensorFlow: the shape function is defined in the correct place, but the code in attention_ops.py is never executed, so the shape function is never registered.

\n\n

I will fix it upstream, but in the meantime you can fix it by adding the following line to your program:

\n\n
from tensorflow.python.ops import attention_ops\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow image reading empty", + "input": "", + "output": "

The image will be properly loaded, but TensorFlow doesn't have enough information to infer the image's shape until the op is run. This arises because tf.image.decode_jpeg() can produce tensors of different shapes (heights and widths), depending on the contents of the string tensor value. This enables you to build input pipelines using a collection of images with different sizes.

\n\n

The Dimension(None) in the shape means \"unknown\" rather than \"empty\".\nIf you happen to know that all images read by this operation will have the same size, you can use Tensor.set_shape() to provide this information, and doing so will help to validate the shapes of later parts of the graph:

\n\n
my_img = tf.image.decode_jpeg(value, channels=3)    \nKNOWN_HEIGHT = 28\nKNOWN_WIDTH = 28\nmy_img.set_shape([KNOWN_HEIGHT, KNOWN_WIDTH, 3])\n\nprint(my_img)\n# Output: Tensor(\"DecodeJpeg:0\", shape=TensorShape([Dimension(28), Dimension(28), Dimension(3)]), dtype=uint8)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow MaxPool doesn't accept float64", + "input": "", + "output": "

Support for tf.nn.max_pool() for types other than single-precision floating point values is currently not implemented, and the documentation is incorrect. (I have updated it upstream, and it should appear on GitHub and the website soon.)

\n\n

The reason for the incompatibility is that TensorFlow has a specialized implementation of max-pooling on GPUs for performance reasons, and we almost always use tf.float32 when training deep networks, so there isn't equivalent support for the other types. It would be possible to add, so contributions are welcome: see the GitHub issue for more discussion.

\n", + "system": "" + }, + { + "instruction": "Synchronous vs asynchronous computation in Tensorflow", + "input": "", + "output": "

Suppose you have n workers.

\n\n

Asynchronous means that each worker just reads parameters, computes updates, and writes updated parameters, without any locking mechanism at all. The workers can overwrite each other's work freely.\nSuppose worker 1 is slow for some reason. Worker 1 reads parameters at time t, and then tries to write updated parameters at time t+100. In the meantime, workers 2-n have all done a lot of updates at time step t+1, t+2, etc. When the slow worker 1 finally does its write, it overwrites all of the progress the other workers have made.

\n\n

Fully synchronous means that all the workers are coordinated. Every worker reads the parameters, computes a gradient, and then waits for the other workers to finish. Then the learning algorithm computes the average of all of the gradients they computed, and does an update based on that one average. If worker 1 is very slow and takes 100 time steps to finish, but workers 2-n all finish on time step 2, then most of the workers will spend most of the time sitting doing nothing waiting for worker 1.

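The slow-worker scenario above can be reproduced with a toy (non-TensorFlow) simulation; the numbers are made up for illustration:

```python
# Toy simulation of the slow asynchronous worker described above.
params = 0.0

# Worker 1 reads the parameters at time t...
stale_read = params

# ...while workers 2..n apply ten updates of +1 each.
for _ in range(10):
    params += 1.0

# Worker 1 finally writes an update based on its stale read,
# discarding the other workers' progress.
params = stale_read + 1.0
print(params)  # 1.0, not 11.0
```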
\n", + "system": "" + }, + { + "instruction": "Error on restoring Tensorflow checkpoint file", + "input": "", + "output": "

You are trying to load a variable that doesn't exist in the original network. I believe that omitting

\n\n
    v1 = tf.Variable(0)\n
\n\n

will solve the problem.

\n\n

If you want to add a new variable, you need to load the checkpoint differently; the loading method should look like this:

\n\n
reader = tf.train.NewCheckpointReader(os.path.join(checkpoint_dir, ckpt_name))\nrestore_dict = dict()\nfor v in tf.trainable_variables():\n    tensor_name = v.name.split(':')[0]\n    if reader.has_tensor(tensor_name):\n        print('has tensor ', tensor_name)\n        restore_dict[tensor_name] = v\n    # put the logic of the new/modified variable here and assign to the restore_dict, i.e. \n    # restore_dict['my_var_scope/my_var'] = get_my_variable()\n
\n", + "system": "" + }, + { + "instruction": "What is going wrong with the training and predictions using TensorFlow?", + "input": "", + "output": "

The answer is two-fold.

\n\n

One problem is with the dimensions/parameters. The other problem is that the features are being placed in the wrong spot.

\n\n
W_conv1 = weight_variable([1, 2, 1, 80])\nb_conv1 = bias_variable([80])\n
\n\n

Notice the first two numbers in the weight_variable correspond to the dimensions of the input. The second two numbers correspond to the dimensions of the feature tensor. The bias_variable always takes the final number in the weight_variable.

\n\n

Second Convolutional Layer

\n\n
W_conv2 = weight_variable([1, 2, 80, 160])\nb_conv2 = bias_variable([160])\n
\n\n

Here the first two numbers still correspond to the dimensions of the input. The second two numbers correspond to the amount of features and the weighted network that results from the 80 previous features. In this case, we double the weighted network. 80x2=160. The bias_variable then takes the final number in the weight_variable. If you were to finish the code at this point, the last number in the weight_variable would be a 1 in order to prevent dimensional errors due to the shape of the input tensor and the output tensor. But, instead, for better predictions, let's add a third convolutional layer.

\n\n

Third Convolutional Layer

\n\n
W_conv3 = weight_variable([1, 2, 160, 1])\nb_conv3 = bias_variable([1])\n
\n\n

Once again, the first two numbers in the weight_variable take the shape of the input. The third number corresponds to the amount of the weighted variables we established in the Second Convolutional Layer. The last number in the weight_variable now becomes 1 so we don't run into any dimension errors on the output that we are predicting. In this case, the output has the dimensions of 1, 2.

\n\n
W_fc2 = weight_variable([80, 1024])\nb_fc2 = bias_variable([1024])\n
\n\n

Here, the number of neurons is 1024, which is completely arbitrary, but the first number in the weight_variable needs to be something that the dimension of our feature matrix is divisible by. In this case it can be any number (such as 2, 4, 10, 20, 40, 80). Once again, the bias_variable takes the last number in the weight_variable.

\n\n

At this point, make sure that the last number in h_pool3_flat = tf.reshape(h_pool3, [-1, 80]) corresponds to the first number in the W_fc2 weight_variable.

\n\n

Now when you run your training program you will notice that the outcome varies and won't always guess all 1's or all 0's.

\n\n

When you want to predict the probabilities, you have to feed x to the softmax output y_conv = tf.nn.softmax(tf.matmul(h_fc2_drop, W_fc3) + b_fc3), like so:

\n\n
ans = sess.run(y_conv, feed_dict={x: x_test_actual, keep_prob: 1.0})\n
\n\n

You can alter the keep_prob variable, but keeping it at 1.0 always produces the best results. Now, if you print out ans you'll have something that looks like this:

\n\n
[[ 0.90855026  0.09144982]\n [ 0.93020624  0.06979381]\n [ 0.98385173  0.0161483 ]\n [ 0.93948185  0.06051811]\n [ 0.90705943  0.09294061]\n [ 0.95702559  0.04297439]\n [ 0.95543593  0.04456403]\n [ 0.95944828  0.0405517 ]\n [ 0.99154049  0.00845954]\n [ 0.84375167  0.1562483 ]\n [ 0.98449463  0.01550537]\n [ 0.97772813  0.02227189]\n [ 0.98341942  0.01658053]\n [ 0.93026513  0.06973486]\n [ 0.93376994  0.06623009]\n [ 0.98026556  0.01973441]\n [ 0.93210858  0.06789146]\n
\n\n

Notice how the probabilities vary. Your training is now working properly.

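As a quick sanity check on output like the above, each softmax row should be a valid probability distribution (non-negative entries summing to 1); here is a small pure-Python check using two of the printed rows:

```python
# Two rows copied from the printed predictions above.
rows = [
    [0.90855026, 0.09144982],
    [0.93020624, 0.06979381],
]
for row in rows:
    assert all(p >= 0.0 for p in row)
    assert abs(sum(row) - 1.0) < 1e-5  # softmax outputs sum to 1
print("rows are valid probability distributions")
```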
\n", + "system": "" + }, + { + "instruction": "Running two tensorflow graphs at a time", + "input": "", + "output": "

The simplest option would be to build the two models in different graphs and start separate sessions for each graph. However, each session owns its devices (CPU and possibly GPU), so you would have separate thread pools for each model, and this might cause suboptimal scheduling behavior. Careful use of the tf.Session configuration options would be necessary to get good performance.

\n\n

Alternatively, you could combine the two models in the same graph and use a single session. As you point out, the variables for the two models would necessarily have different names. Therefore, to make this work, you would need to provide an explicit name-to-Variable mapping when you construct the tf.Saver for loading in the model.

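The name-to-variable mapping can be sketched in plain Python (the scope name model_a and the stand-in values are made up for illustration; with TensorFlow you would pass such a dict to tf.train.Saver as its var_list):

```python
# Variables as they are named in the combined graph (stand-ins for tf.Variable).
model_a_vars = {
    "model_a/weights": "<Variable w>",
    "model_a/bias": "<Variable b>",
}

# The checkpoint was written without the per-model prefix, so strip it
# to map checkpoint names back onto the renamed variables.
mapping = {name.split("/", 1)[1]: var for name, var in model_a_vars.items()}
print(sorted(mapping))  # ['bias', 'weights']
```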
\n", + "system": "" + }, + { + "instruction": "Event files in Google Tensorflow", + "input": "", + "output": "

The best solution from a TensorBoard perspective is to have a root directory for your experiment, e.g. ~/tensorflow/mnist_experiment, and then to create a new subdirectory for each run, e.g. ~/tensorflow/mnist_experiment/run1/...

\n\n

Then run TensorBoard against the root directory, and every time you invoke your code, setup the SummaryWriter pointing to a new subdirectory. TensorBoard will then interpret all of the event files correctly, and it will also make it easy to compare between your different runs.

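A minimal sketch of that directory layout in plain Python (the experiment name is illustrative, and tempfile is used only to keep the example self-contained):

```python
import os
import tempfile
import time

# One root directory per experiment, and a fresh subdirectory per run.
root = os.path.join(tempfile.gettempdir(), "mnist_experiment")
run_dir = os.path.join(root, "run_%d" % int(time.time()))
os.makedirs(run_dir, exist_ok=True)

# Point the SummaryWriter at run_dir for this run, and run
# tensorboard --logdir against root to compare all runs.
print(os.path.isdir(run_dir))  # True
```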
\n", + "system": "" + }, + { + "instruction": "Tensorflow: how to add user custom op accepting two 1D vec tensor and output a scalar?", + "input": "", + "output": "

The issue is that the predicts tensor in your Python program has type float, and your op registration accepts this as a valid type for the predicts input (since T1 can be float or double), but AucOp::Compute() assumes that the predicts input always has type double (in the call to predicts_tensor.flat<double>()). The tensorflow::Tensor class does not convert the type of elements in the tensor when you ask for values of a different type, and instead it raises a fatal error.

\n\n

There are several possible solutions:

\n\n
    \n
  1. To get things working quickly, you could change the type of predicts in your Python program to tf.float64 (which is a synonym for double in the Python front-end):

    \n\n
    predicts = tf.constant([0.8, 0.5, 0.12], dtype=tf.float64)\n
  2. \n
  3. You could start by defining a simpler op that accepts inputs of a single type only:

    \n\n
    REGISTER_OP(\"Auc\")\n.Input(\"predicts: double\")\n.Input(\"labels: int32\")\n...;\n
  4. \n
  5. You could add code in the AucOp::Compute() method to test the input type and access the input values as appropriate. (Use this->input_type(i) to find the type of the ith input.)

  6. \n
  7. You could define a templated class AucOp<TPredict, TLabel>, then use TypeConstraint<> in the REGISTER_KERNEL_BUILDER call to define specializations for each of the four valid combinations of prediction and label types. This would look something like:

    \n\n
    REGISTER_KERNEL_BUILDER(Name(\"Auc\")\n                            .Device(DEVICE_CPU)\n                            .TypeConstraint<float>(\"T1\")\n                            .TypeConstraint<int32>(\"T2\"),\n                       AucOp<float, int32>);\n// etc. for AucOp<double, int32>, AucOp<float, int64>, and AucOp<double, int64>.\n
  8. \n
\n", + "system": "" + }, + { + "instruction": "Tensorflow, equivalent of Theano's pydotprint?", + "input": "", + "output": "

As @JHafdahl points out, TensorBoard provides graph visualization for TensorFlow graphs, which includes support for summarizing complex nested subgraphs.

\n\n

To visualize a graph, build a TensorFlow graph as normal, then add the following statements to your Python program:

\n\n
writer = tf.train.SummaryWriter(\"/path/to/logs\", tf.get_default_graph().as_graph_def())\nwriter.flush()\n
\n\n

Then, in a separate terminal, run TensorBoard to visualize your graph:

\n\n
$ tensorboard --logdir=/path/to/logs --port 6006\n
\n\n

Finally, connect to TensorBoard by opening http://localhost:6006 in your web browser. Clicking on the \"Graph\" tab will show the visualization of your graph; see the graph visualization tutorial for more details.

\n", + "system": "" + }, + { + "instruction": "Why is TensorFlow predicting all 0's or all 1's after training?", + "input": "", + "output": "

tl;dr: The way the sample code posted above computes the cross-entropy is not numerically robust. Use tf.nn.softmax_cross_entropy_with_logits instead.

\n\n

(in response to v1 of the question, which has changed): I'm worried that your training is not actually getting run to completion or working, based upon the nans in your x_train data that you showed. I'd suggest first fixing that - and identifying why they showed up and fixing that bug, and seeing if you also have nans in your test set. Might be helpful to show x_test and y_test also.

\n\n

Finally, I believe there's a bug in the way y_ is handled in relation to x. The code is written as if y_ is a one-hot matrix, but when you show y_train[:10], it only has 10 elements, not 10*num_classes categories. I suspect a bug there. When you argmax it on axis 1, you're always going to get a vector full of zeros (because there's only one element on that axis, so of course it's the maximum element). Combine that with a bug producing always-zero output on the estimate, and you're always producing a \"correct\" answer. :)

\n\n

Update for revised version\nIn the changed version, if you run it and print out W at the end of every execution by changing your code to look like this:

\n\n
 _, w_out, b_out = sess.run([train_step, W, b], feed_dict={x: [x_train[g]], y_: [y_train[g]]})\n
\n\n

you'll observe that W is full of nans. To debug this, you can either stare a lot at your code to see if there's a mathematical problem you can spot, or you can instrument back through the pipeline to see where they show up. Let's try that. First, what's the cross_entropy? (add cross_entropy to the list of things in the run statement and print it out)

\n\n
Cross entropy:  inf\n
\n\n

Great! So.. why? Well, one answer is that when:

\n\n
y = [0, 1]\ntf.log(y) = [-inf, 0]\n
\n\n

This is a valid possible output for y, but one that your computation of the cross-entropy is not robust to. You could either manually add some epsilons to avoid the corner cases, or use tf.nn.softmax_cross_entropy_with_logits to do it for you. I recommend the latter:

\n\n
yprime = tf.matmul(x,W)+b\ny = tf.nn.softmax(yprime)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(yprime, y_)\n
\n\n

I don't guarantee that your model will work, but this should fix your current NaN problem.

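The failure mode and the fix can be reproduced in pure Python; this is a sketch of the log-sum-exp idea behind computing cross-entropy from logits (the helper names are made up, and this only illustrates what softmax_cross_entropy_with_logits guards against):

```python
import math

def probs(logits):
    # Naive softmax: fine for moderate logits, underflows for extreme ones.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def xent_from_logits(logits, label):
    # log-sum-exp keeps everything finite: -sum(label * log_softmax(logits))
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return -sum(l * (z - lse) for l, z in zip(label, logits))

logits = [800.0, -800.0]   # extremely confident in class 0
label = [0.0, 1.0]         # ...but the true class is 1
p = probs(logits)
print(p[1])                             # 0.0 -- underflowed, so log(p[1]) blows up
print(xent_from_logits(logits, label))  # 1600.0, finite
```

Working directly on the logits avoids ever forming a zero probability, which is why the logits-based op is robust where -sum(y_ * log(y)) is not.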
\n", + "system": "" + }, + { + "instruction": "Error running skflow example", + "input": "", + "output": "\n\n

I think this is an issue with the interface between skflow and pandas. Try calling .values on the data frames before passing them to skflow:

\n\n
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\nlr = LogisticRegression()\nlr.fit(X_train.values, y_train.values)\nprint accuracy_score(lr.predict(X_test.values), y_test.values)\n\n# 3 layer neural network with rectified linear activation.\n\nrandom.seed(42)\nclassifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10],\n                                        n_classes=2, batch_size=128, steps=500,\n                                        learning_rate=0.05)\nclassifier.fit(X_train.values, y_train.values)\nprint accuracy_score(classifier.predict(X_test.values), y_test.values)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow, binary classification", + "input": "", + "output": "

You should definitely use TensorBoard to visualize the cross entropy, bias and weight summaries. I think it will give you a much better view of what is going on.

\n\n

Try with this code and then run tensorboard:

\n\n
import dataset as input_data\nimport tensorflow as tf\n\n\ndef weight_variable(shape):\n    initial = tf.truncated_normal(shape, stddev=0.1)\n    return tf.Variable(initial)\n\n\ndef bias_variable(shape):\n    initial = tf.constant(0.1, shape=shape)\n    return tf.Variable(initial)\n\n\ndef conv2d(x, W):\n    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n\n\ndef max_pool_2x2(x):\n    return tf.nn.max_pool(x, ksize=[1, 3, 3, 1],\n                          strides=[1, 2, 2, 1], padding='SAME')\n\n\ndata = input_data.read_data_sets()\n\nsess = tf.InteractiveSession()\n\nx = tf.placeholder(\"float\", shape=[None, input_data.HEIGHT * input_data.WIDTH * 3])\ny_ = tf.placeholder(\"float\", shape=[None, 2])\n\nW_conv1 = weight_variable([5, 5, 3, 64])\nb_conv1 = bias_variable([64])\n\nx_image = tf.reshape(x, [-1, input_data.WIDTH, input_data.HEIGHT, 3])\n\nh_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)\nh_pool1 = max_pool_2x2(h_conv1)\nh_norm1 = tf.nn.lrn(h_pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)\n\nW_conv2 = weight_variable([5, 5, 64, 64])\nb_conv2 = bias_variable([64])\n\nh_conv2 = tf.nn.relu(conv2d(h_norm1, W_conv2) + b_conv2)\nh_norm2 = tf.nn.lrn(h_conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)\nh_pool2 = max_pool_2x2(h_norm2)\n\nW_fc1 = weight_variable([input_data.HEIGHT / 4 * input_data.WIDTH / 4 * 64, 1024])\nb_fc1 = bias_variable([1024])\n\nh_pool2_flat = tf.reshape(h_pool2, [-1, input_data.HEIGHT / 4 * input_data.WIDTH / 4 * 64])\nh_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)\n\nkeep_prob = tf.placeholder(\"float\")\nh_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n\nW_fc2 = weight_variable([1024, 2])\nb_fc2 = bias_variable([2])\n\ny_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)\n\n# Add summary ops to collect data\nw_fc2_hist = tf.histogram_summary(\"weights_fc2\", W_fc2)\nb_fc2_hist = tf.histogram_summary(\"bias_fc2\", b_fc2)\nw_in_hist = tf.histogram_summary(\"weights_in\", W)\nb_in_hist = 
tf.histogram_summary(\"bias_in\", b)\ny_hist = tf.histogram_summary(\"y_conv\", y_conv)\n\ncross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))\nce_summ = tf.scalar_summary(\"cross entropy\", cross_entropy)\ntrain_step = tf.train.AdamOptimizer(1e-6).minimize(cross_entropy)\ncorrect_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\naccuracy_summary = tf.scalar_summary(\"train accuracy\", accuracy)\n# Merge all the summaries and write them out to /tmp/tf\nmerged = tf.merge_all_summaries()\nwriter = tf.train.SummaryWriter(\"/tmp/tf\", sess.graph_def)\nsess.run(tf.initialize_all_variables())\nfor i in range(20000):\n    batch = data.train.next_batch(50)\n    feed={x: batch[0], y_: batch[1], keep_prob: 0.5}\n    result = sess.run([merged, accuracy, train_step], feed_dict=feed)        \n    if i % 100 == 0:  # Record summary data, and the accuracy\n            summary_str = result[0]\n            acc = result[1]\n            writer.add_summary(summary_str, i)\n            print(\"Accuracy at step {0}/{1}: {2}%\".format(i, 20000, int(acc*100)))\n\nprint \"test accuracy %g\" % accuracy.eval(feed_dict={\n    x: data.test.images, y_: data.test.labels, keep_prob: 1.0})\n
\n", + "system": "" + }, + { + "instruction": "How to simulate network from tutorial", + "input": "", + "output": "

In TensorFlow, every node in the graph can be evaluated. So, similarly to how you evaluate the accuracy, you can evaluate any other node in the graph. The entire neural network is represented by its last layer, in your case y_conv. I don't have TensorFlow handy, but something along the lines of the following should work:

\n\n
y_conv.eval(feed_dict={x: mnist.test.images[0:1]})  # slice keeps the batch dimension\n
\n", + "system": "" + }, + { + "instruction": "Tensor Flow shuffle_batch() blocks at end of epoch", + "input": "", + "output": "

Here's the code that I eventually got to work, although with a bunch of warnings that elements I enqueued were cancelled.

\n\n
lv = tf.constant(label_list)\n\nlabel_fifo = tf.FIFOQueue(len(filenames),tf.int32,shapes=[[]])\n# if eval_data:\n    # num_epochs = 1\n# else:\n    # num_epochs = None\nfile_fifo = tf.train.string_input_producer(filenames, shuffle=False, capacity=len(filenames))\nlabel_enqueue = label_fifo.enqueue_many([lv])\n\n\nreader = tf.WholeFileReader()\nresult.key, value = reader.read(file_fifo)\nimage = tf.image.decode_jpeg(value, channels=3)\nimage.set_shape([128,128,3])\nresult.uint8image = image\nresult.label = label_fifo.dequeue()\n\nimages, label_batch = tf.train.shuffle_batch(\n  [result.uint8image, result.label],\n  batch_size=FLAGS.batch_size,\n  num_threads=num_preprocess_threads,\n  capacity=FLAGS.min_queue_size + 3 * FLAGS.batch_size,\n  min_after_dequeue=FLAGS.min_queue_size)\n\n#in eval file:\nlabel_enqueue, images, labels = load_input.inputs()\n#restore from checkpoint in between\ncoord = tf.train.Coordinator()\ntry:\n  threads = []\n  for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):\n    threads.extend(qr.create_threads(sess, coord=coord, daemon=True,\n                                     start=True))\n\n  num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))\n  true_count = 0  # Counts the number of correct predictions.\n  total_sample_count = num_iter * FLAGS.batch_size\n\n  sess.run(label_enqueue)\n  step = 0\n  while step < num_iter and not coord.should_stop():\n    end_epoch = False\n    if step > 0:\n        for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):\n            #check if not enough elements in queue\n            size = qr._queue.size().eval()\n            if size - FLAGS.batch_size < FLAGS.min_queue_size:\n                end_epoch = True\n    if end_epoch:\n        #enqueue more so that we can finish\n        sess.run(label_enqueue)\n    #actually run step\n    predictions = sess.run([top_k_op])\n
\n", + "system": "" + }, + { + "instruction": "How to get predicted class labels in TensorFlow's MNIST example?", + "input": "", + "output": "

I think you just need to evaluate your output-tensor as stated in the tutorial:

\n\n
accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\nprint(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))\n
\n\n

To get the output of a tensor see the docs:

\n\n
\n

After the graph has been launched in a session, the value of the Tensor can be computed by passing it to Session.run(). t.eval() is a shortcut for calling tf.get_default_session().run(t).

\n
\n\n

If you want to get predictions rather than accuracy, you need to evaluate your output tensor y in the same way:

\n\n
print(sess.run(y, feed_dict={x: mnist.test.images}))\n
\n", + "system": "" + }, + { + "instruction": "Eigendecomposition in tensorflow from python API", + "input": "", + "output": "

tf.self_adjoint_eig and tf.batch_self_adjoint_eig are all I see in the API:\nhttps://www.tensorflow.org/versions/master/api_docs/python/math_ops.html

\n", + "system": "" + }, + { + "instruction": "Using TensorFlow to classify Handwritten Digits without validation labels", + "input": "", + "output": "

Assuming you've followed the MNIST for ML Beginners Tutorial, to get a simple prediction, add an argmax node like so:

\n\n
prediction = tf.argmax(y, 1)\n
\n\n

And then run it, feeding in the data you'd like to classify:

\n\n
prediction_val = sess.run(prediction, feed_dict={x: dataset.images})\n
\n\n

prediction_val will be of shape (batch_size,) and contains the likeliest label for each image in the batch.

\n\n

Note that the feed_dict only includes x and not y_ because prediction does not depend on y_.

\n", + "system": "" + }, + { + "instruction": "Get a simple MLP in TensorFlow to model XOR", + "input": "", + "output": "

(0) It's helpful to include the error output - it's also a useful thing to look at, because it does identify exactly where you were having shape problems.

\n\n

(1) The shape errors arose because you have the arguments to matmul backwards in both of your matmuls, and the tf.Variable shapes transposed. The general rule is that the weights for a layer with input_size inputs and output_size outputs should have shape [input_size, output_size], and the matmul should be tf.matmul(input_to_layer, weights_for_layer) (and then add the biases, which are of shape [output_size]).

\n\n

So with your code,

\n\n
W_hidden = tf.Variable(tf.random_uniform([hidden_nodes, 2], -1.0, 1.0))\n
\n\n

should be:

\n\n
W_hidden = tf.Variable(tf.random_uniform([2, hidden_nodes], -1.0, 1.0))\n
\n\n

and

\n\n
hidden = tf.sigmoid(tf.matmul(W_hidden, n_input) + b_hidden)\n
\n\n

should be tf.matmul(n_input, W_hidden); and

\n\n
output = tf.sigmoid(tf.matmul(W_output, hidden))\n
\n\n

should be tf.matmul(hidden, W_output)

\n\n
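The [input_size, output_size] rule can be sketched with plain shape arithmetic (pure Python, no TensorFlow; the batch size 4 is made up for illustration):

```python
def matmul_shape(a, b):
    # [m, k] x [k, n] -> [m, n]; the inner dimensions must agree.
    assert a[1] == b[0], "inner dimensions must match"
    return [a[0], b[1]]

batch, hidden_nodes = 4, 5
n_input = [batch, 2]              # 2 input features (e.g. XOR)
W_hidden = [2, hidden_nodes]      # [input_size, output_size]
W_output = [hidden_nodes, 1]

hidden = matmul_shape(n_input, W_hidden)   # [4, 5]
output = matmul_shape(hidden, W_output)    # [4, 1]
print(hidden, output)
```

Swapping either matmul's arguments makes the inner dimensions disagree, which is exactly the shape error described above.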

(2) Once you've fixed those bugs, your run needs to be fed a feed_dict:

\n\n
sess.run(train)\n
\n\n

should be:

\n\n
sess.run(train, feed_dict={n_input: input_data}) \n
\n\n

At least, I presume that this is what you're trying to achieve.

\n", + "system": "" + }, + { + "instruction": "In Tensorflow, how do I generate a scalar summary?", + "input": "", + "output": "

The feed_dict should contain the same values that you use for running the training_op. It basically specifies the input values to your network for which you want to calculate the summaries.

\n", + "system": "" + }, + { + "instruction": "TensorFlow InvalidArgumentError when reading record for training on own image set", + "input": "", + "output": "

Moving Alexander's comment-answer to a real answer:

\n\n

1.) resize image into 1024x768

\n\n

2.) resize it again into IMAGE_HIGHT = 192, IMAGE_WIDTH = 256 (as you can see, it's just divided by 4).

\n\n

OP used this code snippet.

\n\n
im = Image.open(source_folders[i]).resize((1024,768)) \nim = im.resize((256,192),Image.ANTIALIAS)\n
\n", + "system": "" + }, + { + "instruction": "How do I use TensorFlow to add Predicted Value to an empty column in a CSV file?", + "input": "", + "output": "

The program above doesn't appear to be saving the trained session. I think you want to do this in two steps.

\n\n
    \n
  1. Train and save the session
  2. \n
  3. Restore the saved session, and run test data through it.
  4. \n
\n\n

Step 1:

\n\n
 #!/usr/bin/env python\n\n import tensorflow as tf\n import numpy as np\n from numpy import genfromtxt\n import sklearn\n\n # Convert to one hot\n def convertOneHot(data):\n     y=np.array([int(i[0]) for i in data])\n     y_onehot=[0]*len(y)\n     for i,j in enumerate(y):\n         y_onehot[i]=[0]*(y.max() + 1)\n         y_onehot[i][j]=1\n     return (y,y_onehot)\n\n # Build Example Data is CSV format, but use Iris data\n from sklearn import datasets\n from sklearn.model_selection import train_test_split\n def buildDataFromIris():\n     iris = datasets.load_iris()\n     X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.33, random_state=42)\n     f=open('cs-training.csv','w')\n     for i,j in enumerate(X_train):\n         k=np.append(np.array(y_train[i]),j   )\n         f.write(\",\".join([str(s) for s in k]) + '\\n')\n     f.close()\n     f=open('cs-test.csv','w')\n     for i,j in enumerate(X_test):\n         k=np.append(np.array(y_test[i]),j   )\n         f.write(\",\".join([str(s) for s in k]) + '\\n')\n     f.close()\n\n # Recreate logging and save dir\n # Seems the tensorflow won't always overwrite\n import shutil, os, sys\n TMPDir='./tensorTMP'\n try:\n  shutil.rmtree(TMPDir)\n except:\n  print \"Tmp Dir did not exist...that's okay\"\n os.mkdir(TMPDir, 0755 )\n\n\n\n # Populate the data\n buildDataFromIris()\n\n data = genfromtxt('cs-training.csv',delimiter=',')  # Training data\n test_data = genfromtxt('cs-test.csv',delimiter=',')  # Test data\n\n x_train=np.array([ i[1::] for i in data])\n y_train,y_train_onehot = convertOneHot(data)\n\n x_test=np.array([ i[1::] for i in test_data])\n y_test,y_test_onehot = convertOneHot(test_data)\n\n A=data.shape[1]-1 # Number of features, Note first is y\n B=len(y_train_onehot[0])\n tf_in = tf.placeholder(\"float\", [None, A]) # Features\n tf_weight = tf.Variable(tf.zeros([A,B]))\n tf_bias = tf.Variable(tf.zeros([B]))\n tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + 
tf_bias)\n\n # Training via backpropagation\n tf_softmax_correct = tf.placeholder(\"float\", [None,B])\n tf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))\n\n # Train using tf.train.GradientDescentOptimizer\n tf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)\n\n # Add accuracy checking nodes\n tf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))\n tf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, \"float\"))\n\n saver = tf.train.Saver([tf_weight,tf_bias])\n\n # Initialize and run\n init = tf.initialize_all_variables()\n sess = tf.Session()\n sess.run(init)\n\n THRESHOLD = 0.98\n saved = False\n print(\"...\")\n # Run the training\n for i in range(100):\n     sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})\n     result = sess.run(tf_accuracy, feed_dict={tf_in: x_test, tf_softmax_correct: y_test_onehot})\n     # If it's well trained on this iteration, save it. We just need one save.\n     if result > THRESHOLD  and saved == False:\n         saved = True\n         print \"saving result {}\".format(result)\n         saver.save(sess,TMPDir +\"/savedSess\")\n
\n\n

The only modifications made were generating the sample data from Iris and establishing a THRESHOLD, or confidence level, for the session: if the accuracy exceeds that THRESHOLD, the session is saved. After running step one, the model should be trained and saved.

\n\n

Step 2:

\n\n

Restore the saved session, and run the training data through it.

\n\n
 #!/usr/bin/env python\n\n import tensorflow as tf\n import numpy as np\n from numpy import genfromtxt\n import sklearn\n\n # Convert to one hot\n def convertOneHot(data):\n     y=np.array([int(i[0]) for i in data])\n     y_onehot=[0]*len(y)\n     for i,j in enumerate(y):\n         y_onehot[i]=[0]*(y.max() + 1)\n         y_onehot[i][j]=1\n     return (y,y_onehot)\n\n\n data = genfromtxt('cs-training.csv',delimiter=',')  # Training data\n test_data = genfromtxt('cs-test.csv',delimiter=',')  # Test data\n\n x_train=np.array([ i[1::] for i in data])\n y_train,y_train_onehot = convertOneHot(data)\n\n x_test=np.array([ i[1::] for i in test_data])\n y_test,y_test_onehot = convertOneHot(test_data)\n\n A=data.shape[1]-1 # Number of features, Note first is y\n B=len(y_train_onehot[0])\n tf_in = tf.placeholder(\"float\", [None, A]) # Features\n tf_weight = tf.Variable(tf.zeros([A,B]))\n tf_bias = tf.Variable(tf.zeros([B]))\n tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)\n\n # Training via backpropagation\n tf_softmax_correct = tf.placeholder(\"float\", [None,B])\n tf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))\n\n # Train using tf.train.GradientDescentOptimizer\n tf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)\n\n # Add accuracy checking nodes\n tf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))\n tf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, \"float\"))\n\n saver = tf.train.Saver([tf_weight,tf_bias])\n\n # Initialize and run\n init = tf.initialize_all_variables()\n sess = tf.Session()\n sess.run(init)\n\n TMPDir='./tensorTMP'\n saver.restore(sess, TMPDir + '/savedSess')\n ans = sess.run(tf_softmax, feed_dict={tf_in: x_test, tf_softmax_correct: y_test_onehot})\n\n print ans\n
\n\n

Note, your output will look like the following...

\n\n
[[  6.17585704e-02   8.63590300e-01   7.46511072e-02]\n[  9.98804331e-01   1.19561062e-03   3.25832108e-13]\n[  1.52018686e-07   4.49650863e-04   9.99550164e-01]\n
\n", + "system": "" + }, + { + "instruction": "How to retrieve the execution ordering of fetches?", + "input": "", + "output": "

A reasonable solution is to re-compute the topological sort in Python. It looks like the C++ implementation isn't exposed in the Python API. Please let me know if there are situations where this wouldn't work.

\n\n

Here's an example:

\n\n
import tensorflow as tf\nfrom toposort import toposort\n\n\nsess = tf.InteractiveSession()\n\nmatrix1=tf.constant([[3., 3.]])\nmatrix2=tf.constant([[2.], [2.]])\n\nsum = tf.add(matrix1, matrix2)\nproduct = tf.matmul(matrix1, matrix2)\nfinal = tf.mul(sum, product)\n\ng = sess.graph\ndeps = {}\n\nfor op in g.get_operations():\n    # op node\n    op_inputs = set()\n    op_inputs.update([t.name for t in op.inputs])\n    deps[op.name] = op_inputs\n\n    # tensor output node\n    for t in op.outputs:\n        deps[t.name]={op.name}\n
\n\n
\n\n
deps\n{u'Add': {u'Const:0', u'Const_1:0'},\n u'Add:0': {u'Add'},\n u'Const': set(),\n u'Const:0': {u'Const'},\n u'Const_1': set(),\n u'Const_1:0': {u'Const_1'},\n u'MatMul': {u'Const:0', u'Const_1:0'},\n u'MatMul:0': {u'MatMul'},\n u'Mul': {u'Add:0', u'MatMul:0'},\n u'Mul:0': {u'Mul'}}\n\nlist(toposort(deps))\n[{u'Const', u'Const_1'},\n {u'Const:0', u'Const_1:0'},\n {u'Add', u'MatMul'},\n {u'Add:0', u'MatMul:0'},\n {u'Mul'},\n {u'Mul:0'}]\n
\n\n

Subsequently, we can manually step through evaluation of each node in the graph - subsequent calls to Session.run() involve passing in a feed_dict that accumulates the results of all previous inputs. This is pretty slow because of the constant shuffling between C++ and numpy data, and memory-intensive because we are caching output values of everything.

\n", + "system": "" + }, + { + "instruction": "not able to install tensorflow on server", + "input": "", + "output": "
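The topological sort can also be computed without the toposort dependency. Here is a minimal, dependency-free sketch of Kahn's algorithm over a deps mapping of the same shape as the one built above (the example graph and the helper name are illustrative, not from TensorFlow):

```python
def topo_levels(deps):
    # deps maps each node name to the set of node names it depends on.
    deps = {k: set(v) for k, v in deps.items()}
    levels = []
    while deps:
        # Nodes with no remaining dependencies form the next level.
        ready = {n for n, d in deps.items() if not d}
        if not ready:
            raise ValueError("cyclic dependency detected")
        levels.append(ready)
        # Remove the satisfied nodes from every remaining dependency set.
        deps = {n: d - ready for n, d in deps.items() if n not in ready}
    return levels

# Example dependency graph, same shape as the TensorFlow one above.
deps = {"Const": set(), "Const:0": {"Const"}, "Add": {"Const:0"}, "Add:0": {"Add"}}
print(topo_levels(deps))
# -> [{'Const'}, {'Const:0'}, {'Add'}, {'Add:0'}]
```

Each returned set is a level of nodes whose inputs are all available, matching the output of toposort above.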

Try building from source instead of using the precompiled binary version. That way you won't have to worry about the binary's glibc version being incompatible with the one installed on your system.

\n", + "system": "" + }, + { + "instruction": "Seq2Seq for prediction of complex states", + "input": "", + "output": "

May I suggest rephrasing and splitting your question into two parts? The first is really a general machine learning/LSTM question that's independent of TensorFlow: how to use an LSTM to predict when the sequence elements are general vectors. The second is how to represent this in TensorFlow. For the former, there's nothing really magical to do.

\n\n

But a very quick answer: You've really just skipped the embedding lookup part of seq2seq. You can feed dense tensors in to a suitably modified version of it -- your state is just a dense vector representation of the state. That's the same thing that comes out of an embedding lookup.

\n\n

The vector representation tutorial discusses the preprocessing that turns, e.g., words into embeddings for use in later parts of the learning pipeline.

\n\n

If you look at line 139 of seq2seq.py you'll see that the embedding_rnn_decoder takes in a 1-D batch of things to decode (the dimension is the number of elements in the batch), but then uses the embedding lookup to turn it into a batch_size * cell.input_size tensor. You want to directly input a batch_size * cell.input_size tensor into the RNN, skipping the embedding step.

\n", + "system": "" + }, + { + "instruction": "Starting TensorFlow on Docker on Google Cloud", + "input": "", + "output": "
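To make the equivalence concrete, here is a small numpy sketch (all names and shapes are illustrative) showing that an embedding lookup just maps symbol IDs to dense rows, so a batch of already-dense state vectors can take its place directly:

```python
import numpy as np

batch_size, vocab_size, input_size = 4, 10, 3

# What the embedding step does internally: turn a 1-D batch of symbol
# ids into a (batch_size x input_size) dense tensor via a lookup table.
embedding = np.random.randn(vocab_size, input_size)
symbol_ids = np.array([2, 7, 0, 5])
decoder_input = embedding[symbol_ids]          # shape (4, 3)

# If your states are already dense vectors, skip the lookup and feed
# them to the RNN cell directly -- same shape, no embedding needed.
dense_states = np.random.randn(batch_size, input_size)
assert decoder_input.shape == dense_states.shape == (4, 3)
```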

The docker run -it command brings up a bash shell in a container where TensorFlow is installed. Once you are at the root@2e87064f0743:/# prompt you can start an interactive TensorFlow session by starting ipython as the following example shows:

\n\n
$ docker run -it b.gcr.io/tensorflow/tensorflow\n\nroot@2e87064f0743:/# ipython\n\nPython 2.7.6 ...\nIn [1]: import tensorflow as tf\nIn [2]: c = tf.constant(5.0)\nIn [3]: sess = tf.InteractiveSession()\nI tensorflow/core/...\n\nIn [4]: c.eval()\nOut[4]: 5.0\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow: data source can be dynamic?", + "input": "", + "output": "

The easiest way to provide data from a dynamic source is to put one or more tf.placeholder() ops in your graph to represent the input tensors, and use the feed mechanism to supply different values for those tensors each time you call run(). If you do this, it is possible to write arbitrary Python code to generate the input data, which could involve invoking a remote server, or, on the server side, handling an incoming request.

\n\n

If you're doing this in C++, the tensorflow::Session class offers the same ability to feed placeholder values, using the Session::Run() method. The fed values must be tensorflow::Tensor objects, which you can create by specifying a datatype and shape (list of dimensions). The Tensor objects have methods that allow you to access them as multi-dimensional arrays (like Tensor::scalar<T>(), Tensor::matrix<T>(), and Tensor::tensor<T, NDIMS>()) which allows you to fill in their values as follows:

\n\n
tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({2, 2}));\nt.matrix<float>()(0, 0) = 1.0;\nt.matrix<float>()(0, 1) = 0.0;\nt.matrix<float>()(1, 0) = 0.0;\nt.matrix<float>()(1, 1) = 1.0;\n
\n\n

You can also use all the methods of the Eigen Tensor library to build these values.

\n", + "system": "" + }, + { + "instruction": "Graph visualisaton is not showing in tensorboard for seq2seq model", + "input": "", + "output": "

It looks like this might be related to a bug where the graph visualization does not work in the firefox browser. Try using chrome or safari if possible.

\n\n

https://github.com/tensorflow/tensorflow/issues/650

\n", + "system": "" + }, + { + "instruction": "error while merging summaries for tensorboard", + "input": "", + "output": "\n\n

I had the same error.

\n\n

In my case, adding at least one tf.scalar_summary() before calling tf.merge_all_summaries() solved the problem.

\n\n

For example,

\n\n
cross_entropy = -tf.reduce_sum(y_*tf.log(y))\ntf.scalar_summary(\"cross_entropy\", cross_entropy)\nmerged_summary_op = tf.merge_all_summaries()\n
\n\n

I hope this snippet helps you.

\n", + "system": "" + }, + { + "instruction": "How to interpret TensorFlow's convolution filter and striding parameters?", + "input": "", + "output": "

According to issue #196, this part of the documentation is apparently wrong, and I think there is still a problem in dga's answer.

\n\n

It should be:

\n\n

floor((in_height+y_padding-filter_height)/y_stride) + 1,

\n\n\n", + "system": "" + }, + { + "instruction": "TensorFlow installation error on Mac", + "input": "", + "output": "
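The corrected formula can be sanity-checked with a small helper (the function and parameter names here are illustrative, not part of the TensorFlow API):

```python
import math

def conv_output_height(in_height, filter_height, y_stride, y_padding=0):
    # floor((in_height + y_padding - filter_height) / y_stride) + 1
    return math.floor((in_height + y_padding - filter_height) / y_stride) + 1

# A 28-row input with a 5-row filter, stride 1, and no padding gives 24 rows.
print(conv_output_height(28, 5, 1))  # -> 24
```

The same expression with width, x_padding, and x_stride gives the output width.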

I got it to install by following the virtualenv setup recommended by the documentation:

\n\n

Install python

\n\n
brew install python\n
\n\n

Install tools

\n\n
# On Mac:\n$ sudo easy_install pip  # If pip is not already installed\n$ sudo pip install --upgrade virtualenv\n
\n\n

Setup virtualenv in ~/tensorflow directory

\n\n
$ virtualenv --system-site-packages ~/tensorflow\n$ cd ~/tensorflow\n
\n\n

Activate virtualenv

\n\n
$ source bin/activate  # If using bash\n$ source bin/activate.csh  # If using csh\n(tensorflow)$  # Your prompt should change\n
\n\n

Install tensorflow (this is where I got the SSL error)

\n\n

Download tensorflow from google getting the latest version using curl

\n\n
curl -O https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n
\n\n

Install the package you downloaded

\n\n
pip install tensorflow-0.5.0-py2-none-any.whl\n
\n\n

You should be able to type python and try out tensorflow in a python terminal

\n", + "system": "" + }, + { + "instruction": "How to specify number of GPUs in Python interface?", + "input": "", + "output": "

A crude way to restrict it to use just GPU #0, for example, is to define this:

\n\n
export CUDA_VISIBLE_DEVICES=0\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow - Does data adjancency matter ? - MNIST example", + "input": "", + "output": "
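The same restriction can be applied from inside a Python script by setting the environment variable before TensorFlow is imported; note that setting it after the import has no effect. A minimal sketch:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before TensorFlow is imported,
# otherwise it has no effect on device enumeration.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # expose only GPU #0
# os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs (CPU only)

# import tensorflow as tf  # import *after* the variable is set
print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 0
```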

Depends on which mnist example you're looking at. convolutional.py runs a 5x5 spatial convolutional window across the image, which does take into account spatial correlation.

\n\n

The MNIST for beginners example that uses a simple weight matrix:

\n\n
W = tf.Variable(tf.zeros([784,10]))\nb = tf.Variable(tf.zeros([10]))\n
\n\n

does not. You could permute the order of the entries in each input point and not change anything, as long as you permute all inputs the same way.

\n\n

(There's a reason that convolutional approaches are winning for most image recognition applications -- spatial locality is useful. :)

\n", + "system": "" + }, + { + "instruction": "Installing TensorFlow in a Virtualenv on OSX", + "input": "", + "output": "

Solved.

\n\n

It's now installed and up and running.

\n\n

<$url_to_binary.whl> should be replaced by https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl

\n\n

I assume that was the file they meant.

\n", + "system": "" + }, + { + "instruction": "Are AlexNet weights available for Tensorflow?", + "input": "", + "output": "

I don't know of an existing one, but someone wrote a converter to import Caffe models into tensorflow, and you can find pre-trained Alexnet models for Caffe (also see the BVLC Model-Zoo). I can't promise it will work, but you can quite possibly glue those two together to get what you want.

\n", + "system": "" + }, + { + "instruction": "Can't understand shape(output) = (shape(value) - ksize + 1) / strides in TensorFlow docs", + "input": "", + "output": "

I think I figured out the problem. The two formulas become equivalent if you assume that the appropriate padding is already included in shape(value). But I still think the definition of the padding types have been swapped in the documentation. I created an issue to report that: https://github.com/tensorflow/tensorflow/issues/196

\n", + "system": "" + }, + { + "instruction": "TensorBoard doesn't find scalar statistics", + "input": "", + "output": "

Try calling tensorboard with an absolute logdir path.\nFor me, that at least made the graph appear. Unfortunately, summaries are still not shown.

\n", + "system": "" + }, + { + "instruction": ""No such file or directory" when starting convolutional.py script on tensorflow docker image", + "input": "", + "output": "

It looks like the error message is caused by trying to execute a script file (.../convolutional.py) that is inside the container, using the Python interpreter outside the container.

\n\n

First of all, follow the steps here to ensure that Docker is configured successfully on your Windows machine:

\n\n

http://docs.docker.com/engine/installation/windows/#using-docker-from-windows-command-prompt-cmd-exe

\n\n

Once you've successfully run the hello-world container, run the following command to start the TensorFlow container:

\n\n
docker run -it b.gcr.io/tensorflow/tensorflow\n
\n\n

(Note that, depending on your terminal, the previous step may or may not work. A common error is cannot enable tty mode on non tty input. In that case, run the following command to connect to the VM that is hosting the containers:

\n\n
docker-machine ssh default\n
\n\n

...then, at the resulting prompt, run the docker run command again.)

\n\n

At the resulting prompt, you should be able to run the script with the following command:

\n\n
python /usr/local/lib/python2.7/dist-packages/tensorflow/models/image/mnist/convolutional.py\n
\n", + "system": "" + }, + { + "instruction": "Any status update, on an iOS example for Tensorflow?", + "input": "", + "output": "

No updates yet, but rest assured it's not forgotten or being ignored.

\n\n

(Update 2016-01-01: This answer is still current; it's still being worked on.)

\n", + "system": "" + }, + { + "instruction": "Initialize string producer from CSV", + "input": "", + "output": "

This is an error due to the incompatible shapes of img1_path (a 0-D, or scalar, Tensor) and the expected string_tensor argument to tf.train.string_input_producer (a 1-D, or vector, Tensor).

\n\n

Fortunately, the solution is simple, using tf.expand_dims to convert img1_path to a (one-element) vector:

\n\n
# Convert img1_path from a scalar to a 1-element vector.\nimg1_path = tf.expand_dims(img1_path, 0)\n
\n\n

Keeping track of the shapes of tensors can be tricky, so TensorFlow provides the Tensor.get_shape() method, which you can call to get information about the shape of any tensor object. For example:

\n\n
print img1_path.get_shape()\n# ==> \"TensorShape([])\" (a 0-dimensional tensor, or scalar)\nimg1_path = tf.expand_dims(img1_path, 0)\nprint img1_path.get_shape()\n# ==> \"TensorShape([Dimension(1)])\" (a 1-dimensional tensor with length 1)\n
\n", + "system": "" + }, + { + "instruction": "Building TensorFlow: libpython2.7.so.1.0: cannot open shared object file", + "input": "", + "output": "

Would you be able to build the non-CUDA version of TensorFlow? It would tell us whether it is your Python environment that is somehow broken for TensorFlow.

\n", + "system": "" + }, + { + "instruction": "Tensorflow multi-class ML model issues", + "input": "", + "output": "

If you just want something up and running quickly for a Kaggle competition, I would suggest trying out the examples in TFLearn first. There are embedding_ops for one-hot encoding, examples for early stopping, custom decay, and, more importantly, the multi-class classification/regression you are encountering. Once you are more familiar with TensorFlow, it will be fairly easy for you to insert TensorFlow code to build the custom model you want (there are also examples for this).

\n", + "system": "" + }, + { + "instruction": "TensorFlow Indices are not valid (out of bounds)", + "input": "", + "output": "

I had this error as well, and I realized my mistake. If you have 10 classes, your label values should range between 0-9, inclusive. The error was reproduced on the TensorFlow CIFAR10 example, used with the SVHN dataset. Refer to the question and answer below.

\n\n

TensorFlow CIFAR10 Example

\n", + "system": "" + }, + { + "instruction": "tensorflow and bazel "undefined reference to" compilation error", + "input": "", + "output": "

Bazel requires all dependencies to be declared, so the TensorFlow library should be in your deps attribute. That does not appear to be the case in your target (in particular, the flag for the TensorFlow includes is out of place).

\n\n

After a quick look at TensorFlow build file I would say it needs the following deps attribute:

\n\n
 deps = [\n     \"//tensorflow/cc:cc_ops\",\n     \"//tensorflow/core:kernels\",\n     \"//tensorflow/core:tensorflow\",\n ],\n
\n\n

But I am really unfamiliar with TensorFlow itself.

\n\n

What's the deps attribute of your cc_binary?

\n", + "system": "" + }, + { + "instruction": "TensorFlow Tutorial on Sequence-to-Sequence Models: Running translate.py return CRC check failure", + "input": "", + "output": "

Try removing the files in /tmp:

\n\n\n\n

And then rerun \nbazel run -c opt <...>/models/rnn/translate/translate.py --data_dir [your_data_directory]

\n", + "system": "" + }, + { + "instruction": "Tensorflow multi-class ML model issues", + "input": "", + "output": "

If you just want something up and running quickly for a Kaggle competition, I would suggest trying out the examples in TFLearn first. There are embedding_ops for one-hot encoding, examples for early stopping, custom decay, and, more importantly, the multi-class classification/regression you are encountering. Once you are more familiar with TensorFlow, it will be fairly easy for you to insert TensorFlow code to build the custom model you want (there are also examples for this).

\n", + "system": "" + }, + { + "instruction": "Tensor Reshape No-op in RNN example", + "input": "", + "output": "

The semantics of reshape are similar to the one from numpy:\nhttp://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html

\n\n

It changes the tensor to have 2-dimensions and the second dimension should have self.state_size elements. E.g. if my tensor has 6 elements and I reshape it to [-1, 2], then the first dimension will have 6 / 2 = 3 elements.

\n", + "system": "" + }, + { + "instruction": "Building a graph with custom ops in tensorflow", + "input": "", + "output": "
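The -1 dimension inference works the same way in numpy, so it can be checked directly (state_size here is just a stand-in for self.state_size):

```python
import numpy as np

state_size = 2
t = np.arange(6)            # 6 elements: [0 1 2 3 4 5]

# [-1, state_size]: the -1 dimension is inferred as 6 / 2 = 3.
r = t.reshape(-1, state_size)
print(r.shape)  # -> (3, 2)
```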

After adding a user-defined op (in TensorFlow 0.6.0 or earlier), to use it in the Python interpreter you must reinstall from the source repository. The easiest way to do this is to build and install a PIP package using Bazel. (The unit test would pass because running bazel test would cause TensorFlow to be rebuilt, and the rebuilt version to be used when running the tests.)

\n\n

NOTE: This feature is experimental, and an improved workflow for adding user-defined ops is in development.

\n", + "system": "" + }, + { + "instruction": "Error running label_image with output_layer=pool_3", + "input": "", + "output": "

The issue here is that the TensorFlow code in the image recognition tutorial requires additional modification before the --output_layer=pool_3 option will work:

\n\n
\n

One can specify this by setting --output_layer=pool_3 in the C++ API example and then changing the output tensor handling.

\n
\n\n

To change the output tensor handling, you will need to modify the code below this line in label_image/main.cc. The PrintTopLabels() function calls GetTopLabels(), which takes a single 2-D (batch x classes) tensor—assumed to be the output of a tf.nn.softmax() containing a probability distribution for the labels in a batch of images—and builds a small TensorFlow graph using the tf.nn.top_k() op. The pool_3 layer outputs a four-dimensional (batch x height x width x depth) tensor, which will require additional processing.

\n\n

The additional processing has been left as an exercise for the reader. However, there are some things you could try:

\n\n\n\n

N.B. Since the documentation is better, I've provided links to the Python documentation, rather than the corresponding C++ API. You might find it (much!) easier to modify the Python code for Inception inference instead.

\n", + "system": "" + }, + { + "instruction": "Changing Tensorflow MNIST code with interactive session into session", + "input": "", + "output": "

There are just two things you need to change to make this work:

\n\n
    \n
  1. Initialise the variables before running the first training steps.

    \n\n
    init_op = tf.initialize_all_variables()\ninit_op.run()\nfor i in range(1000):\n    # \u2026\n
    \n\n

    This will fix the first error you are seeing, and is an important first step in any TensorFlow program that uses variables.

  2. Inline the bodies of train() and eval() in the with tf.Session() as sess: block. Your eval() function uses local variables from train() so the code is not valid Python as written. (Note that the sess.run() around train() and eval() is incorrect too\u2014those functions don't have a return value, so this is equivalent to calling sess.run(None), which will raise an error.)

  \n
\n", + "system": "" + }, + { + "instruction": "TensorFlow - optimization with normalization constraints", + "input": "", + "output": "

You can add a soft constraint to your loss: some_constant * (norm(w) - 1)^2, but as far as I know there are no functionalities specifically for constrained optimization.

\n", + "system": "" + }, + { + "instruction": "tensorflow: ValueError: setting an array element with a sequence", + "input": "", + "output": "
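As a sanity check of the penalty term itself, here is a minimal numpy sketch (the some_constant name and value are illustrative); in practice this quantity would be added to the TensorFlow loss using the corresponding ops:

```python
import numpy as np

def soft_norm_penalty(w, some_constant=10.0):
    # Penalize deviation of ||w|| from 1; add this to the main loss.
    return some_constant * (np.linalg.norm(w) - 1.0) ** 2

w = np.array([3.0, 4.0])      # ||w|| = 5
print(soft_norm_penalty(w))   # -> 10 * (5 - 1)^2 = 160.0
```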

This—not very helpful—error is raised when one of the values in the feed_dict argument to tf.Session.run() is a tf.Tensor object (in this case, the result of tf.reshape()).

\n\n

The values in feed_dict must be numpy arrays, or some value x that can be implicitly converted to a numpy array using numpy.array(x). tf.Tensor objects cannot be implicitly converted, because doing so might require a lot of work: instead you have to call sess.run(t) to convert a tensor t to a numpy array.

\n\n

As you noticed in your answer, using np.reshape(_y_, [-1, 1]) works, because it produces a numpy array (and because _y_ is a numpy array to begin with). In general, you should always prepare data to be fed using numpy and other pure-Python operations.

\n", + "system": "" + }, + { + "instruction": "Cannot import name random/multiarray in conda environment", + "input": "", + "output": "
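A minimal sketch of the fix described above (the _y_ data here is made up for illustration): prepare the feed value with numpy, not with a TensorFlow op, so that feed_dict receives a plain array:

```python
import numpy as np

_y_ = np.array([1.0, 2.0, 3.0])

# Wrong: tf.reshape(_y_, [-1, 1]) produces a tf.Tensor, which cannot
# be used as a feed_dict value. Prepare the feed with numpy instead:
feed_value = np.reshape(_y_, [-1, 1])
print(feed_value.shape)  # -> (3, 1)
```

feed_value can then be passed directly as the feed_dict value for the placeholder.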

There is an answer here.

\n\n

In short: that issue has something to do with the version of numpy, which is upgraded by another package for whatever reason. Try to specify the version: conda create -n NAME numpy=1.9.3 other_package.

\n\n

If that doesn't work, check if you have files in your working directory which names matches the names of some packages. For example, I had a similar problem after renaming numpy.py.txt (which is a sort of handmade cheatsheet) into just numpy.py and trying to import numpy within Python shell when I was in that directory.

\n", + "system": "" + }, + { + "instruction": "Tensorflow seq2seq weight sharing", + "input": "", + "output": "

Both functions operate on the same default graph and so can reuse the variables; check out the variable scopes tutorial and see if your variables are created with the reuse=True parameter.

\n\n

As a sanity check, try following snippet to list all variables in the default graph:

\n\n
[v.name for v in tf.get_default_graph().as_graph_def().node if v.op=='Variable']\n
\n", + "system": "" + }, + { + "instruction": "How to find duplicated elements in a 1D Tensor", + "input": "", + "output": "

You can do it using existing TensorFlow operations in a slightly roundabout way, by counting the unique items to create a dense set of indexes of the unique items, and then counting them using tf.unsorted_segment_sum. Once you have the count, select the items with > N using tf.greater, and gather them back into a dense list:

\n\n
import tensorflow as tf\n\na = tf.constant([8, 7, 8, 1, 3, 4, 5, 9, 5, 0, 5])\ninit = tf.initialize_all_variables()\n\nunique_a_vals, unique_idx = tf.unique(a)\ncount_a_unique = tf.unsorted_segment_sum(tf.ones_like(a),                   \n                                         unique_idx,                        \n                                         tf.shape(a)[0])                    \n\nmore_than_one = tf.greater(count_a_unique, 1)                               \nmore_than_one_idx = tf.squeeze(tf.where(more_than_one))                     \nmore_than_one_vals = tf.squeeze(tf.gather(unique_a_vals, more_than_one_idx))\n\n# If you want the original indexes:                                         \nnot_duplicated, _ = tf.listdiff(a, more_than_one_vals)                      \ndups_in_a, indexes_in_a = tf.listdiff(a, not_duplicated)                    \n\nwith tf.Session() as s:                                                     \n    s.run(init)                                                             \n    a, dupvals, dupidxes, dia = s.run([a, more_than_one_vals,                    \n                                  indexes_in_a, dups_in_a])                            \n    print \"Input: \", a                                                      \n    print \"Duplicate values: \", dupvals                                     \n    print \"Indexes of duplicates in a: \", dupidxes\n    print \"Dup vals with dups: \", dia\n
\n\n
\n

Input: [8 7 8 1 3 4 5 9 5 0 5]

\n \n

Duplicate values: [8 5]

\n \n

Indexes of duplicates in a: [ 0 2 6 8 10]

\n \n

Dup vals with dups: [8 8 5 5 5]

\n
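For comparison, the same result can be computed in numpy, which is handy for checking the TensorFlow output on small inputs:

```python
import numpy as np

a = np.array([8, 7, 8, 1, 3, 4, 5, 9, 5, 0, 5])

vals, counts = np.unique(a, return_counts=True)
dup_vals = vals[counts > 1]                  # values appearing more than once
dup_idx = np.where(np.isin(a, dup_vals))[0]  # their positions in a

print(dup_vals)  # -> [5 8]
print(dup_idx)   # -> [ 0  2  6  8 10]
```

Note that np.unique returns the duplicate values in sorted order, whereas the TensorFlow version above returns them in order of first appearance.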
\n", + "system": "" + }, + { + "instruction": "Location of tensorflow/models.. in Windows", + "input": "", + "output": "

If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py.

\n\n

If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source distribution, and classify_image.py isn't included in the binary distribution.

\n", + "system": "" + }, + { + "instruction": "Training from CSV file - use every train example per epoch", + "input": "", + "output": "

The queue should block at the end of the epoch. When that happens, you will know that you have exhausted the training set. More information in this related question: Tensor Flow shuffle_batch() blocks at end of epoch

\n", + "system": "" + }, + { + "instruction": "Tensorflow : one hot encoding", + "input": "", + "output": "

One way to achieve it is to compute the max of each row and then compare each element to that value. I don't have TensorFlow installed on this machine, so I can't provide the exact code, but it will be along these lines:

\n\n
z1 = tf.equal(t, tf.reduce_max(t, reduction_indices=[1], keep_dims=True))\n
\n", + "system": "" + }, + { + "instruction": "C++ - Deleting an object before returning it", + "input": "", + "output": "
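The same row-max comparison can be verified in numpy (the example matrix is made up):

```python
import numpy as np

t = np.array([[0.1, 0.7, 0.2],
              [0.6, 0.3, 0.1]])

# Compare each element against its row maximum, mirroring the
# tf.equal / tf.reduce_max snippet above.
z1 = (t == t.max(axis=1, keepdims=True)).astype(int)
print(z1)
# -> [[0 1 0]
#     [1 0 0]]
```

Note that ties (two equal maxima in a row) would produce more than one 1 per row.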

The constructor of OpDef sets the initialized member to false. However, the function object passed to Register is only called when this member is true. That is, the registration doesn't call the function deleting the object.

\n\n

Supposedly, the function gets executed later, at which point the object gets cleaned up. This practice looks somewhat questionable to me, but without digging a lot deeper there doesn't seem to be an obvious error.

\n", + "system": "" + }, + { + "instruction": "Caffe and TensorFlow Protobuf - Maintain different Version", + "input": "", + "output": "

Note that your virtualenv doesn't see packages from outside, so you should not have protobuf visible in your virtualenv at all, even though you have it installed globally in the system (or in the Caffe's virtualenv, depending on your setup).

\n\n

It should be safe for you to run pip install protobuf inside the TensorFlow's virtualenv, it will not disrupt your global setup or any other virtualenv you have.

\n", + "system": "" + }, + { + "instruction": "Get original value of Tensor in Tensorflow", + "input": "", + "output": "

The simplest way to see the values of your tensors is to create a tf.Session and use Session.run to evaluate the tensors. Thus, your code would look like:

\n\n
import tensorflow as tf\n\nC_int = tf.constant(134)\nV_mat = tf.Variable(tf.zeros([2,3]))\nC_str = tf.constant(\"ABC\")\n\nsess = tf.Session()\nsess.run(C_int)\n#134\nsess.run(V_mat)\n#array([[ 0.,  0.,  0.],\n#       [ 0.,  0.,  0.]])\nsess.run(C_str)\n#ABC\n\nsess.close()\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Image Recognition tutorial seemingly out of date?", + "input": "", + "output": "

The 'classify_image.py' isn't in the 0.5.0 released version which you have installed, but you can find the file you want here. I think this is because the tutorial is up to date and the file was only uploaded to GitHub two days ago. Installing from GitHub may be a better option for you, because it is always up to date.

\n", + "system": "" + }, + { + "instruction": "How to represent a linear data in TensorFlow", + "input": "", + "output": "

Recurrent neural networks (RNNs) are a possible representation for sequential data like a stream of energy pulses. The TensorFlow website has a tutorial on building an RNN for predicting the next word in a sentence of words, but this could possibly be adapted to predicting the next value in your scenario.

\n", + "system": "" + }, + { + "instruction": "tutorials_example_trainer fails in debug mode (-c dbg)", + "input": "", + "output": "

You can workaround the problem by editing

\n\n

tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceType.h

\n\n

and commenting out the following 2 lines of code:

\n\n

static tensorflow::mutex m_devicePropInitMutex(tensorflow::LINKER_INITIALIZED);

\n\n

and

\n\n

tensorflow::mutex_lock l(m_devicePropInitMutex);

\n\n

I'll push a proper fix to the tensorflow repository shortly.

\n", + "system": "" + }, + { + "instruction": "Does Tensorflow rerun for each eval() call?", + "input": "", + "output": "

When you call Tensor.eval(), TensorFlow (i) works out what subgraph of the whole graph it needs to run to produce the value of that tensor, then (ii) runs that entire subgraph.

\n\n

It is often more efficient to use Session.run() to fetch the values of multiple tensors at once. For example, you could rewrite your code as follows to run the graph once:

\n\n
with tf.Session() as sess:\n    val1, val2 = sess.run([var1, var2], {x:myInputs, y:myOutputs})\n    print \"var1 =\", val1\n    print \"var2 =\", val2\n
\n", + "system": "" + }, + { + "instruction": "Tesnor Flow unsupported opperand", + "input": "", + "output": "

Are you running with the released version of tensorflow and using a post-release model, by chance? This sounds a lot like Github issue 293. My suggestion would be to either: (a) Update your install; (b) Try removing the from __future__ import division from the top of the file; or (c) changing the line to invoke the underlying log_perps = tf.div(log_perps, total_size) function directly.

\n\n

(b) or (c) is the fastest fix, but in the long run, I'd go with (a).

\n", + "system": "" + }, + { + "instruction": "Is it possible to plot more than two scalars in the same plot in tensorboard?", + "input": "", + "output": "

The github documentation for TensorBoard says that it only overlaps plots from different runs with the same tag name, so this won't work. If you'd like this feature please file a bug.

\n", + "system": "" + }, + { + "instruction": "Why some example in tensor flow tutorial need to be compiled?", + "input": "", + "output": "

Because Python calls into C code using SWIG, and that code needs to be compiled.

\n", + "system": "" + }, + { + "instruction": "tensorflow rnn model path", + "input": "", + "output": "

The demo code probably did not include the ability to save a model; you might want to explicitly use tf.train.Saver to save and restore variables to and from checkpoints.

\n\n

See doc and examples.

\n\n

It's pretty straightforward according to the doc. In the example below, I saved all the variables in the model. Instead, you can choose which variable(s) to save by following the examples.

\n\n
# ... \ntf.initialize_all_variables().run()\n####################################################\n# Add ops to save and restore all the variables.\n####################################################\nsaver = tf.train.Saver()\n\nfor i in range(config.max_max_epoch):\n  lr_decay = config.lr_decay ** max(i - config.max_epoch, 0.0)\n  m.assign_lr(session, config.learning_rate * lr_decay)\n\n  print(\"Epoch: %d Learning rate: %.3f\" % (i + 1, session.run(m.lr)))\n  train_perplexity = run_epoch(session, m, train_data, m.train_op,\n                               verbose=True)\n  print(\"Epoch: %d Train Perplexity: %.3f\" % (i + 1, train_perplexity))\n  valid_perplexity = run_epoch(session, mvalid, valid_data, tf.no_op())\n  print(\"Epoch: %d Valid Perplexity: %.3f\" % (i + 1, valid_perplexity))\n\n  ####################################################\n  # Save the variables to disk.\n  ####################################################\n  save_path = saver.save(session, \"/tmp/model.epoch.%03d.ckpt\" % (i + 1))\n  print(\"Model saved in file: %s\" % save_path)\n  # ....\n
\n\n

In my case, each checkpoint file has a disk size of 18.61M (--model small).

\n\n

Regarding how to use the model, just follow the doc to restore the checkpoints from saved files. Then it's at your will how to use it.

\n", + "system": "" + }, + { + "instruction": "Why does this variant of the XOR function not always converge?", + "input": "", + "output": "

The way you're computing your error is letting your network fall into local minima too easily. I suspect it's because the l1 norm of the xor function has too many equal-weight poor solutions when moving from an existing solution. (But I'm not positive - an ML expert can give you a more precise answer here. I'm just a systems schmoo.)

\n\n

Easy fix: replace these lines:

\n\n
cross_entropy = tf.abs(tf.sub(y_input, y))\nloss = tf.reduce_mean(cross_entropy)\n
\n\n

with:

\n\n
loss = tf.nn.l2_loss(y_input - y)\n
\n", + "system": "" + }, + { + "instruction": "Counter on GPU insanely slow compared to default counter?", + "input": "", + "output": "

In some sense, both programs are \"surprisingly\" slow, compared to the number of instructions that must be executed. The single-element counter is performing 200,000 increment instructions, using 200,000 calls to sess.run(), in 14.4 seconds. The vector counter is performing 100,000,000 increment instructions, using 10,000 calls to sess.run(), in 0.99 seconds. If you wrote these programs in C, you would expect to find that each counter increment takes a few nanoseconds at most, so where is the time being spent?

\n\n

TensorFlow imposes some per-step overhead, on the order of a few microseconds per call to Session.run(). This is a known issue, and it is something the team is trying to reduce, but it is rarely a concern for most of the neural network algorithms that one would typically run in a single step.

\n\n\n\n

There are a few things you could try to speed up both versions of the code, chief among them batching more work into each call to Session.run().

\n\n\n", + "system": "" + }, + { + "instruction": "python - tensorflow execution on Ubuntu server with CUDA GeForce9600GT", + "input": "", + "output": "

TensorFlow currently requires CUDA toolkit 7.0 and cuDNN.

\n

cuDNN requires a cc 3.0 GPU, and CUDA toolkit 7.0 requires a cc 2.0 GPU.

\n

Your 9600GT does not satisfy these requirements:

\n
\n

In order to build or run TensorFlow with GPU support, both Cuda Toolkit 7.0 and CUDNN 6.5 V2 from NVIDIA need to be installed.

\n

TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.5.

\n
\n

So if you want to use TensorFlow with GPU support, you will need a cc3.5 or higher GPU, and follow the steps to install the needed support software correctly. Alternatively, you could install TensorFlow without GPU support.

\n", + "system": "" + }, + { + "instruction": "undefined symbol: PyUnicodeUCS4_FromStringAndSize with tensorflow on heroku", + "input": "", + "output": "

This issue is caused by your Python binary having an incompatible Unicode definition (UCS2) from the one assumed by the TensorFlow binary (UCS4).

\n\n

At present, the best solution is to build TensorFlow from source. The latest version does not use Unicode strings in the native extension, so this problem should not arise. When updated binaries are available, they will include this fix.

\n\n

If you can't build from source on the target machine (for example when using Heroku), one option is to build your own PIP package on a different machine (with the same architecture), and add that to your requirements.txt instead.

\n", + "system": "" + }, + { + "instruction": "Recovering probability distribution from binary observations - what are the reasons for the defects of this implementation?", + "input": "", + "output": "

It's over-fitting on the left and under-fitting on the right.

\n\n

Because of the small random biases your hidden units all get near-zero activation near x=0, and because of the asymmetry and large range of the x values, most of the hidden units are saturated out around x = 10.

\n\n

The gradients can't flow through saturated units, so they all get used up to overfit the values they can feel, near zero.

\n\n

I think centering the data on x=0 will help.\nTry reducing the weight-initialization-variance, and/or increasing the bias-initialization-variance (or equivalently, reducing the range of the data to a smaller region, like [-1,1]).

\n\n

You would get the same problem if you used RBFs and initialized them all near zero. With the linear-sigmoid units, the second layer is using pairs of linear-sigmoids to make RBFs.

\n", + "system": "" + }, + { + "instruction": "Tensor Flow installation on RHEL 7", + "input": "", + "output": "

I had the same issue. Your six package needs to be updated. Try:

\n\n
easy_install --upgrade six\n
\n\n

Note: I am doing this from within a virtual environment and I do have internet.

\n", + "system": "" + }, + { + "instruction": "TensorFlow IOError: [Errno socket error] [Errno 104] Connection reset by peer", + "input": "", + "output": "

Python was not able to download the MNIST dataset from lecun.com. First, check to make sure you can browse Yann LeCun's MNIST page from that computer. If you can't, it may be a firewall or Internet connectivity issue. If you can, try running the download again in a few minutes - I've seen this error transiently a few times since the release of Tensorflow and it's always gone away within 5 minutes.

\n", + "system": "" + }, + { + "instruction": "Reshaping a Tensor in TensorFlow with scalar tensors", + "input": "", + "output": "

Take a look at this:

\n\n
import tensorflow as tf\n\na, b, c = 2, 3, 4\nx = tf.Variable(tf.random_normal([a, b, c], mean=0.0, stddev=1.0, dtype=tf.float32))\ns = tf.shape(x)\n\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\nv1, v2, v3 = sess.run(s)\ny = tf.reshape(x, [v1 * v2, v3])\nshape = tf.shape(y)\n\nprint sess.run(y)\nprint sess.run(shape)\n
\n\n

I am getting the shape of the variable after its initialization and then using it later. Also take a look at this answer, as it deals with a similar thing.

\n", + "system": "" + }, + { + "instruction": "most efficient way to evaluate tensorflow Tensor on multiple input", + "input": "", + "output": "

I think the best way to do evaluation for multiple values is to put them in one batch and evaluate them in a single call to eval(). In your code you create 100 minibatches of size 200 -- if you want a fast evaluation, why not use a single batch of 100*200? In some cases that might cause memory problems, but I think that would be the first thing to try for speed.

\n", + "system": "" + }, + { + "instruction": "Interpreting Tensorflow/Tensorboard "subtraction" operation", + "input": "", + "output": "

After futzing around with this, my considered answer to my own question is, \"Yes, this is working as intended.\" The inputs to the nodes show only what the inputs are, not any particular relationships to the operation or the node or themselves; indeed, if one added a variable to itself in an operation node, the input variable would show up only once.

\n\n

This is not a design choice I would have made, but that does seem to be the intent.

\n\n

I still encourage others who may have more insight to comment or fully answer.

\n", + "system": "" + }, + { + "instruction": "Not seeing option of uploading graph file in Tensorboard using FireFox", + "input": "", + "output": "

Try using it on Chrome instead of Firefox.

\n\n

It isn't working on my Firefox either.

\n\n

See this discussion for more info:
\nIs anyone else having trouble viewing the tensorboard/graph tab in firefox?

\n", + "system": "" + }, + { + "instruction": "TensorFlow no attribute 'make_template'", + "input": "", + "output": "

As noted in the comments, this issue was due to a mismatch between the installed version of TensorFlow (0.5.0) and the downloaded source (0.6.0).

\n\n

To upgrade to the latest development version of TensorFlow, follow the instructions to install from source, then build and install the PIP package based on that source.

\n", + "system": "" + }, + { + "instruction": "Tensor Flow softmax Regression Always Predicts 1", + "input": "", + "output": "

You seem to be predicting a single scalar, rather than a vector. The softmax op produces a vector-valued prediction for each example. This vector must always sum to 1. When the vector only contains one element, that element must always be 1. If you want to use a softmax for this problem, you could use [1, 0] as the output target where you are currently using [0] and use [0, 1] where you are currently using [1]. Another option is you could keep using just one number, but change the output layer to sigmoid instead of softmax, and change the cost function to be the sigmoid-based cost function as well.

\n", + "system": "" + }, + { + "instruction": "Installing tensorflow causes an error on OS X?", + "input": "", + "output": "

If you are having permissions issues or conflicts with other installed libraries, the Virtualenv-based installation is the easiest way to get TensorFlow installed.

\n", + "system": "" + }, + { + "instruction": "How to evaluate lists with predicted lengths? (`tensorflow.nn.top_k` with array of `k`s from another model)", + "input": "", + "output": "

The length of the predicted list is indeed not differentiable. You need to add an extra softmax output to the model predicting the length of the list, or add many sigmoid outputs predicting which entries should be included.

\n\n

I wrote a paper about transcribing variable-length text sequences from images, and the appendix goes into a lot of detail with a worked example for how the math works:\nhttp://arxiv.org/abs/1312.6082

\n", + "system": "" + }, + { + "instruction": "Error in TensorFlow program", + "input": "", + "output": "

A protobuf error is usually an installation issue; run it in a virtualenv:

\n\n
# On Mac:\n$ sudo easy_install pip  # If pip is not already installed\n$ sudo pip install --upgrade virtualenv\nNext, set up a new virtualenv environment. To set it up in the directory ~/tensorflow, run:\n$ virtualenv --system-site-packages ~/tensorflow\n$ cd ~/tensorflow\nThen activate the virtualenv:\n$ source bin/activate  # If using bash\n$ source bin/activate.csh  # If using csh\n(tensorflow)$  # Your prompt should change\nInside the virtualenv, install TensorFlow:\n(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\nYou can then run your TensorFlow program like:\n(tensorflow)$ python tensorflow/models/image/mnist/convolutional.py\n\n# When you are done using TensorFlow:\n(tensorflow)$ deactivate  # Deactivate the virtualenv\n\n$  # Your prompt should change back\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow very slow on second run (Ubuntu)", + "input": "", + "output": "

To make sure it's obvious what happened and that this question is answered: This occurred because tensorflow was reading from /dev/random instead of /dev/urandom. On some systems, /dev/random can exhaust its supply of randomness and block until more is available, causing the slowdown. This has now been fixed in github. The fixes are included in release 0.6.0 and later.

\n", + "system": "" + }, + { + "instruction": "Tensorflow input format for Object Detection", + "input": "", + "output": "

The standard data format that TensorFlow uses is the Example protocol buffer, which has a generic notion of \"Feature\" that should support Caffe-style WindowData. The documentation has some information on this format, and the source code includes an example application for converting image data (the simple MNIST format) to this format, for use with the standard input pipeline.

\n\n

If you follow these steps, you would most likely store the image as a \"bytes\" feature, and add dense integer features, corresponding to the coordinates of the windows and the labels.

\n", + "system": "" + }, + { + "instruction": "Validation and Test Evaluations in TensorFlow Demos", + "input": "", + "output": "

My own answer: Yes, the code runs to a max number of steps, and fully_connected_feed.py is for the later demos.

\n", + "system": "" + }, + { + "instruction": "TensorFlow XOR code works fine with two dimensional target but not without?", + "input": "", + "output": "

TL;DR: For this to work, you should use

\n\n
loss = tf.nn.l2_loss(logits - y_input)\n
\n\n

...instead of tf.nn.softmax_cross_entropy_with_logits.

\n\n

The tf.nn.softmax_cross_entropy_with_logits operator expects the logits and labels inputs to be a matrix of size batch_size by num_classes. Each row of logits is an unscaled probability distribution across the classes; and each row of labels is a one-hot encoding of the true class for each example in the batch. If the inputs do not match these assumptions, the training process may diverge.

\n\n

In this code, the logits are batch_size by 1, which means that there is only a single class, and the softmax outputs a prediction of class 0 for all of the examples; the labels are not one-hot. If you look at the implementation of the operator, the backprop value for tf.nn.softmax_cross_entropy_with_logits is:

\n\n
// backprop: prob - labels, where\n//   prob = exp(logits - max_logits) / sum(exp(logits - max_logits))\n
\n\n

This will be [[1], [1], [1], [1]] - [[0], [1], [1], [0]] in every step, which clearly does not converge.

\n", + "system": "" + }, + { + "instruction": "tensorflow beginner tutorial - read_data_sets fails", + "input": "", + "output": "

This appears to be an issue with the latest version of Numpy. A recent change made it an error to treat a single-element array as a scalar for the purposes of indexing.

\n\n

I have made the relevant change to the upstream TensorFlow code, but in the meantime you can edit this line in input_data.py (L45) to be the following (adding [0] at the end of the line):

\n\n
return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]\n
\n", + "system": "" + }, + { + "instruction": "Implementing batch normalization with tensorflow", + "input": "", + "output": "

Rafal's comment gets at the core of the problem: You're not running the assign nodes. You might try using the batchnorm helper I posted in another answer - How could I use Batch Normalization in TensorFlow? - or you can force the assign to happen by adding with_dependencies, as he suggests.

\n\n

The general principle is that you should only count on a node being run if data or control dependencies flow \"through\" it. with_dependencies ensures that before the output op is used, the specified dependencies will have completed.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: Cannot import user_ops from my python file", + "input": "", + "output": "

Hmmm. Well, I uninstalled tensorflow, reinstalled from what I had just built, and what I wrote was suddenly recognized. I have seen this behavior twice in a row now, so an uninstall seems to be necessary. To sum up, the steps after adding my own op are:

\n\n
$ pip uninstall tensorflow\n$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package\n\n# To build with GPU support:\n$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package\n\n$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg\n\n# The name of the .whl file will depend on your platform.\n$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n
\n", + "system": "" + }, + { + "instruction": "Docker download Google's TensorFlow issue", + "input": "", + "output": "

If you are using docker-machine, you shouldn't have to tinker directly with the docker daemon profile.

\n

Use the --engine-env option when creating your VM instance for docker.
\nSee docker-machine create.

\n

Simply define $HOME/.bashrc (which will be read when you open your bash session, before doing an ssh to your VM):

\n
alias dm=docker-machine\nexport http_proxy=$HTTP_PROXY\nexport https_proxy=$HTTPS_PROXY\nexport NO_PROXY=$NO_PROXY\nexport no_proxy=$NO_PROXY\n\nalias dmcv='docker-machine create -d virtualbox --engine-env HTTP_PROXY=$http_proxy --engine-env HTTPS_PROXY=$https_proxy --engine-env http_proxy=$http_proxy --engine-env https_proxy=$https_proxy --engine-env NO_PROXY=$no_proxy --engine-env no_proxy=$no_proxy'\n\nalias d=docker\nalias dpsa='docker ps -a'\ndenv() { eval $(docker-machine env "$@"); }\nvbmctr() { eval $(VBoxManage controlvm $1 natpf1 "$1-$2-tcp,tcp,,$2,,$2"); eval $(VBoxManage controlvm $1 natpf1 "$1-$2-udp,udp,,$2,,$2"); }\n
\n

Make sure your htt(s)_proxy are defined with:

\n
http://username:password@proxy-server.com:port\n
\n

(note that it always starts with http:// even for https_proxy)

\n

Also make sure to define no_proxy:

\n
NO_PROXY=.company,.sock,localhost,127.0.0.1,::1,192.168.99.100,192.168.99.101,192.168.99.102,192.168.99.103,192.168.99.104\n
\n

(replace .company by your company extension)

\n

From there, you can do a:

\n
dmcv default\ndenv default\ndm ssh default\n
\n

The key here is the dmcv alias: it will create the VM with a /var/lib/boot2docker/profile already modified for you with proxy.

\n

Note that I always use both the uppercase and lowercase versions of those proxy variables, so that they are interpreted by different unix commands (like curl, wget, ...), which rely sometimes on lowercase and other times on uppercase variable names.

\n", + "system": "" + }, + { + "instruction": "Running translate on ipython kills the process", + "input": "", + "output": "

That's not ipython/tensorflow specific.

\n\n

Kills like that can come from the linux kernel if it decides, basically, that a process is using too much memory.

\n\n

Who "Killed" my process and why?

\n", + "system": "" + }, + { + "instruction": "Error downloading Tensorflow MNIST data", + "input": "", + "output": "

The syntax error is raised because input_data.py is in HTML format \u2014 notice the HTML tags in the error message \u2014 presumably because you downloaded the webpage that displays the content of the file from GitHub.

\n\n

Instead, download the raw contents of input_data.py from this link to get a valid Python file: https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/g3doc/tutorials/mnist/input_data.py

\n", + "system": "" + }, + { + "instruction": "Run training verbosely to check status if command", + "input": "", + "output": "

The RNN translation example does not have a specific \"verbose\" flag, but all of its modes produce output on stdout. The default mode is to train a model, which runs indefinitely, producing output (and a model checkpoint) every --steps_per_checkpoint=N steps.

\n\n

It is possible that Bazel is buffering standard output from the process, so you aren't able to see progress being made. Try running the built binary instead:

\n\n
$ bazel build tensorflow/models/rnn/translate:translate -c opt\n$ bazel-bin/tensorflow/models/rnn/translate/translate --data_dir /Users/Username/data/ --train_dir /Users/User/train/  --en_vocab_size=40000 --fr_vocab_size=40000 --size 256 --num_layers 2 --steps_per_checkpoint=50\n
\n", + "system": "" + }, + { + "instruction": "nvidia cuda 7.5 driver in tensorFlow are not properly handled (ubuntu 14.04)", + "input": "", + "output": "

The binaries published by Google need to find libcudart.so.7.0 in the library path; you just need to add it to LD_LIBRARY_PATH with something like

\n\n

export LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH:/home/olivier/digits-2.0/lib/cuda\"

\n\n

which you can put in your .bashrc.

\n", + "system": "" + }, + { + "instruction": "Using Tensorboard to graph from a log directory", + "input": "", + "output": "

You can create a SummaryWriter object, passing it the log directory, and call add_summary to log summaries and events to files in that directory. word2vec.py has an example. You can simply point TensorBoard at the log directory by passing it through --logdir and visualize the summaries.

\n", + "system": "" + }, + { + "instruction": "tensorflow MNIST fully_connected_feed.py fails: range() takes at least 2 arguments (1 given)", + "input": "", + "output": "

This issue arises because in the latest version of the TensorFlow source on GitHub, tf.range() has been updated to be more permissive with its arguments (previously it required two arguments; now it has the same semantics as Python's range() built-in function), and the fully_connected_feed.py example has been updated to exploit this.

\n\n

However, if you try to run this version against the binary distribution of TensorFlow, you will get this error because the change to tf.range() has not been incorporated into the binary package.

\n\n

The easiest solution is to download the old version of mnist.py. Alternatively, you could build from source to use the latest version of the tutorial.

\n", + "system": "" + }, + { + "instruction": "Raindrops Differential Equations Demo", + "input": "", + "output": "

The code in this tutorial was designed for pasting into IPython. If you run it in a notebook, the <IPython.core.display.Image object> strings will be replaced with an image of the current state of the simulation.

\n\n

An alternative would be to generate image files that contain the same data. For example, you could replace the following code:

\n\n
f = StringIO()\nPIL.Image.fromarray(a).save(f, fmt)\ndisplay(Image(data=f.getvalue()))\n
\n\n

...with some code that writes the image to the file:

\n\n
with open(\"/tmp/image.jpg\", \"w\") as f:\n  PIL.Image.fromarray(a).save(f, \"jpeg\")\n
\n\n

...and then open the file /tmp/image.jpg in your favorite image viewer.

\n", + "system": "" + }, + { + "instruction": "not able to import tensorflow inside anaconda python 2.7", + "input": "", + "output": "

The TensorFlow binary packages require that the installed version of glibc (the GNU C Library) is at least 2.17. It looks like your VM has an old version of glibc, which is causing this error when you try to load TensorFlow.

\n\n

Since you are using VMWare, can you try creating a VM with a Ubuntu 14.04 image? We have tested with this operating system, and it has the necessary libraries to run TensorFlow.

\n", + "system": "" + }, + { + "instruction": "writing a tf::transform object to a file", + "input": "", + "output": "

Implement the operator
\nI'm not sure of the contents of the transform struct in this case, but assuming it is:

\n\n
struct transform { float mat[16]; }\n
\n\n

Then the implementation can be something like:

\n\n
std::ostream& operator<< (std::ostream& os, const tf::transform& t)\n{\n  os << t.mat[0];\n  for(int i=1;i<16;++i) os << ',' << t.mat[i];\n  return os;\n}\n
\n", + "system": "" + }, + { + "instruction": "Getting pygame running in docker", + "input": "", + "output": "

You can try installing the security extras:

\n\n

https://stackoverflow.com/a/29202163/1703772

\n\n

Some questions:

\n\n\n\n

The fact that it reports no accessible versions of pygame is a red flag. You could try installing using a Windows installer: https://bitbucket.org/pygame/pygame/downloads (from https://scicomp.stackexchange.com/questions/2987/what-is-the-simplest-way-to-do-a-user-local-install-of-a-python-package)

\n\n

UPDATE:

\n\n

From my non-exhaustive searching, Windows + Docker + pip to install pygame will not work at this time. Installing using a Windows installer seems to be the solution accepted by many.

\n\n

Here are some solutions you can try:

\n\n\n\n

The current pygame on pypi is 1.7.1 and supports up to Windows 2000/NT: https://pypi.python.org/pypi/Pygame/1.7.1? - perhaps your windows version is too new.

\n\n

I hope this has answered your question.

\n", + "system": "" + }, + { + "instruction": "Tensorboard Graph Visualiyation Error using Python 3", + "input": "", + "output": "

tensorboard_handler.py uses StringIO, but then passes bytes to it. When working with Python 3, it should probably use io.BytesIO instead. This question is related: StringIO in python3.

\n\n

Unfortunately, TensorBoard does not yet support Python 3, but if you happen to fix this issue, send a pull request.

\n", + "system": "" + }, + { + "instruction": "TensorFlow failed to pip-install on RedHat 6", + "input": "", + "output": "

It seems you are missing the CBLAS libraries:

\n\n
/usr/bin/ld: cannot find -lcblas\n...\n distutils.errors.LinkError: Command \"cc /tmp/tmp72IZmg/tmp/tmp72IZmg/source.o -L/usr/lib64 -lblas -o /tmp/tmp72IZmg/a.out\" failed with exit status 1\n
\n\n

Try running yum install blas blas-devel.

\n", + "system": "" + }, + { + "instruction": "Should I consider x0, threshold when using a Tensorflow placeholder function?", + "input": "", + "output": "

There are two ways of thinking about x0. Either your input has an extra dimension, which always has 1 in it, and then a linear regression or a fully connected layer in a neural network will be represented as:

\n\n
out = W * in\n
\n\n

where * is matrix-vector multiplication. The more common alternative is to not add that extra dimension, and instead model it as

\n\n
out = W * in + b\n
\n\n

This is, in part, to highlight the difference between W, which is how we \"weight\" the input, and b, which is how much we \"shift\" it (b is called a \"bias\" term). One other reason why this representation is more desirable is because it is common to regularize W, but not b.

\n\n

Now, back to your question, TensorFlow neural network library models fully connected layer in terms of a weight matrix and a bias vector, therefore you do not need to add an extra one to your input vector.

\n\n

If you use low-level Tensor operations instead of the high-level predefined layers, then TensorFlow makes no assumptions about your input, and if you want to model your model in terms of operations on a vector with an extra 1 in it, it is your responsibility to add that 1 to that vector, TensorFlow will not do that for you.

\n", + "system": "" + }, + { + "instruction": "Why am I getting Docker error "C++ compilation of rule '//tensorflow/core:kernels' failed"?", + "input": "", + "output": "

I was having a similar issue with exit code 4 (which I believe is out of memory).

\n\n

The solution for me was to increase docker-machine to use 8GB. Something like:

\n\n

docker-machine create --driver virtualbox --virtualbox-memory 8192 --virtualbox-cpu-count 4 default

\n", + "system": "" + }, + { + "instruction": "Error in importing tensorflow", + "input": "", + "output": "

Did you install the tensorflow module in your virtualenv?

\n\n

After activating the virtualenv, try running the relevant pip command to install it:

\n\n
# Ubuntu/Linux 64-bit, CPU only:\n(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n\n# Ubuntu/Linux 64-bit, GPU enabled:\n(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl\n\n# Mac OS X, CPU only:\n(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl\n
\n", + "system": "" + }, + { + "instruction": "Two feedforward propagation needed for a single weight update in TensorFlow?", + "input": "", + "output": "

No need to go through twice. Check out the other examples, such as mnist/convolutional.py:

\n\n
_, l, lr, predictions = s.run(\n          [optimizer, loss, learning_rate, train_prediction],\n          feed_dict=feed_dict)\n
\n\n

You pull both nodes at the same time in run, and get both the training done and the train prediction at the same time. This is the standard way of training.

\n\n

In general, I'd suggest checking out the examples in models/ first. The \"red pill\" and \"blue pill\" examples are meant to be a very gentle introduction to tensorflow, but the examples in models are a bit more real. They're not production, but they're closer to what you'd want to do.

\n", + "system": "" + }, + { + "instruction": "How to run Recurrent Neural Networks sample programs is tensorflow?", + "input": "", + "output": "

To answer the question about the ptb data, you have to download and install the data as described on this page (because we cannot distribute it with our install):

\n\n

http://www.tensorflow.org/tutorials/recurrent/index.html

\n", + "system": "" + }, + { + "instruction": "Compiling tensorflow/models/rnn/translate:translate with local numpy", + "input": "", + "output": "

One possible solution is presented here.

\n\n

Create a link to $HOME/.local/lib/python2.7/site-packages/numpy/core/include/numpy in the tensorflow/third_party dir, and add the -Ithird_party include path to tensorflow/python/build and tensorflow/tensorflow.bzl.

\n", + "system": "" + }, + { + "instruction": "tensorflow error on running the seq2seq model", + "input": "", + "output": "

I think this is one of the problems that arises when the previous checkpoint was not saved properly. You can correct it in the following steps.

\n\n

1. You can delete all the checkpoint files and restart the training:

\n\n
rm checkpoint\nrm translate-ckpt-*\n
\n\n

Now, restart your training again.

\n\n

Alternatively, you can remove the latest checkpoint and start from the previous one.

\n\n

1. Go to the directory and delete the latest checkpoint; in this case:

\n\n
rm translate.ckpt-200\n
\n\n

2. Now edit the checkpoint file. You might see something like:

\n\n
model_checkpoint_path: \"data/translate.ckpt-200\"\nall_model_checkpoint_paths: \"data/translate.ckpt-170\"\nall_model_checkpoint_paths: \"data/translate.ckpt-180\"\nall_model_checkpoint_paths: \"data/translate.ckpt-190\"\nall_model_checkpoint_paths: \"data/translate.ckpt-200\"\n
\n\n

3. Remove the last line and point the checkpoint at the previous stage:

\n\n
model_checkpoint_path: \"data/translate.ckpt-190\"\nall_model_checkpoint_paths: \"data/translate.ckpt-170\"\nall_model_checkpoint_paths: \"data/translate.ckpt-180\"\nall_model_checkpoint_paths: \"data/translate.ckpt-190\"\n
\n\n

4. Restart your training.

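The manual edit in steps 2 and 3 can also be scripted. Below is a hypothetical helper (not part of TensorFlow) that assumes the plain-text checkpoint format shown above:

```python
# Hypothetical helper (NOT a TensorFlow API): given the text of a
# "checkpoint" index file, drop the newest entry and point
# model_checkpoint_path at the previous one.
def rollback_checkpoint(text):
    entries = [l for l in text.strip().split("\n")
               if l.startswith("all_model_checkpoint_paths")]
    paths = [l.split('"')[1] for l in entries]
    kept = paths[:-1]                      # remove the latest checkpoint
    out = ['model_checkpoint_path: "%s"' % kept[-1]]
    out += ['all_model_checkpoint_paths: "%s"' % p for p in kept]
    return "\n".join(out) + "\n"
```

Read the checkpoint file, pass its contents through this function, and write the result back.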
\n", + "system": "" + }, + { + "instruction": "Running seq2seq model error", + "input": "", + "output": "

It seems there are a few mistakes in the TensorFlow tutorial.\nI was able to run it by removing the .py extension and adding an extra -- before the options, like:

\n\n

bazel run -c opt tensorflow/models/rnn/translate/translate -- --data_dir /home/minsoo/tensorflowrnn/data

\n\n

The directory part should be changed according to your system.

\n", + "system": "" + }, + { + "instruction": "TensorFlow docker dev workflow on mac", + "input": "", + "output": "

You could mount a local directory into the Docker container so that you can still use your preferred editor in OS X. Here's a command to start the container with a mounted directory and run a command:

\n\n

docker run --name tensorflow --rm -v /Users/me/Code/web/tensorflow_dev:/tensorflow_dev b.gcr.io/tensorflow/tensorflow /bin/sh -c 'cd /tensorflow_dev && python mnist.py'

\n\n

-v will mount the local directory and the -c will run the specified command. So your flow might look like:

\n\n
    \n
  1. Edit python script in your favorite editor
  2. \n
  3. Run the above command to execute your script
  4. \n
\n\n
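The loop above can be collapsed into a small wrapper script (tfrun.sh, the mount target, and the image tag are just examples based on the docker command shown earlier, not part of any official workflow):

```shell
# Hedged sketch: wrap the long docker command in a helper script so the
# edit-then-run loop becomes a single command.
cat > tfrun.sh <<'EOF'
#!/bin/sh
docker run --rm -v "$PWD":/tensorflow_dev b.gcr.io/tensorflow/tensorflow \
    /bin/sh -c "cd /tensorflow_dev && python $1"
EOF
chmod +x tfrun.sh
```

Then ./tfrun.sh mnist.py re-runs the script after each edit.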

However, I actually use PyCharm so that I can place breakpoints and run the Python script interactively within the editor.

\n\n

Hope this helps.

\n", + "system": "" + }, + { + "instruction": "tensorflow is being installed in anaconda/lib/python2.7/site-packages", + "input": "", + "output": "

It seems you already have Anaconda installed. Anaconda already comes with its own environment management system (see here). Having two systems (virtualenv and Anaconda) seems to mess up your Python paths.

\n\n

If you would like to use conda environments, then you do not need to install virtualenv. Just create a new conda env for your tensorflow installation.

\n\n

If you still have troubles with your installation, you can have a look at my answer here. It's for Mac OS X, but the steps described should work on any system.

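As a sketch, the conda-based setup could be captured in a script (the env name is arbitrary, and the wheel URL is the CPU build from this era; adjust both for your setup):

```shell
# Hedged sketch: write the conda-based setup out as a script instead of
# running it here, since conda may not be installed on every machine.
cat > setup_tf_env.sh <<'EOF'
#!/bin/sh
conda create -y -n tensorflow python=2.7
source activate tensorflow
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
EOF
chmod +x setup_tf_env.sh
```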
\n", + "system": "" + }, + { + "instruction": "TensorFlow installation on Ubuntu", + "input": "", + "output": "

Try installing Ubuntu 14.04 in VMware and use the same command.

\n", + "system": "" + }, + { + "instruction": "TensorFlow installation on Ubuntu 14.04 LTS", + "input": "", + "output": "

Use superuser privileges for the installation:

\n\n

$ sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl

\n\n

This worked for me.

\n", + "system": "" + }, + { + "instruction": "Cannot import name NotFittedError on Ubuntu", + "input": "", + "output": "

EDIT: This has now been fixed in skflow. Upgrading to the latest version of skflow will fix the issue.

\n\n

The offending import is in skflow/estimators/base.py:

\n\n
from sklearn.utils.validation import NotFittedError\n
\n\n

It looks like this class was moved in a (relatively) recent commit to scikit-learn. It would probably be easiest to downgrade to a previous version of scikit-learn (e.g. the 0.17 release seems to be compatible). If you're feeling adventurous, you could try editing line 25 of \"build/bdist.linux-x86_64/egg/skflow/estimators/base.py\" to read:

\n\n
from sklearn.exceptions import NotFittedError\n
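If you'd rather not pin scikit-learn versions, a version-tolerant import along these lines should work (the final stub class is only there so this sketch runs even without scikit-learn installed):

```python
# Version-tolerant import sketch: try the new location first, then the old
# one; fall back to a stub so the example stays self-contained.
try:
    from sklearn.exceptions import NotFittedError          # scikit-learn >= 0.18
except ImportError:
    try:
        from sklearn.utils.validation import NotFittedError  # older releases
    except ImportError:
        class NotFittedError(ValueError):                    # stub if sklearn absent
            pass
```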
\n", + "system": "" } ] \ No newline at end of file