repository: stringclasses (156 values)
issue title: stringlengths (1–1.01k)
labels: stringclasses (8 values)
body: stringlengths (1–270k)
tensorflow/tensorflow
Training code utilizes more CPU memory than GPU memory
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from: source
- TensorFlow version: 1.13.1
- Python version: 2.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10
- GPU model and memory: NVIDIA RTX 2080

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
I am using TensorFlow 1.13.1 with a GPU (RTX 2080). I am using reinforcement learning to train a robot on the Gazebo platform with ROS. The issue is that training runs in CPU memory rather than GPU memory, so it takes a lot of time to train the robot. I attached a screenshot of nvidia-smi, which shows that only 1.7 GB of GPU memory is utilized while 16 GB of RAM is used.

Describe the expected behavior
[Screenshot 2019-06-28 20:53:03]

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
SelectV2 and Reciprocal in TFLite
Bug
System information
- OS platform and distribution: Ubuntu
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): tf-2.0-beta1

Provide the text output from tflite_convert:

    2019-06-28 15:22:19.759933: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: SelectV2
    2019-06-28 15:22:19.772940: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Reciprocal
    2019-06-28 15:22:19.772994: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: SelectV2
    2019-06-28 15:22:19.774404: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 138 operators, 202 arrays (0 quantized)
    2019-06-28 15:22:19.776339: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 138 operators, 202 arrays (0 quantized)
    2019-06-28 15:22:19.782519: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 94 operators, 172 arrays (0 quantized)
    2019-06-28 15:22:19.784430: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 93 operators, 170 arrays (0 quantized)
    2019-06-28 15:22:19.786303: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 93 operators, 170 arrays (0 quantized)
    2019-06-28 15:22:19.787673: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 93 operators, 170 arrays (0 quantized)
    2019-06-28 15:22:19.790494: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 33226752 bytes, theoretical optimal value: 22118464 bytes.
    2019-06-28 15:22:19.791230: W tensorflow/lite/toco/tflite/operator.cc:2654] Op 'SelectV2' is a valid TensorFlow op but has not been whitelisted for the TensorFlow Lite flex op set.
    2019-06-28 15:22:19.791295: W tensorflow/lite/toco/tflite/operator.cc:2654] Op 'SelectV2' is a valid TensorFlow op but has not been whitelisted for the TensorFlow Lite flex op set.
    2019-06-28 15:22:19.791385: W tensorflow/lite/toco/tflite/operator.cc:2654] Op 'SelectV2' is a valid TensorFlow op but has not been whitelisted for the TensorFlow Lite flex op set.
    2019-06-28 15:22:19.791436: W tensorflow/lite/toco/tflite/operator.cc:2654] Op 'SelectV2' is a valid TensorFlow op but has not been whitelisted for the TensorFlow Lite flex op set.
    2019-06-28 15:22:19.791510: E tensorflow/lite/toco/toco_tooling.cc:462] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at [link] and pasting the following: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONCATENATION, CONV_2D, DIV, EXPAND_DIMS, FULLY_CONNECTED, GREATER, LEAKY_RELU, MAXIMUM, MEAN, MINIMUM, MUL, NEG, POW, RESHAPE, RESIZE_BILINEAR, SUB, TANH. Here is a list of operators for which you will need custom implementations: SelectV2.

Any other info / logs
At the beginning I tried to convert without this magic line:

    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

and I got almost the same error log, but there were additional operations that couldn't be converted. I attach the end of that log:

    Here is a list of operators for which you will need custom implementations: Reciprocal, SelectV2.

As far as I understand correctly, tf.where doesn't work, and it is related to SelectV2, because when I remove the line with tf.where(x > 0, left, right) everything is okay. It's strange, because the TFLite docs say that tf.where should work. Also, I figured out that Reciprocal is linked to a 1.0/x expression.
tensorflow/tensorflow
Unexpected tf.cast behavior between signed and unsigned integers
Bug
It was tested on Google's own platform.

System information
- OS platform and distribution: Linux version 4.14.79 (chrome-bot@swarm-cros-634)
- TensorFlow installed from (source or binary): the platform is provided by Google
- TensorFlow version: 1.14.0-rc1 (also tested on my own machine with TF 1.13.1)
- Python version: 3.6.8

Describe the current behavior
Type casting (tf.cast) from tf.int32 to tf.uint32 will make the tensor become 0.

Describe the expected behavior
tf.cast should not change the bit representation of the values.

Code to reproduce the issue

    import tensorflow as tf
    c = tf.constant([5, 6, 7, 8, 9, 10], dtype=tf.int32)
    d = tf.constant([5, 6, 7, 8, 9, 10], dtype=tf.int32)
    x = tf.cast(c, dtype=tf.uint32)
    y = tf.cast(c, dtype=tf.uint32)
    with tf.Session() as sess:
        x_raw, y_raw = sess.run([x, y])
        print(x_raw.dtype, y_raw.dtype)
        print(x_raw)
        print(y_raw)
        print(tf.VERSION)

Running this code gives the result:

    uint32 uint32
    [0 0 0 0 0 0]
    [0 0 0 0 0 0]
    1.14.0-rc1
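For reference, the bit-preserving behavior the reporter expects is what NumPy does: casting int32 to uint32 reinterprets negative values modulo 2**32 rather than zeroing anything. A small NumPy sketch of the expected semantics (my own illustration with example values, not the reporter's TensorFlow code):

```python
import numpy as np

c = np.array([-5, 6, -7, 8], dtype=np.int32)

x = c.astype(np.uint32)   # value cast: negative values wrap mod 2**32
v = c.view(np.uint32)     # pure reinterpretation of the same bits

print(x)                  # [4294967291 6 4294967289 8]
# For same-width integer types, astype and view agree: the bit pattern
# is preserved, so nothing collapses to zero.
print(np.array_equal(x, v))  # True
```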
tensorflow/tensorflow
Cannot run official TF 2.0 example in official TF 2.0 Docker container
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): 18.04 LTS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): Docker container tensorflow/tensorflow:2.0.0b1-gpu-py3
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: RTX 2080 Ti

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
Running the official example in the official container tensorflow/tensorflow:2.0.0b1-gpu-py3 gets the following error:

    $ python advanced.py
    2019-06-27 17:09:34.195179: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
    2019-06-27 17:09:34.231111: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.231786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635 pciBusID: 0000:01:00.0
    2019-06-27 17:09:34.232026: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
    2019-06-27 17:09:34.232777: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
    2019-06-27 17:09:34.233391: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
    2019-06-27 17:09:34.233576: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
    2019-06-27 17:09:34.234366: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
    2019-06-27 17:09:34.235052: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
    2019-06-27 17:09:34.236947: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
    2019-06-27 17:09:34.237053: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.237643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.238201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
    2019-06-27 17:09:34.238443: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-06-27 17:09:34.323017: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.324202: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x39cfd60 executing computations on platform CUDA. Devices:
    2019-06-27 17:09:34.324217: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
    2019-06-27 17:09:34.342363: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3600000000 Hz
    2019-06-27 17:09:34.343364: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3d629c0 executing computations on platform Host. Devices:
    2019-06-27 17:09:34.343375: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0):
    2019-06-27 17:09:34.343534: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.344103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635 pciBusID: 0000:01:00.0
    2019-06-27 17:09:34.344121: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
    2019-06-27 17:09:34.344128: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
    2019-06-27 17:09:34.344135: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
    2019-06-27 17:09:34.344141: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
    2019-06-27 17:09:34.344147: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
    2019-06-27 17:09:34.344153: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
    2019-06-27 17:09:34.344160: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
    2019-06-27 17:09:34.344188: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.344698: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.345203: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
    2019-06-27 17:09:34.345217: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
    2019-06-27 17:09:34.345897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-06-27 17:09:34.345905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
    2019-06-27 17:09:34.345910: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
    2019-06-27 17:09:34.346029: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.346619: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-06-27 17:09:34.347293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7784 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
    2019-06-27 17:09:35.539500: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
    2019-06-27 17:09:35.696347: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
    2019-06-27 17:09:36.203198: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    2019-06-27 17:09:36.209845: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    2019-06-27 17:09:36.209897: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node my_model/conv2d/Conv2D]] [[my_model/dense_1/BiasAdd/_6]]
    2019-06-27 17:09:36.209959: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node my_model/conv2d/Conv2D]]
    Traceback (most recent call last):
      File "advanced.py", line 84, in <module>
        train_step(images, labels)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 428, in __call__
        return self._stateless_fn(*args, **kwds)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1335, in __call__
        return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 589, in _filtered_call
        (t for t in nest.flatten((args, kwargs), expand_composites=True)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 671, in _call_flat
        outputs = self._inference_function.call(ctx, args)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 445, in call
        ctx=ctx)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 67, in quick_execute
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
      (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node my_model/conv2d/Conv2D (defined at advanced.py:36)]]
      (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node my_model/conv2d/Conv2D (defined at advanced.py:36)]] [[my_model/dense_1/BiasAdd/_6]]
    0 successful operations. 0 derived errors ignored. [Op:__inference_train_step_848]
    Errors may have originated from an input operation.
    Input Source operations connected to node my_model/conv2d/Conv2D: images (defined at advanced.py:84)
    Input Source operations connected to node my_model/conv2d/Conv2D: images (defined at advanced.py:84)
    Function call stack: train_step, train_step

Describe the expected behavior

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
API links for r1.13 direct to r1.12 instead
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing):
The links under "API r1.13" are pointing to r1.12. The links should point to r1.13.
tensorflow/tensorflow
keras.utils.vis_utils.plot_model raises TypeError: 'InputLayer' object is not iterable
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.3
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0b1
- Python version: 3.6
- CUDA/cuDNN version:
- GPU model and memory: GTX 940MX, 430.26

Describe the current behavior
Running keras.utils.vis_utils.plot_model on a tensorflow.python.keras.engine.training.Model object raises TypeError: 'InputLayer' object is not iterable. This problem seems very similar to #24622, where the same issue was reported for Sequential objects.

Describe the expected behavior
Saves a PNG image of the model.

Code to reproduce the issue

    import tensorflow as tf
    from keras.utils.vis_utils import plot_model

    input_2d = tf.keras.Input(shape=(None, None, 3))  # unknown width and length, 3 channels (RGB)
    network_2d = tf.keras.layers.Conv2D(
        filters=128,    # dimensionality of output space
        kernel_size=5,  # shape of 2D convolution window (5x5)
    )(input_2d)

    input_3d = tf.keras.Input(shape=(None, None, None, 1))  # unknown width, length and depth, 1 gray channel
    network_3d = tf.keras.layers.Conv3D(
        filters=128,    # dimensionality of output space
        kernel_size=5,  # shape of 2D convolution window (5x5)
    )(input_3d)

    network_2d = tf.expand_dims(network_2d, axis=2)
    network_combined = network_2d [?] network_3d  # combining operator missing in the source text
    model_combined = tf.keras.Model(inputs=[input_2d, input_3d], outputs=network_combined)
    plot_model(model_combined, to_file='model_combined.png')

Other info / logs

    Traceback (most recent call last):
      File "repo/venv/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input>", line 19, in <module>
        plot_model(model_combined, to_file='model_combined.png')
      File "repo/venv/lib/python3.6/site-packages/keras/utils/vis_utils.py", line 132, in plot_model
        dot = model_to_dot(model, show_shapes, show_layer_names, rankdir)
      File "repo/venv/lib/python3.6/site-packages/keras/utils/vis_utils.py", line 109, in model_to_dot
        for inbound_layer in node.inbound_layers:
    TypeError: 'InputLayer' object is not iterable
tensorflow/tensorflow
Gradients return NaN values when using triplet loss with similar batches
Bug
Hi, I am using TF 1.12.0. My issue is related to this one. It seems that it was closed judging that the benefit was marginal with respect to the effort. I stumbled on this problem as I was working on my use case; I am not sure how common this is, but I think it deserves a second look.

I am working on sequence representation learning, and when generating my triplet-loss batches I sometimes end up generating positives that are similar to my anchor. This causes the whole gradient to become NaN.

[Screenshot from 2019-06-27 13:03:17]

The suggested workaround of replacing NaN values with zeros post-computation isn't sufficient in this case, because in reality not the whole batch gradient is null.

PS: I am generating the sequences randomly; however, with a relatively small vocabulary size and horizon, generating similar sequences does happen. For those having the same issue: in my use case, the workaround I opted for was to either condition my sampling or add a small random noise. However, I think implementing the discussed zeroing solution would be really helpful for future work on sequence representation learning. Thanks!
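The blow-up described above can be reproduced without TensorFlow: a triplet loss typically differentiates through a Euclidean distance, and the gradient of sqrt(x) diverges as x approaches 0, so when anchor and positive coincide the squared distance is exactly 0 and backprop through the square root yields NaN. A minimal NumPy sketch (my own illustration; the epsilon stabilization shown is a common trick, not the reporter's code):

```python
import numpy as np

anchor = np.array([0.3, -1.2, 0.7])
positive = anchor.copy()        # positive identical to the anchor

diff = anchor - positive
sq_dist = np.sum(diff ** 2)     # exactly 0.0

# The gradient of sqrt(sq_dist) w.r.t. `diff` is diff / sqrt(sq_dist);
# at sq_dist == 0 this is 0/0, which autodiff evaluates to NaN.
with np.errstate(divide="ignore", invalid="ignore"):
    grad = diff / np.sqrt(sq_dist)
print(grad)                     # [nan nan nan]

# Common stabilization: add a small epsilon inside the sqrt so the
# gradient at zero distance is finite (here, exactly zero).
eps = 1e-12
grad_stable = diff / np.sqrt(sq_dist + eps)
print(grad_stable)              # [0. 0. 0.]
```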
tensorflow/tensorflow
Different results with the same initial guess
Bug
System information
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow version: v1.14.0-rc1-22-gaf24dc91b5 1.14.0
- Python version: Python 3.6.8

I get different results with the same initial guess. I do not know if this is a bug or how it should be, but let me explain what I saw here, since I did not get any answer on my Stack Overflow post. The following simple code is adapted from here:

    import os
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"
    import tensorflow as tf
    import numpy as np

    X = tf.placeholder("float")
    Y = tf.placeholder("float")
    w = tf.Variable([1.0, 2.0], name="w")
    y_model = tf.multiply(X, w[0]) + w[1]
    error = tf.square(Y - y_model)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(error)

    model = tf.global_variables_initializer()
    with tf.Session() as session:
        session.run(model)
        print("Initial guess:", session.run(w))
        np.random.seed(seed=100)
        for i in range(1000):
            x_value = np.random.rand()
            y_value = x_value * 2 + 6
            session.run(train_op, feed_dict={X: x_value, Y: y_value})
        w_value = session.run(w)
        print("Predicted model: {a:.3f}x + {b:.3f}".format(a=w_value[0], b=w_value[1]))

From the code I get: Predicted model: 2.221x + 5.882. However, when I replace w with:

    w_norm = tf.Variable([0.5, 1.0], name="w_norm")
    w = w_norm * 2.0

the result is: Predicted model: 2.004x + 5.998, even though it has the same initial guess (1, 2) and the same number of epochs. I wonder what makes this difference.
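A plausible explanation (my own analysis, not part of the report): with w = 2 * w_norm, the chain rule gives dL/dw_norm = 2 * dL/dw, and each SGD step then moves the effective w by 2 * lr * dL/dw_norm = 4 * lr * dL/dw. The reparameterized model therefore trains with an effectively 4x larger learning rate and gets closer to (2, 6) in the same 1000 steps. The same two runs can be sketched in plain NumPy (same data, seed, and learning rate as the report):

```python
import numpy as np

def train(lr, steps, reparam, seed=100):
    """Fit y = a*x + b to samples of y = 2x + 6 with per-sample SGD.

    If reparam, the underlying variable is w_norm with effective
    parameters w = 2 * w_norm (same initial effective w = [1, 2]).
    """
    rng = np.random.RandomState(seed)
    scale = 2.0 if reparam else 1.0
    w = np.array([1.0, 2.0]) / scale          # underlying variable
    for _ in range(steps):
        x = rng.rand()
        y = 2.0 * x + 6.0
        a, b = scale * w                      # effective parameters
        err = (a * x + b) - y
        # d(err^2)/d(effective params) = [2*err*x, 2*err]; the chain rule
        # multiplies by `scale` for the underlying variable.
        grad_w = scale * np.array([2 * err * x, 2 * err])
        w -= lr * grad_w
    return scale * w                          # effective (a, b)

plain = train(0.01, 1000, reparam=False)
scaled = train(0.01, 1000, reparam=True)
print(plain)   # ends farther from [2, 6]
print(scaled)  # ends closer to [2, 6]
```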
tensorflow/tensorflow
AutoGraph should handle for loops over range() in a manner that is compatible with XLA compilation
Bug
System information
- TensorFlow version (you are using): 1.14
- Are you willing to contribute it (Yes/No): no

Describe the feature and the current behavior/state.
Consider the following Python code:

    import tensorflow as tf
    autograph = tf.contrib.autograph
    xla = tf.contrib.compiler.xla
    tf.enable_eager_execution()

    @tf.function
    def bad_loop(x, count):
        for _ in range(count):
            x += 1
        return x

    @tf.function
    def good_loop(x, count):
        i = 0
        while i < count:
            x += 1
            i += 1
        return x

bad_loop is the intuitive way to write this loop; however, it fails to compile with XLA:

    xla.compile(bad_loop, (1.0, 3))

    InvalidArgumentError: Input 1 to node StatefulPartitionedCall/range with op Range must be a compile-time constant. XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator. [[{{node StatefulPartitionedCall/range}}]] This error might be occurring with the use of xla.compile. If it is not necessary that every Op be compiled with XLA, an alternative is to use auto_jit with OptimizerOptions.global_jit_level = ON_2, or the environment variable TF_XLA_FLAGS=tf_xla_auto_jit=2, which will attempt to use XLA to compile as much of the graph as the compiler is able to (cluster ops). [Op:__inference_xla_compile_wrapper_166]

In contrast, good_loop calculates the correct result:

    xla.compile(good_loop, (1.0, 3))

AutoGraph seems to always convert range into tf.range, even in for loops. This means that XLA can't compile the function; however, the equivalent loop written as a naive while loop works. Ideally, AutoGraph would detect such uses of range in for loops and convert them into the style of good_loop automatically, rather than requiring users to do this. This would let us write cleaner code.

Will this change the current api? How? No.

Who will benefit with this feature? Users who want to write normal Python code with AutoGraph.

Any other info/logs:
tensorflow/tensorflow
Can't cross-compile TFLite minimal for Raspberry Pi Zero (ARMv6) target
Bug
System information
- OS platform and distribution: Linux Debian 10
- TensorFlow installed from: source
- TensorFlow version: master branch (a6989c95e336eb9f73ac7dd2920bee76d7e7e49e)
- GCC/compiler version: gcc 8.3.0 (Debian 8.3.0-2)
- Python/Bazel:
- CUDA/GPU: N/A

Background
Hi, I've been trying to build a TensorFlow Lite C++ project for the Raspberry Pi Zero. For a simple base, I'm working from the minimal example provided. Building the TensorFlow Lite static library natively can take upwards of 5-6 hours, so for simplicity I've been trying to cross-compile; however, I haven't yet gotten minimal to compile. Although there's documentation on compiling for ARMv7 Pis, I haven't been able to find anything on an ARMv6 platform.

Steps I've taken (following the docs here):
1. git clone the TF repo.
2. Run download_dependencies.sh.
3. Modify build_rpi_lib.sh to target ARMv6, i.e.:

    CC_PREFIX=arm-linux-gnueabihf- make -j 3 -f tensorflow/lite/tools/make/Makefile TARGET=rpi TARGET_ARCH=armv6

First attempt: running that build script, it quickly errors out in a few different headers, saying "sorry, unimplemented: Thumb-1 hard-float VFP ABI". For example:

    In file included from /usr/arm-linux-gnueabihf/include/c++/8/bits/stl_algobase.h:62,
                     from /usr/arm-linux-gnueabihf/include/c++/8/memory:62,
                     from tensorflow/lite/arena_planner.h:18,
                     from tensorflow/lite/arena_planner.cc:15:
    /usr/arm-linux-gnueabihf/include/c++/8/ext/type_traits.h: In function 'bool __gnu_cxx::__is_null_pointer(std::nullptr_t)':
    /usr/arm-linux-gnueabihf/include/c++/8/ext/type_traits.h:162:35: sorry, unimplemented: Thumb-1 hard-float VFP ABI
       __is_null_pointer(std::nullptr_t)
    make: [tensorflow/lite/tools/make/Makefile:242: /home/cgeary1/tensorflow/tensorflow/lite/tools/make/gen/rpi_armv6/obj/tensorflow/lite/allocation.o] Error 1

I wondered if I could get away with using the traditional ARM instruction set instead of Thumb: add -marm to CFLAGS and CXXFLAGS in rpi_makefile.inc under the armv6 ifeq, then try again, i.e.:

    ifeq ($(TARGET_ARCH), armv6)
      CXXFLAGS += -march=armv6 -mfpu=vfp -funsafe-math-optimizations -ftree-vectorize -fPIC -marm
      CFLAGS += -march=armv6 -mfpu=vfp -funsafe-math-optimizations -ftree-vectorize -fPIC -marm
      LDFLAGS := -Wl,--no-export-dynamic -Wl,--exclude-libs,ALL -Wl,--gc-sections -Wl,--as-needed
    endif

This compiles for a little while and finishes libtensorflow-lite.a, but it errors out as soon as it moves on to minimal. The complaints look like "undefined reference to __atomic_load_8" and "undefined reference to flatbuffers::ClassicLocale::instance_". Relevant logs are attached; for all the details, see log.txt.

I'm not sure whether I'm doing something wrong, whether there's a bug in the make system, or whether there may be a more intractable problem with trying to cross-compile this codebase for the Pi Zero. I'm not really sure what else to try, so for now I'm just working on the Pi.

Edit: I'm not able to try this right now, but I've realized that the flatbuffers error is likely to be solved as in #29806. The __atomic_load_8 error is the one that I can't get past.
tensorflow/tensorflow
Variable-sized input doesn't work with dilated convolutional layers
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
- Python version: 3.6
- CUDA/cuDNN version: CPU
- GPU model and memory: CPU

Describe the current behavior
I have a variable-sized axis on the input to a dilated conv layer, which fails when inputting the second batch (data is padded batch-wise). I have tried additional padding, both for dilation rates 0 and 2, without luck. This allows some of the layers to work, but not all, so it might be a simple calculation; I need to figure out what additional padding to use, but I cannot figure out what it is.

Describe the expected behavior
When running without dilation, I'm able to handle the variable-sized input. Also, given that the conv layer with dilation can handle any input as long as it's the first, I would expect that no additional padding would be needed.

Code to reproduce the issue

    import tensorflow as tf

    bs = 5
    text_len_1 = 772
    text_len_2 = 741
    embed_size = 300
    in_channels = 1

    test_in_1 = tf.random.normal((bs, text_len_1, embed_size, in_channels))
    test_in_2 = tf.random.normal((bs, text_len_2, embed_size, in_channels))

    dilated_convs = [
        tf.keras.layers.Conv2D(
            filters=10,
            kernel_size=(2, embed_size),
            dilation_rate=(dilation, 1),
            padding="valid",
        )
        for dilation in range(2, 23)
    ]

    for conv in dilated_convs:
        res = conv(test_in_1)
    for conv in dilated_convs:
        # Fails here, regardless of whether test_in_1 or 2 is called first
        res = conv(test_in_2)

Other info / logs

    2019-06-25 13:50:33.329503: W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at spacetobatch_op.cc:219 : Invalid argument: padded_shape[0]=741 is not divisible by block_shape[0]=2
    Traceback (most recent call last):
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input>", line 1, in <module>
        runfile('/home/name/PyCharm2019.1/config/scratch/dilated_conv_test.py', wdir='/home/name/PyCharm2019.1/config/scratch')
      File "/snap/pycharm-professional/136/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
        pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
      File "/snap/pycharm-professional/136/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
        exec(compile(contents + "\n", file, 'exec'), glob, loc)
      File "/home/name/PyCharm2019.1/config/scratch/dilated_conv_test.py", line 22, in <module>
        res = conv(test_in_2)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
        outputs = self.call(inputs, *args, **kwargs)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py", line 196, in call
        outputs = self._convolution_op(inputs, self.kernel)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1078, in __call__
        return self.conv_op(inp, filter)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 634, in __call__
        return self.call(inp, filter)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 617, in _with_space_to_batch_call
        input=inp, block_shape=dilation_rate, padding=padding)
      File "/home/name/anaconda3/envs/3.6/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9246, in space_to_batch_nd
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[0]=741 is not divisible by block_shape[0]=2 [Op:SpaceToBatchND]
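The "not divisible by block_shape" error in the log shows that the dilated convolution is lowered through SpaceToBatchND, which requires the (padded) spatial length to be a multiple of the dilation rate. A small pure-Python sketch (my own illustration, not from the report) of how much extra padding would make a given length acceptable for each dilation rate used in the repro code:

```python
def extra_padding(length: int, dilation: int) -> int:
    """Smallest pad so that (length + pad) is divisible by `dilation`."""
    return (-length) % dilation

# Length 741 from the report; dilation rates 2..22 as in the repro code.
length = 741
pads = {d: extra_padding(length, d) for d in range(2, 23)}
print(pads[2])   # 1, since 742 is divisible by 2

# Sanity check: each padded length is divisible by its dilation rate.
# (A single length that works for every rate at once would need to be a
# multiple of lcm(2..22); per-rate padding is what matters internally.)
assert all((length + p) % d == 0 for d, p in pads.items())
```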
tensorflow/tensorflow
TF 2.0: putting a tensor into some NumPy functions continuously increases memory usage
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): pip package tensorflow==2.0.0-beta1
- TensorFlow version (use command below): v2.0.0-beta0-17-g8e423e3 2.0.0-beta1
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0.0 / 7.3.1
- GPU model and memory: TITAN Xp, 11 GB

Describe the current behavior
Memory leaks when we put a tensor into some NumPy functions (e.g. np.array, np.zeros_like). The attached code below continuously increases memory usage.

Describe the expected behavior
No memory usage explosion.

Code to reproduce the issue

    import tensorflow as tf
    import numpy as np
    import time

    x = tf.random.normal((1024, 1024))
    for i in range(int(1e7)):
        y = np.array(x)
        time.sleep(0.01)
tensorflow/tensorflow
TensorFlow webpage API tab link error
Bug
Description of issue (what needs changing): the TensorFlow API links for 1.13 lead to 1.12 pages. Kindly fix the hyperlinks. Clear description: the changes in the API list can cause confusion. Request visuals, if applicable: image.
tensorflow/tensorflow
Memory leak with tf.Tensor.__getitem__ in tf.data.Dataset
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS and Linux
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): pipenv install --pre tensorflow (2.0.0-beta1)
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.6.8

Describe the current behavior: as described in tf.slice, I tried to use tf.Tensor.__getitem__ to perform the slice in a more pythonic way, but ran into a memory leak:

```python
def map_to_xy(csv_dataset, params):
    window_size = params[WINDOW_SIZE_KEY]
    window_shift = params[WINDOW_SHIFT_KEY]
    n_workers = tf.data.experimental.AUTOTUNE

    # memory leak in this function
    def split_xy(window):
        # for x, select all but the last value and flatten
        range_x = tf.shape(window)[1] - 1  # causes memory leak
        x = tf.reshape(window[:, 0:range_x], [-1])
        # select a single y value (causes memory leak)
        y = window[0, -1]
        return x, y

    # turn the csv dataset rows into (x cols) tensor rows
    row_dataset = csv_dataset.map(lambda *x: tf.reshape(x, [len(x)]))
    windowed_dataset = row_dataset.window(
        window_size, shift=window_shift,
        drop_remainder=True).flat_map(lambda x: x.batch(window_size))
    xy_dataset = windowed_dataset.map(split_xy, num_parallel_calls=n_workers)
    return xy_dataset
```

I have not tried to reproduce this behavior with tf.slice; I will update this issue if I do.

Describe the expected behavior: no memory leak.

Code to reproduce the issue: see above.

Other info / logs: initial related comment: issuecomment-505585437
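The window-then-split transformation described in this report can be illustrated in plain Python, independent of tf.data: each window of rows is flattened into features x, with the final value kept as the target y. A sketch (function names `windows` and `split_xy` are hypothetical):

```python
def windows(rows, size, shift):
    """Yield consecutive windows of `size` rows, moving by `shift`,
    dropping the remainder (like drop_remainder=True)."""
    for start in range(0, len(rows) - size + 1, shift):
        yield rows[start:start + size]

def split_xy(window):
    """Flatten every value except the last one into x; the last value is y."""
    flat = [v for row in window for v in row]
    return flat[:-1], flat[-1]

rows = [[1, 2], [3, 4], [5, 6], [7, 8]]
pairs = [split_xy(w) for w in windows(rows, size=2, shift=1)]
# pairs[0] == ([1, 2, 3], 4)
```

The tf.data version does the same per-window work with tensor ops, which is where the leak was observed.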
tensorflow/tensorflow
Documentation pages' CSS is messed up
Bug
Bug description.
- URL(s) with the issue / description of issue (what needs changing): documentation pages are unreadable on Safari for Mac, please see the screenshot; on subsequent page loads it somehow loads just fine.
- Clear description: see above
- Correct links: n/a
- Parameters defined: n/a
- Returns defined: n/a
- Raises listed and defined: n/a
- Usage example: see attached screenshot
- Request visuals, if applicable: yes, see attached screenshot
- Submitting a pull request?: no
tensorflow/tensorflow
AutoGraph "Failed to parse source code" error when using a lambda in a for loop
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): version 2.0.0-dev20190625, git version v1.12.1-4885-g71241a6afd
- Python version: 3.6.8
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: I get an AutoGraph error when running the following code (see the full stacktrace below):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10).window(5, shift=1, drop_remainder=True)
for window in ds.flat_map(lambda window: window.batch(5)):
    print(window.numpy())
```

The error is: `ValueError: Failed to parse source code of <lambda> at 0x11194c488`.

Describe the expected behavior: everything works fine when I define the dataset on the previous line, like this:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10).window(5, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda window: window.batch(5))
for window in ds:
    print(window.numpy())
```

Code to reproduce the issue: see above.

Other info / logs — full stack trace with AUTOGRAPH_VERBOSITY=10:

2019-06-25 22:24:13.172683: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-25 22:24:13.197405: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa5e8a657c0 executing computations on platform Host. Devices:
2019-06-25 22:24:13.197445: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Converted call: <lambda> at 0x134b81488; args: (<_VariantDataset shapes: (), types: tf.int64>,); kwargs: None
Not whitelisted: <lambda> at 0x134b81488: default rule
Not whitelisted: <lambda> at 0x134b81488: default rule
Entity <lambda> at 0x134b81488 is not cached for key (<code object <lambda> at 0x13b5f7ed0, file ..., line 4>, subkey frozenset())
Converting <lambda> at 0x134b81488
WARNING: Logging before flag
parsing go to stderr e0625 22 24 13 215670 140735810999168 ag log py 133 error convert at 0x134b81488 traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 78 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 118 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl api py line 635 in to graph return conversion convert entity program ctx file user ageron miniconda3 envs tf2 lib 
python3 6 site package tensorflow core python autograph impl conversion py line 322 in convert free nonglobal var name file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 240 in convert with cache entity program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 441 in convert entity to ast node name entity info convert func to ast o program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 601 in convert func to ast node source parser parse entity f future feature future feature file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 123 in parse entity source to n nbut that do not work format source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 66 in raise parse failure format entity source comment valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work error error convert at 0x134b81488 traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 78 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package 
tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 118 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl api py line 635 in to graph return conversion convert entity program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 322 in convert free nonglobal var name file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 240 in convert with cache entity program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py 
line 441 in convert entity to ast node name entity info convert func to ast o program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 601 in convert func to ast node source parser parse entity f future feature future feature file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 123 in parse entity source to n nbut that do not work format source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 66 in raise parse failure format entity source comment valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work error transforming entity at 0x134b81488 traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 78 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map 
lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 118 in parse entity return parse str source preamble len len future feature source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 140 in parse str module node gast parse src file user ageron miniconda3 envs tf2 lib python3 6 site package gast gast py line 240 in parse return ast to gast ast parse args kwargs file user ageron miniconda3 envs tf2 lib python3 6 ast py line 35 in parse return compile source filename mode pycf only ast file line 5 for window in ds flat map lambda window window batch 5 syntaxerror unexpected eof while parse during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl api py line 635 in to graph return conversion convert entity program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 322 in convert free nonglobal var name file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 240 in convert with cache entity program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 441 in convert entity to ast node name entity info convert func to ast o program ctx file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl conversion py line 601 in convert func to ast node source parser parse entity f future feature future feature file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python 
autograph pyct parser py line 123 in parse entity source to n nbut that do not work format source file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph pyct parser py line 66 in raise parse failure format entity source comment valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work during handling of the above exception another exception occur traceback most recent call last file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl api py line 528 in convert call experimental optional feature option optional feature file user ageron miniconda3 envs tf2 lib python3 6 site package tensorflow core python autograph impl api py line 639 in to graph entity e class name str e tensorflow python autograph impl api conversionerror convert at 0x134b81488 valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work w0625 
22 24 13 223130 140735810999168 ag log py 146 entity at 0x134b81488 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause convert at 0x134b81488 valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work warn entity at 0x134b81488 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause convert at 0x134b81488 valueerror fail to parse source code of at 0x134b81488 which python report as from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 if this be a lambda function the error may be avoid by create the lambda in a standalone statement try to strip down the source to from future import absolute import from future import division from future import print function from future import unicode literal for window in ds flat map lambda window window batch 5 but that do not work 2019 06 25 22 24 13 243343 w tensorflow compiler jit mark for compilation pass cc 1541 one time warn not use xla cpu for cluster because envvar tf xla flag tf xla cpu global jit be not set if you want xla cpu either set that envvar or use 
experimental jit scope to enable xla cpu to confirm that xla be active pass vmodule xla compilation cache 1 as a proper command line flag not via tf xla flag or set the envvar xla flag xla hlo profile 0 1 2 3 4 1 2 3 4 5 2 3 4 5 6 3 4 5 6 7 4 5 6 7 8 5 6 7 8 9
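The "Failed to parse source code" error in the log above can be reproduced without TensorFlow: AutoGraph recovers the source line that defines the lambda, and when the lambda sits in a for-statement header, that line alone is not valid Python, so parsing fails ("unexpected EOF"). A minimal sketch:

```python
import ast

# the source line AutoGraph recovers for the lambda in the failing example:
# a `for` header with no body is an incomplete statement
line = "for window in ds.flat_map(lambda window: window.batch(5)):"
try:
    ast.parse(line)
    parse_failed = False
except SyntaxError:
    parse_failed = True
assert parse_failed

# defining the lambda in a standalone statement yields a parseable line,
# which is exactly the workaround shown in the report
standalone = "ds = ds.flat_map(lambda window: window.batch(5))"
ast.parse(standalone)  # no exception
```

This matches the hint in the error message: "creating the lambda in a standalone statement" avoids the problem.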
tensorflow/tensorflow
fit method fails with unnamed Input layer whereas training works with a custom function
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6
- TensorFlow installed from (source or binary): from pip
- TensorFlow version (use command below): v1.12.1-3259-gf59745a381 2.0.0-beta0
- Python version: v3.6.7:6ec5cf24b7, Oct 20 2018, 03:02:14

Describe the current behavior: I use a model written with the functional API which uses a DenseFeatures layer. The call method of the model and a custom training function work well even if the Input layer is unnamed, but the fit method fails. This forces me to name the Input layer with the same name as the key of the dictionary given as input. I am not sure if this is intended or not, but as this works with a custom training function, I tend to think it may be a bug in the fit method.

Code to reproduce the issue — here is a minimal working example reproducing the issue:

```python
import tensorflow as tf
print("Using TensorFlow version {version} (git version {git_version})".format(
    version=tf.version.VERSION, git_version=tf.version.GIT_VERSION))

import numpy as np
from tensorflow.data import Dataset
from tensorflow.feature_column import numeric_column
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import DenseFeatures, Dense
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.optimizers import Adam

fc_name = "teddy bear"

def make_model():
    fc1 = numeric_column(fc_name)
    dict_input = {fc_name: Input([1])}
    out = DenseFeatures([fc1])(dict_input)
    out = Dense(5, name="dense_features1")(out)
    out = Dense(1, name="dense_ouput")(out)
    return Model(inputs=dict_input, outputs=out)

arrays = np.ones((1000, 1), dtype=np.float)
arrays_target = np.ones((1000, 1), dtype=np.float)
batch_size = 4
dict_arrays = {fc_name: arrays}
input_dataset = Dataset.from_tensor_slices(dict_arrays).batch(batch_size)
target_dataset = Dataset.from_tensor_slices(arrays_target).batch(batch_size)
complete_dataset = Dataset.zip((input_dataset, target_dataset)).shuffle(10000)

model = make_model()
model.summary()
for x, y in complete_dataset.take(1):
    print(model(x))

loss_fn = MeanSquaredError()
optimizer = Adam(learning_rate=1e-3)

@tf.function
def train_step(inputs, target):
    with tf.GradientTape() as tape:
        output = model(inputs)
        loss = loss_fn(target, output)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

epochs = 5
loss = 0
for epoch in range(epochs):
    for x, y in complete_dataset:
        loss = train_step(x, y)
    print("Epoch n°{:d} loss: {:5.4f}".format(epoch + 1, loss))

for x, y in complete_dataset.take(1):
    print(model(x))

model = make_model()
model.summary()
model.compile(optimizer, loss_fn)
model.fit(complete_dataset, epochs=epochs)

for x, y in complete_dataset.take(1):
    print(model(x))
```

The output of which is:

```
Using TensorFlow version 2.0.0-beta0 (git version v1.12.1-3259-gf59745a381)
tf.Tensor(
[[0.18571138]
 [0.18571138]
 [0.18571138]
 [0.18571138]], shape=(4, 1), dtype=float32)
Epoch n°1 loss: 0.0035
Epoch n°2 loss: 0.0000
Epoch n°3 loss: 0.0000
Epoch n°4 loss: 0.0000
Epoch n°5 loss: 0.0000
tf.Tensor(
[[0.99999917]
 [0.99999917]
 [0.99999917]
 [0.99999917]], shape=(4, 1), dtype=float32)
Model: "model_1"
Layer (type)                 Output Shape              Param #
input_2 (InputLayer)         [(None, 1)]               0
dense_features_1 (DenseFeatu (None, 1)                 0
dense_features1 (Dense)      (None, 5)                 10
dense_ouput (Dense)          (None, 1)                 6
Total params: 16
Trainable params: 16
Non-trainable params: 0
Epoch 1/5
```

```
KeyError                                  Traceback (most recent call last)
~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    446             data = [data[x].values if data[x].__class__.__name__ == 'DataFrame' else data[x]
    447                     for x in names]
    448
~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in <listcomp>(.0)
    446             data = [data[x].values if data[x].__class__.__name__ == 'DataFrame' else data[x]
    447                     for x in names]
    448
KeyError: 'input_2'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     59 model.summary()
     60 model.compile(optimizer, loss_fn)
---> 61 model.fit(complete_dataset, epochs=epochs)
     62 for x, y in complete_dataset.take(1):
     63     print(model(x))

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    641         max_queue_size=max_queue_size,
    642         workers=workers,
--> 643         use_multiprocessing=use_multiprocessing)
    644
    645   def evaluate(self,

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    692         shuffle=shuffle,
    693         initial_epoch=initial_epoch,
--> 694         steps_name='steps_per_epoch')
    695
    696   def evaluate(self,

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
    262
    263       is_deferred = not model._is_compiled
--> 264       batch_outs = batch_function(*batch_data)
    265       if not isinstance(batch_outs, list):
    266         batch_outs = [batch_outs]

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
    894     x, y, sample_weights = self._standardize_user_data(
    895         x, y, sample_weight=sample_weight, class_weight=class_weight,
--> 896         extract_tensors_from_dataset=True)
    897
    898     # If `self._distribution_strategy` is True, then we are in a replica context

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2426         feed_input_shapes,
   2427         check_batch_axis=False,  # Don't enforce the batch size.
--> 2428         exception_prefix='input')
   2429
   2430     if y is not None:

~/Documents/tf2beta/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    449       except KeyError as e:
--> 450         raise ValueError('No data provided for "' + str(e.args[0]) + '". Need data '
    451                          'for each key in: ' + str(names))
    452     elif isinstance(data, (list, tuple)):
    453       if isinstance(data[0], (list, tuple)):

ValueError: No data provided for "input_2". Need data for each key in: ['input_2']
```

We see in the output that the call method of the model works and that the training with the custom function also works. We also see in the error log that the error is due to the Input layer named "input_2" not finding the key "input_2" in the dictionaries produced by complete_dataset.

Now, in the code above, if we replace the line

```python
dict_input = {fc_name: Input([1])}
```

by

```python
dict_input = {fc_name: Input([1], name="teddy bear")}
```

everything works, and the output looks like this:

```
Using TensorFlow version 2.0.0-beta0 (git version v1.12.1-3259-gf59745a381)
tf.Tensor(
[[0.7686093]
 [0.7686093]
 [0.7686093]
 [0.7686093]], shape=(4, 1), dtype=float32)
Epoch n°1 loss: 0.0000
Epoch n°2 loss: 0.0000
Epoch n°3 loss: 0.0000
Epoch n°4 loss: 0.0000
Epoch n°5 loss: 0.0000
tf.Tensor(
[[0.99999994]
 [0.99999994]
 [0.99999994]
 [0.99999994]], shape=(4, 1), dtype=float32)
Model: "model_1"
Layer (type)                 Output Shape              Param #
teddy bear (InputLayer)      [(None, 1)]               0
dense_features_1 (DenseFeatu (None, 1)                 0
dense_features1 (Dense)      (None, 5)                 10
dense_ouput (Dense)          (None, 1)                 6
Total params: 16
Trainable params: 16
Non-trainable params: 0
Epoch 1/5
250/250 [==============================] - 1s 2ms/step - loss: 0.0325
Epoch 2/5
250/250 [==============================] - 0s 1ms/step - loss: 3.5115e-14
Epoch 3/5
250/250 [==============================] - 0s 2ms/step - loss: 1.4211e-14
Epoch 4/5
250/250 [==============================] - 0s 1ms/step - loss: 1.4211e-14
Epoch 5/5
250/250 [==============================] - 0s 1ms/step - loss: 1.4211e-14
tf.Tensor(
[[1.0000002]
 [1.0000002]
 [1.0000002]
 [1.0000002]], shape=(4, 1), dtype=float32)
```
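The failure mode in this report reduces to dictionary-key matching: fit looks up each Input layer's name in the feed dict, so the auto-generated name "input_2" misses a dict keyed by "teddy bear". A plain-Python sketch (`standardize_inputs` is a hypothetical stand-in for Keras' standardize_input_data):

```python
def standardize_inputs(data, input_names):
    """Mimic Keras' matching of feed-dict keys to Input layer names."""
    try:
        return [data[name] for name in input_names]
    except KeyError as e:
        raise ValueError(
            "No data provided for %r. Need data for each key in: %s"
            % (e.args[0], input_names))

data = {"teddy bear": [[1.0]]}

# auto-generated Input name: the lookup fails exactly as in the report
try:
    standardize_inputs(data, ["input_2"])
    failed = False
except ValueError:
    failed = True
assert failed

# naming the Input layer after the feature-column key makes it work
assert standardize_inputs(data, ["teddy bear"]) == [[[1.0]]]
```

The custom training loop never performs this name-based lookup (the dict goes straight to the DenseFeatures layer), which is why it works with an unnamed Input.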
tensorflow/tensorflow
Iterator::Model::Prefetch::Batch::Shuffle::ParallelInterleaveV2 returned OutOfRange without setting end_of_sequence
Bug
See issuecomment-505539468 for more details.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS and Linux
- TensorFlow installed from (source or binary): pipenv install --pre tensorflow (2.0.0-beta1)
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.6.8

Describe the current behavior: I'm running into this issue when caching before interleave:

```python
filenames_dataset = filenames_dataset.cache(SOME_PATH)
return filenames_dataset.interleave(
    lambda f: map_file_to_xy_dataset(f, predict, task_fns, params),
    cycle_length=params[CYCLE_LENGTH_KEY],
    block_length=params[BLOCK_LENGTH_KEY],
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
```

Iterator::Model::Prefetch::Batch::Shuffle::ParallelInterleaveV2 returned OutOfRange without setting end_of_sequence. This indicates that an error may have occurred. Original message: attempting to call get_next after iteration should have finished. [Op:IteratorGetNextSync]

Describe the expected behavior: able to cache the filenames dataset.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): see above.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
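For context, the round-robin consumption pattern of interleave that the snippet above relies on can be sketched with plain iterators; this is a simplified, sequential model of tf.data.Dataset.interleave, not its actual implementation:

```python
def interleave(sources, cycle_length, block_length):
    """Round-robin over up to cycle_length source iterators, taking
    block_length items from each before moving on; exhausted sources
    are replaced by pending ones (a sketch of Dataset.interleave)."""
    active = [iter(s) for s in sources[:cycle_length]]
    pending = list(sources[cycle_length:])
    while active:
        for it in list(active):
            for _ in range(block_length):
                try:
                    yield next(it)
                except StopIteration:
                    active.remove(it)
                    if pending:
                        active.append(iter(pending.pop(0)))
                    break

assert list(interleave([[1, 2], [3, 4]], cycle_length=2, block_length=1)) == [1, 3, 2, 4]
```

In the report, caching the filenames upstream of this round-robin is what triggers the spurious OutOfRange.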
tensorflow/tensorflow
Broken image link in recurrent_quickdraw.md
Bug
Description of issue: the image link for quickdraw_model.png is broken in documentation recurrent_quickdraw.md. The current source URL for the image points to a non-existent image location. URL: recurrent_quickdraw.md
tensorflow/tensorflow
Getting validation steps from dict breaks Keras fit (TF 2.0.0-beta1)
Bug
System information
- TensorFlow installed from: pip
- TensorFlow version: 2.0.0-beta1
- Python version: 3.6

Describe the current behavior: when calling fit on a Keras model in 2.0.0-beta1, a KeyError: 0 is thrown when trying to call fit with validation data given as a dict. This worked in the alpha. E.g.:

```python
my_model.fit(x={'input_0': train_data_type_0, 'input_1': train_data_type_1},
             y=train_labels,
             validation_data=({'input_0': val_data_type_0,
                               'input_1': val_data_type_1},
                              val_labels))
```

The issue seems to have been introduced here, as the line

```python
val_samples_or_steps = val_inputs and val_inputs[0].shape[0] or None
```

assumes an array will be found, and fails for dicts.

Describe the expected behavior: val_inputs can be either a list or dictionary of arrays, or a Dataset instance.
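The reported KeyError: 0 follows directly from indexing a dict with an integer: the quoted line assumes val_inputs is a list of arrays and reads val_inputs[0]. A minimal sketch, with plain lists standing in for NumPy arrays and len() for .shape[0]:

```python
val_inputs_list = [[0.0, 1.0], [2.0, 3.0]]      # alpha-style: list of arrays
val_inputs_dict = {"input_0": [0.0, 1.0],       # beta-style: dict of arrays
                   "input_1": [2.0, 3.0]}

def first_input_length(val_inputs):
    # mirrors `val_inputs and val_inputs[0].shape[0]` from the report
    return val_inputs and len(val_inputs[0])

assert first_input_length(val_inputs_list) == 2

try:
    first_input_length(val_inputs_dict)          # dict has no key 0
    error = None
except KeyError as e:
    error = e.args[0]
assert error == 0
```

Guarding the lookup with an `isinstance(val_inputs, dict)` branch (or taking `next(iter(val_inputs.values()))`) would avoid the crash.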
tensorflow/tensorflow
nccl.reduce_sum throws an error
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, script for testing nccl.reduce_sum attached below
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1
- Python version: Python 3.6
- CUDA/cuDNN version: 10.1 / 7
- GPU model and memory: V100, 16 GB

Describe the current behavior: nccl.reduce_sum throws an error about a feed device or fetch device not being found in the graph; nccl all_reduce works.

Describe the expected behavior: expected to get a reduced tensor.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.python.ops import nccl_ops as nccl

with tf.device('/gpu:0'):
    a = tf.constant([1, 2, 3, 4, 5], dtype=tf.float32)
with tf.device('/gpu:1'):
    b = tf.constant([6, 7, 8, 9, 10], dtype=tf.float32)
with tf.device('/gpu:0'):
    c = nccl.reduce_sum([a, b])

sess = tf.Session()
print(sess.run(c))
```

Other info / logs:

ubuntu@ip-172-31-26-69:~/tensorpack/examples/ResNet$ python test_nccl.py
2019-06-24 20:41:57.077685: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-24 20:41:57.803050: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-24 20:41:57.828402: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-24 20:41:57.841325: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-24 20:41:57.852582: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-24 20:41:57.853980: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4a9c130 executing computations on platform CUDA. Devices:
2019-06-24 20:41:57.854028: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla V100-SXM2-16GB, Compute Capability 7.0
2019-06-24 20:41:57.854053: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (1): Tesla V100-SXM2-16GB, Compute Capability 7.0
2019-06-24 20:41:57.854077: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (2): Tesla V100-SXM2-16GB, Compute Capability 7.0
2019-06-24 20:41:57.854101: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (3): Tesla V100-SXM2-16GB, Compute Capability 7.0
2019-06-24 20:41:57.860344: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300060000 Hz
2019-06-24 20:41:57.865200: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4bbd9f0 executing computations on platform Host. Devices:
2019-06-24 20:41:57.865270: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
2019-06-24 20:41:57.866085: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Tesla V100-SXM2-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:00:1b.0 totalMemory: 15.75GiB freeMemory: 6.38GiB
2019-06-24 20:41:57.866225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties: name: Tesla V100-SXM2-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:00:1c.0 totalMemory: 15.75GiB freeMemory: 3.71GiB
2019-06-24 20:41:57.866329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 2 with properties: name: Tesla V100-SXM2-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:00:1d.0 totalMemory: 15.75GiB freeMemory: 3.71GiB
2019-06-24 20:41:57.866442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 3 with properties: name: Tesla V100-SXM2-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.53 pciBusID: 0000:00:1e.0 totalMemory: 15.75GiB freeMemory: 6.68GiB
2019-06-24 20:41:57.866498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1, 2, 3
2019-06-24 20:41:57.872295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-24 20:41:57.872370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 1 2 3
2019-06-24 20:41:57.872395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N Y Y Y
2019-06-24 20:41:57.872413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1:   Y N Y Y
2019-06-24 20:41:57.872425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 2:   Y Y N Y
2019-06-24 20:41:57.872440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 3:   Y Y Y N
2019-06-24 20:41:57.872826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6203 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1b.0, compute capability: 7.0)
2019-06-24 20:41:57.873247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 3496 MB memory) -> physical GPU (device: 1, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1c.0, compute capability: 7.0)
2019-06-24 20:41:57.873562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 3498 MB memory) -> physical GPU (device: 2, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1d.0, compute capability: 7.0)
2019-06-24 20:41:57.873856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 6496 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1e.0, compute capability: 7.0)

Traceback (most recent call last):
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tensor NcclReduce:0, specified in either feed_devices or fetch_devices was not found in the Graph

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_nccl.py", line 17, in <module>
    print(sess.run(c))
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/ubuntu/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tensor NcclReduce:0, specified in either feed_devices or fetch_devices was not found in the Graph
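For context, what nccl reduce_sum is expected to compute is an elementwise sum of equally shaped per-device tensors delivered onto one device. A minimal pure-Python sketch of that semantics (illustrative names only, not the NCCL or TensorFlow API):

```python
def reduce_sum(tensors):
    """Elementwise sum of equally shaped 1-D tensors, the way NCCL's
    reduce-sum combines per-device buffers onto a single device."""
    if not tensors:
        raise ValueError("need at least one input tensor")
    length = len(tensors[0])
    if any(len(t) != length for t in tensors):
        raise ValueError("all inputs must have the same shape")
    return [sum(vals) for vals in zip(*tensors)]

a = [1, 2, 3, 4, 5]    # tensor on gpu:0 in the repro above
b = [6, 7, 8, 9, 10]   # tensor on gpu:1 in the repro above
print(reduce_sum([a, b]))  # [7, 9, 11, 13, 15]
```

This is the value the reproduction script expects sess.run(c) to return; the bug is that the fetch fails before any reduction happens.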
tensorflow/tensorflow
tf 2.0 api docs: tf.image.extract_glimpse
Bug
url(s) with the issue:
description of issue (what needs changing):
- raises listed and defined: the raised exceptions are not listed and defined
- usage example: no usage example is given
- submit a pull request?
tensorflow/tensorflow
AttributeError: module 'tensorflow._api.v2.train' has no attribute 'FtrlOptimizer'
Bug
thank you for submitting a TensorFlow documentation issue. per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. the TensorFlow docs are open source! to get involved, read the documentation contributor guide.

url(s) with the issue: please provide a link to the documentation entry.

description of issue (what needs changing): FtrlOptimizer is not accessible from tf.train.

# estimator using the FTRL optimizer with regularization
estimator = LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.train.FtrlOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001))

should be:

    optimizer=tf.keras.optimizers.Ftrl(...)

AttributeError: module 'tensorflow._api.v2.train' has no attribute 'FtrlOptimizer'

- clear description: n/a
- correct links: n/a
- parameters defined: n/a
- returns defined: n/a
- raises listed and defined: n/a
- usage example: n/a
- request visuals, if applicable: n/a
- submit a pull request?: n/a
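The suggested replacement, tf.keras.optimizers.Ftrl, implements the FTRL-proximal update rule (McMahan et al.). A minimal single-weight pure-Python sketch of that update, with hypothetical names and the hyperparameters from the snippet above (learning rate 0.1, l1 strength 0.001); this is an illustration of the algorithm, not the TensorFlow implementation:

```python
import math

def ftrl_step(w, z, n, g, alpha=0.1, beta=1.0, l1=0.001, l2=0.0):
    """One FTRL-proximal update for a single weight.
    w: current weight; z, n: per-coordinate accumulators; g: gradient."""
    sigma = (math.sqrt(n + g * g) - math.sqrt(n)) / alpha
    z += g - sigma * w
    n += g * g
    if abs(z) <= l1:
        w = 0.0  # l1 regularization zeroes out small weights
    else:
        w = -(z - math.copysign(l1, z)) / ((beta + math.sqrt(n)) / alpha + l2)
    return w, z, n

w, z, n = 0.0, 0.0, 0.0
w, z, n = ftrl_step(w, z, n, g=1.0)
print(w)  # a small negative step, driven against the positive gradient
```

The l1 threshold is what gives FTRL its sparsity-inducing behavior, which is why it is commonly paired with large categorical feature columns as in the LinearClassifier example.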
tensorflow/tensorflow
multiple duplicate TFLite Android example projects
Bug
description
there are two examples for TFLite image classification on Android in the TensorFlow repos which achieve the same thing. they both have almost identical code, but slight differences in the UI and in the code (example a, example b). I would like to know which one is supposed to be used, and whether they can be merged so that this confusion is avoided. an example of a difference: in example a we do not create an NNAPI delegate (L179), but in example b we do create an NNAPI delegate (L186).
tensorflow/tensorflow
default values not listed for hyperparameters in tfp ExponentiatedQuadratic class
Bug
url(s) with the issue:
description of issue (what needs changing): list the default values for the hyperparameters amplitude and length_scale that are used when a kernel is created with default values, as suggested here.

example:

tfd = tfp.distributions
psd_kernels = tfp.positive_semidefinite_kernels
# define a kernel with default parameters
kernel = psd_kernels.ExponentiatedQuadratic()

the parameters section defines no default values for amplitude and length_scale, apart from the keyword None.
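For reference, the documented formula for this kernel is k(x, y) = amplitude**2 * exp(-||x - y||**2 / (2 * length_scale**2)). A pure-Python sketch (illustrative, not the TFP implementation) showing how a default of None can behave like a factor of 1, which is presumably what the docs should state:

```python
import math

def exponentiated_quadratic(x, y, amplitude=None, length_scale=None):
    """k(x, y) = amplitude**2 * exp(-||x - y||**2 / (2 * length_scale**2)).
    A None hyperparameter is treated as the identity (factor of 1),
    mirroring the default of None described in the issue."""
    a = 1.0 if amplitude is None else amplitude
    l = 1.0 if length_scale is None else length_scale
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return a * a * math.exp(-sq_dist / (2.0 * l * l))

print(exponentiated_quadratic([0.0, 0.0], [0.0, 0.0]))  # 1.0 at zero distance
```

With both hyperparameters left at None, the kernel evaluates to 1 at zero distance and decays smoothly with squared Euclidean distance.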
tensorflow/tensorflow
classifying handwritten digits with tf.learn: machine learning Jupyter notebook on Docker
Bug
InvalidArgumentError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1333     try:
-> 1334       return fn(*args)
   1335     except errors.OpError as e:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1318       return self._call_tf_sessionrun(
-> 1319           options, feed_dict, fetch_list, target_list, run_metadata)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1406         self._session, options, feed_dict, fetch_list, target_list,
-> 1407         run_metadata)

InvalidArgumentError: tensor_name = linear/weights; shape in shape_and_slice spec [1,10] does not match the shape stored in checkpoint: [784,10] [[node save/RestoreV2]]

During handling of the above exception, another exception occurred:
(stack, condensed: saver.py restore, line 1275-1277 -> session.py run, line 929 -> _run, line 1152 -> _do_run, line 1328 -> _do_call, line 1348: raise type(e)(node_def, op, message))

Caused by op 'save/RestoreV2', defined at (stack, condensed):
  runpy -> ipykernel_launcher -> traitlets launch_instance -> ipykernel kernelapp/kernelbase -> tornado/asyncio event loop -> IPython run_cell / run_ast_nodes / run_code ->
  <ipython-input>, line 2: print('predicted %d, label: %d' % (classifier.predict(test_data[0]), test_labels[0])) ->
  deprecation wrappers -> contrib/learn estimators/linear.py predict (line 539) -> predict_classes (line 574) ->
  estimator.py predict (line 670) -> _infer_model (line 974) -> MonitoredSession (monitored_session.py) -> Scaffold.finalize (line 217) ->
  saver.py _get_saver_or_default (line 604) -> Saver.__init__ (line 832) -> build (line 844) -> _build_internal (line 881, build_save=True, build_restore=True) ->
  _AddShardedRestoreOps (line 385) -> _AddRestoreOps (line 332) -> bulk_restore (line 580) -> gen_io_ops.restore_v2 (line 1572) -> op_def_library apply_op_helper -> ops.py create_op

During handling of the above exception, another exception occurred:
(user-facing stack, condensed: <ipython-input>, line 2: classifier.predict(test_data[0]) -> deprecation wrappers -> contrib/learn linear.py predict (line 539) / predict_classes (line 574) ->
 estimator.py predict (line 670) -> _infer_model (line 974) -> monitored_session.py MonitoredSession.__init__ / _create_session / Scaffold ->
 session_manager.py prepare_session (line 279-281) -> _restore_checkpoint (line 195: saver.restore(sess, checkpoint_filename_with_path)) ->
 saver.py restore, line 1312: raise _wrap_restore_error_with_msg(err, 'a mismatch between the current graph and the graph'))

InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
tensor_name = linear/weights; shape in shape_and_slice spec [1,10] does not match the shape stored in checkpoint: [784,10] [[node save/RestoreV2]]
(the same "Caused by op 'save/RestoreV2', defined at" stack as above is printed again and is omitted here)
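The error itself pinpoints a shape mismatch: the current graph declares linear/weights as [1, 10] while the checkpoint stores [784, 10] (the flattened 28x28 MNIST input). A hypothetical pre-restore sanity check, sketched in pure Python (illustrative names; not a TensorFlow API):

```python
def check_restore_shapes(graph_shapes, checkpoint_shapes):
    """Return mismatches between variable shapes declared in the graph
    and shapes stored in a checkpoint (both given as name -> shape dicts)."""
    mismatches = []
    for name, shape in graph_shapes.items():
        stored = checkpoint_shapes.get(name)
        if stored is not None and stored != shape:
            mismatches.append((name, shape, stored))
    return mismatches

graph = {"linear/weights": (1, 10)}   # shape in the current graph
ckpt = {"linear/weights": (784, 10)}  # shape stored in the checkpoint
print(check_restore_shapes(graph, ckpt))  # [('linear/weights', (1, 10), (784, 10))]
```

Running such a comparison before calling restore would surface the offending variable directly instead of surfacing it through a nested InvalidArgumentError.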
tensorflow/tensorflow
method mentioned in documentation is missing: tf.keras.preprocessing.text.tokenizer_from_json
Bug
system information
- TensorFlow version: v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1

describe the current behavior
an error is raised when attempting to call tf.keras.preprocessing.text.tokenizer_from_json:

AttributeError Traceback (most recent call last)
----> 1 tf.keras.preprocessing.text.tokenizer_from_json
AttributeError: module 'tensorflow.python.keras.api._v2.keras.preprocessing.text' has no attribute 'tokenizer_from_json'

describe the expected behavior
I expected this method to be callable, per the documentation of tf.keras.preprocessing.text.Tokenizer.to_json below, which says we can use keras.preprocessing.text.tokenizer_from_json(json_string) to load a tokenizer. otherwise, there's no obvious way to use the output of Tokenizer.to_json to restore a tokenizer.

Signature: tf.keras.preprocessing.text.Tokenizer.to_json(self, **kwargs)
Docstring: returns a JSON string containing the tokenizer configuration. to load a tokenizer from a JSON string, use keras.preprocessing.text.tokenizer_from_json(json_string). arguments: **kwargs: additional keyword arguments to be passed to json.dumps(). returns: a JSON string containing the tokenizer configuration.

code to reproduce the issue

# python3
json_string = tf.keras.preprocessing.text.Tokenizer().to_json()
tf.keras.preprocessing.text.tokenizer_from_json(json_string)
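Conceptually, to_json / tokenizer_from_json form a serialization round trip. A minimal stdlib sketch of such a round trip for a toy word-index tokenizer (illustrative only; this is not the Keras implementation or its config format):

```python
import json

class ToyTokenizer:
    """Tiny stand-in for a tokenizer that maps words to integer indices."""

    def __init__(self, word_index=None):
        self.word_index = word_index or {}

    def fit(self, texts):
        # assign the next free index to each previously unseen word
        for text in texts:
            for word in text.split():
                self.word_index.setdefault(word, len(self.word_index) + 1)

    def to_json(self):
        return json.dumps({"word_index": self.word_index})

    @staticmethod
    def from_json(json_string):
        config = json.loads(json_string)
        return ToyTokenizer(word_index=config["word_index"])

tok = ToyTokenizer()
tok.fit(["the cat sat"])
restored = ToyTokenizer.from_json(tok.to_json())
print(restored.word_index)  # {'the': 1, 'cat': 2, 'sat': 3}
```

The missing tokenizer_from_json is exactly the from_json half of this round trip, which is why to_json alone is not enough to persist and reload a tokenizer.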
tensorflow/tensorflow
multi-GPU training: tf.compat.v1.scatter_sub operation throws an exception
Bug
system information
- have I written custom code (as opposed to using a stock example script provided in TensorFlow): extended from the stock examples; a working example is attached to this bug report
- os platform and distribution: Linux Ubuntu 16.04
- mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-beta1
- python version: Python 3.5.2
- bazel version (if compiling from source): n/a (0.23.2)
- gcc/compiler version (if compiling from source): n/a (5.4.0 20160609)
- CUDA/cuDNN version: 10.0.130 / 7.4.1
- GPU model and memory: 4x Titan Xp, 12 GB, standalone (no SLI connection)

describe the current behavior
tf.compat.v1.scatter_sub does not support multi-GPU training. AttributeError: 'Tensor' object has no attribute '_lazy_read' is thrown in the tf.compat.v1.scatter_sub operation. there is no equivalent scatter operation in the tf.compat.v2 module.

describe the expected behavior
tf.compat.v1.scatter_sub (or an equivalent scatter operation) should support multi-GPU training.

code to reproduce the issue
1. the script attached below is used to train a model using multiple GPUs. the code is modified from the stock examples for a quick experiment.

other info / logs

WARNING: Logging before flag parsing goes to stderr.
W0622 23:56:34.418763 140543953778432 cross_device_ops.py:1164] Some requested devices in tf.distribute.Strategy are not visible to TensorFlow: /job:localhost/replica:0/task:0/device:GPU:1, /job:localhost/replica:0/task:0/device:GPU:0
Number of devices: 2
Traceback (most recent call last):
  File "center_loss_mnist.py", line 326, in <module>
    run(0.0001)
  File "center_loss_mnist.py", line 259, in run
    final_output, side_output = my_model(main_input, aux_input)
  File "center_loss_mnist.py", line 119, in my_model
    side = CenterLossLayer(alpha=0.5, name='centerlosslayer')([x, labels])
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 662, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/autograph/impl/api.py", line 169, in wrapper
    raise e.ag_error_metadata.to_exception(type(e))
AttributeError: in converted code:

    center_loss_mnist.py:179 call
        new_centers = tf.compat.v1.scatter_sub(self.centers, x[1], delta_centers)
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py:535 scatter_sub
        return ref._lazy_read(gen_resource_variable_ops.resource_scatter_sub(  # pylint: disable=protected-access
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/distribute/values.py:381 __getattr__
        return getattr(self.get(), name)

    AttributeError: 'Tensor' object has no attribute '_lazy_read'

run the script using the following commands:

export CUDA_VISIBLE_DEVICES=0
python3 center_loss_mnist.py

save the following script as center_loss_mnist.py:

from datetime import datetime
from tensorflow.keras import backend as K
from tensorflow.keras import initializers
from tensorflow.keras import losses
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Layer
from tensorflow.keras.layers import MaxPool2D
from tensorflow.keras.layers import PReLU
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
import math
import numpy as np
import os
import shutil
import tensorflow as tf

# parameters
batch_size = 64
epochs = 10
weight_decay = 0.0005


def init_gpus(soft_device_placement=True, log_device_placement=False,
              create_virtual_devices=False, memory_limit=4096):
    tf.config.set_soft_device_placement(soft_device_placement)
    tf.debugging.set_log_device_placement(log_device_placement)
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # if there is only one GPU, create two logical (virtual) devices for
        # developing on a machine with only one GPU installed
        try:
            # create 2 virtual GPUs on each physical GPU with the given memory limit
            if create_virtual_devices and len(gpus) == 1:
                tf.config.experimental.set_virtual_device_configuration(
                    gpus[0],
                    [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096),
                     tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])
            else:
                # currently, memory growth needs to be the same across GPUs
                for gpu in gpus:
                    tf.config.experimental.set_memory_growth(gpu, True)
        except RuntimeError as e:
            # memory growth must be set before GPUs have been initialized
            print(e)
        # print out physical and logical GPUs
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), 'physical GPUs,', len(logical_gpus), 'logical GPUs')
    else:
        print('no visible GPUs were detected')


def prelu(x, name='default'):
    if name == 'default':
        return PReLU(alpha_initializer=initializers.Constant(value=0.25))(x)
    else:
        return PReLU(alpha_initializer=initializers.Constant(value=0.25), name=name)(x)


def center_loss(y_true, y_pred):
    # center loss based on the paper "A Discriminative Feature Learning
    # Approach for Deep Face Recognition": Lc = 1/2 * sum(||xi - ci||^2)
    return 0.5 * K.sum(y_pred, axis=0)


# model
def my_model(x, labels):
    x = BatchNormalization()(x)
    x = Conv2D(filters=32, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = Conv2D(filters=32, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(x)
    x = Conv2D(filters=64, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = Conv2D(filters=64, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(x)
    x = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 1), padding='same',
               kernel_regularizer=l2(weight_decay))(x)
    x = BatchNormalization()(x)
    x = prelu(x)
    x = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    x = Dense(2, kernel_regularizer=l2(weight_decay))(x)
    x = prelu(x, name='side_out')
    main = Dense(10, activation='softmax', name='main_out',
                 kernel_regularizer=l2(weight_decay))(x)
    side = CenterLossLayer(alpha=0.5, name='centerlosslayer')([x, labels])
    return main, side


# function for decaying the learning rate; you can define any decay function you need
def lr_schedule(epoch):
    if epoch < 5:
        learning_rate = 1e-3
    elif epoch < 10:
        learning_rate = 1e-4
    else:
        learning_rate = 1e-5
    tf.summary.scalar('learning_rate', data=learning_rate, step=epoch)
    return learning_rate


class CenterLossLayer(Layer):

    def __init__(self, alpha=0.5, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha

    def build(self, input_shape):
        self.centers = self.add_weight(name='centers',
                                       shape=(10, 2),
                                       initializer='uniform',
                                       trainable=False)
        super().build(input_shape)

    def call(self, x):
        # this is where the layer's logic lives.
        # arguments: inputs - input tensor, or list/tuple of input tensors;
        #            **kwargs - additional keyword arguments.
        # returns: a tensor or list/tuple of tensors.
        features = x[0]
        labels = K.reshape(x[1], (-1,))
        # get the centers as specified in the labels; the centers might repeat
        # depending on the labels index
        centers_batch = K.gather(self.centers, labels)
        unique_labels, unique_idx, unique_counts = tf.unique_with_counts(labels)
        appear_times = K.gather(unique_counts, unique_idx)
        appear_times = K.reshape(appear_times, (-1, 1))
        # center loss: alpha defaults to 0.5
        delta_centers = centers_batch - features
        delta_centers = delta_centers / tf.cast(1 + appear_times, tf.float32)
        delta_centers = self.alpha * delta_centers
        # scatter_sub does not support multi-GPU training;
        # there is no equivalent operation
        new_centers = tf.compat.v1.scatter_sub(self.centers, x[1], delta_centers)
        self.add_update((self.centers, new_centers), x)
        self.result = K.sum(K.square(features - centers_batch), axis=1, keepdims=True)
        return self.result

    def compute_output_shape(self, input_shape):
        return K.int_shape(self.result)


def empty_dir(folder):
    # empty a folder recursively
    for file in os.listdir(folder):
        file_path = os.path.join(folder, file)
        if os.path.isfile(file_path):
print remove file format file path os remove file path else empty dir file path print remove folder format file path os rmdir file path def build empty dir folder root dir os getcwd base dir os path join root dir folder os makedirs base dir exist ok true empty dir os path join root dir folder return base dir unset cuda visible device export cuda visible device 0 python3 center loss mnist py run model def run lambda centerloss init gpu log device placement false create virtual device true strategy tf distribute mirroredstrategy cross device op tf distribute reductiontoonedevice strategy tf distribute mirroredstrategy cross device op tf distribute hierarchicalcopyallreduce strategy tf distribute mirroredstrategy cross device op tf distribute ncclallreduce strategy tf distribute mirroredstrategy print number of device format strategy num replicas in sync get datum x train y train x test y test mnist load datum normalize to 0 1 x train x test x train 255 x test 255 x train np float32 x train x test np float32 x test y train np int32 y train y test np int32 y test reshape to matrix x train x train reshape 1 28 28 1 x test x test reshape 1 28 28 1 compile main input input 28 28 1 aux input input dtype int32 training use multi gpu tf compat v1 scatter sub do not support multi gpu train with strategy scope the follow exception with be throw attributeerror tensor object have no attribute lazy read comment out the follow line for the training to run successfully with strategy scope final output side output my model main input aux input model model input main input aux input output final output side output model compile optimizer adam loss loss sparse categorical crossentropy center loss metric accuracy loss weight 1 lambda centerloss model summary create the log directory log dir tmp log datetime strftime datetime now y m d h m s build empty dir log dir initialize the file writer for log summary create the file writer to save event for tensorboard summary log dir log dir 
train file writer tf summary create file writer summary log dir file writer set as default tb callback tensorboard log dir log dir lr callback tf keras callbacks learningratescheduler lr schedule fit dummy1 np zero x train shape 0 1 dtype int dummy2 np zero x test shape 0 1 dtype int print model input 0 shape model input 0 shape print model get layer side out output shape model get layer side out output shape model fit x train y train input main input aux input y train dummy1 output final output side output batch size batch size epoch epoch verbose 1 validation datum x test y test y test dummy2 callback tb callback lr callback validation reduce model model input model input 0 output model get layer main out output reduce model compile loss sparse categorical crossentropy optimizer tf keras optimizer adam metric accuracy evaluate eval loss eval acc reduce model evaluate x x test y y test batch size batch size print neval loss eval accuracy format eval loss eval acc run training and val set reduce model model input model input 0 output model get layer side out output feat reduce model predict x train do k clear session return if name main run 0 0001
tensorflow/tensorflow
ValueError: Arguments and signature arguments do not match, when using Dataset API, Keras functional API and checkpoint callback (TF 2.0)
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 macos 10 13 6 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device na tensorflow instal from source or binary binary tensorflow version use command below git version v1 12 1 4759 g9856697d8b tf version 2 0 0 dev20190622 python version 3 6 4 bazel version if compile from source na gcc compiler version if compile from source na cuda cudnn version na gpu model and memory na describe the current behavior call the fit function on a keras model when specify a dataset and a modelcheckpoint callback will crash after the first epoch with this error valueerror argument and signature argument do not match the error happen only when specify both the training dataset and validation dataset the error happen because of the checkpoint callback describe the expect behavior the model should not crash continue training and successfully save the checkpoint code to reproduce the issue python import tensorflow as tf model architecture input tf keras input shape 784 name flatten image x tf keras layer dense 64 activation relu input x tf keras layer dense 64 activation relu x output tf keras layer dense 10 activation softmax name prediction x model tf keras model inputs input output output name error showcase loading mnist datum x train y train x test y test tf keras datasets mnist load datum x train x train reshape 60000 784 astype float32 255 x test x test reshape 10000 784 astype float32 255 create the training dataset train dataset tf datum dataset from tensor slice x train y train shuffle batch and prefetch train dataset train dataset shuffle buffer size 1024 batch 64 prefetch 1024 create the validation dataset val dataset tf datum dataset from tensor slice x test y test shuffle batch and prefetch val dataset val dataset batch 64 prefetch 1024 compile the model model compile loss sparse 
categorical crossentropy optimizer rmsprop metric accuracy define checkpoint callback checkpoint callback tf keras callback modelcheckpoint checkpoint monitor val accuracy verbose 1 save well only true mode max fit the model history model fit train dataset validation datum val dataset epoch 5 callback checkpoint callback print nhistory dict history history other info log 2019 06 22 18 30 17 760500 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 06 22 18 30 17 777440 I tensorflow compiler xla service service cc 168 xla service 0x7f8d2bb0de30 execute computation on platform host device 2019 06 22 18 30 17 777463 I tensorflow compiler xla service service cc 175 streamexecutor device 0 epoch 1 5 warning log before flag parsing go to stderr w0622 18 30 18 333158 140736272085888 deprecation py 323 from temp v36 lib python3 6 site package tensorflow core python ops math grad py 1251 add dispatch support wrapper from tensorflow python op array op be deprecate and will be remove in a future version instruction for update use tf where in 2 0 which have the same broadcast rule as np where w0622 18 30 18 374418 140736272085888 deprecation py 323 from temp v36 lib python3 6 site package tensorflow core python keras optimizer v2 optimizer v2 py 460 baseresourcevariable constraint from tensorflow python op resource variable op be deprecate and will be remove in a future version instruction for update apply a constraint manually follow the optimizer update step 2019 06 22 18 30 18 583083 w tensorflow compiler jit mark for compilation pass cc 1541 one time warn not use xla cpu for cluster because envvar tf xla flag tf xla cpu global jit be not set if you want xla cpu either set that envvar or use experimental jit scope to enable xla cpu to confirm that xla be active pass vmodule xla compilation cache 1 as a proper command line flag not via tf xla flag or set the envvar xla flag xla hlo 
profile 911 938 eta 0s loss 0 3021 accuracy 0 9133 epoch 00001 val accuracy improve from inf to 0 94660 save model to checkpoint 2019 06 22 18 30 20 643797 w tensorflow python util util cc 268 set be not currently consider sequence but this may change in the future so consider avoid use they w0622 18 30 20 653870 140736272085888 deprecation py 506 from temp v36 lib python3 6 site package tensorflow core python op resource variable op py 1775 call baseresourcevariable init from tensorflow python op resource variable op with constraint be deprecate and will be remove in a future version instruction for update if use keras pass constraint argument to layer 938 938 2s 3ms step loss 0 2972 accuracy 0 9147 val loss 0 1739 val accuracy 0 9466 epoch 2 5 traceback most recent call last file error showcase ckpt py line 46 in callback checkpoint callback file temp v36 lib python3 6 site package tensorflow core python keras engine training py line 669 in fit use multiprocesse use multiprocesse file temp v36 lib python3 6 site package tensorflow core python keras engine training generator py line 695 in fit step name step per epoch file temp v36 lib python3 6 site package tensorflow core python keras engine training generator py line 265 in model iteration batch out batch function batch data file temp v36 lib python3 6 site package tensorflow core python keras engine training py line 939 in train on batch output self train function in pylint disable not callable file temp v36 lib python3 6 site package tensorflow core python keras backend py line 3483 in call output self graph fn convert input file temp v36 lib python3 6 site package tensorflow core python eager function py line 583 in call return self call flat args self capture input file temp v36 lib python3 6 site package tensorflow core python eager function py line 685 in call flat output self inference function call ctx args file temp v36 lib python3 6 site package tensorflow core python eager function py line 436 in 
call len args len list self signature input arg valueerror argument and signature argument do not match 19 20
tensorflow/tensorflow
Installation documentation and Mac wheels
Bug
URL(s) with the issue:

Description of issue (what needs changing):
Clear description: there are no binary wheels for TensorFlow 1.14 for Mac (they should be here), and the documentation page still points to 1.13. Is 1.14 still considered unstable? Was the wheel generation and documentation update simply forgotten, or is it normal that it takes a couple of days after the release?
tensorflow/tensorflow
The API tf.data.experimental.CsvDataset performs very slowly in a test
Bug
system info cuda 10 1 python 3 6 6 tensorflow 1 13 1 gpu quadro p5000 test datum csv dataset with 430 column all in float the first 429 column as feature and the last column as label there be totally 1928 class and 57909 instance problem I test the speed of the csv reader api tf datum experimental csvdataset and the common method tf placeholder here be the code with tf datum experimental csvdataset def parse datum x n class x tf convert to tensor x return x 1 tf one hot index tf cast x 1 tf int32 depth n class if name main dataset train tf datum experimental csvdataset home david dataset timit test csv tf float32 430 header false field delim dataset train dataset train map lambda x parse datum x 1928 dataset train dataset train batch 128 dataset train dataset train prefetch 1 iterator dataset train make initializable iterator x in y iterator get next x tf layer dense unit 1024 activation tf nn relu x in x tf layer dense unit 1024 activation tf nn relu x x tf layer dense unit 1024 activation tf nn relu x x tf layer dense unit 1024 activation tf nn relu x logit tf layer dense unit 1928 activation none x loss tf loss softmax cross entropy y logit optimizer tf train adamoptimizer optimizer minimize loss sess tf session sess run tf global variable initializer sess run iterator initializer running loss 0 0 time last time time epoch 0 I 0 while true try run loss sess run loss feed dict x datum y label if I 1 5 0 print r epoch 2d batch 5d time 5f loss 3f epoch 1 I 1 time time time last run loss I end time last time time I 1 except tf error outofrangeerror pass with tf placeholder if name main x in tf placeholder shape 128 429 dtype tf float32 y in tf placeholder shape 128 dtype tf int32 y tf one hot y in depth 1928 x tf layer dense unit 1024 activation tf nn relu x in x tf layer dense unit 1024 activation tf nn relu x x tf layer dense unit 1024 activation tf nn relu x x tf layer dense unit 1024 activation tf nn relu x logit tf layer dense unit 1928 activation none x loss 
tf loss softmax cross entropy y logit optimizer tf train adamoptimizer optimizer minimize loss sess tf session sess run tf global variable initializer w pd read csv home david dataset timit test csv header none delim whitespace true value for epoch in range 23 running loss 0 0 time last time time I 0 index np random permutation w shape 0 w w index while true if I 128 128 w shape 0 break running loss sess run loss feed dict x in w I 128 I 128 128 1 y in w I 128 I 128 128 1 if I 1 5 0 print r epoch 2d batch 5d time 5f loss 3f epoch 1 I 1 time time time last run loss I end time last time time I 1 result time for train five batch tf placeholder method 0 013263s tf datum experimental csvdataset method 1 382647s problem api tf datum experimental csvdataset be so slow in the above test I guess it be partly because that tf datum experimental csvdataset do io operation before each batch to extract datum from csv file be this ture or there be other reason however it be too slow compare to tf placeholder be there any chance for improvement how can I set the tf datum experimental csvdataset api to load all csv datum at the very beginning or can I say that tf datum experimental csvdataset be implement only for the csv dataset that be too big to store in the memory because the time cost seem like intolerable
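The cost the reporter asks about (re-reading and re-parsing the CSV file on every epoch) is exactly what caching amortizes; in the tf.data API, `Dataset.cache()` keeps the elements in memory after the first pass. A minimal pure-Python sketch of that idea (not the tf.data implementation, just the pattern) shows why the first epoch pays the I/O cost and later epochs do not:

```python
class CachedSource:
    """Cache the elements produced by a slow read function after the first
    full pass, so later epochs iterate over memory instead of re-reading."""

    def __init__(self, read_fn):
        self.read_fn = read_fn   # stands in for slow CSV parsing / disk I/O
        self._cache = None

    def __iter__(self):
        if self._cache is None:
            # first epoch: pay the read cost once and remember the rows
            self._cache = list(self.read_fn())
        return iter(self._cache)


reads = {"count": 0}

def slow_read():
    # hypothetical slow source; the counter tracks how often it runs
    reads["count"] += 1
    yield from range(3)

src = CachedSource(slow_read)
epoch1 = list(src)
epoch2 = list(src)
# slow_read ran only once even though we iterated twice
```

With tf.data the equivalent would be inserting `.cache()` into the input pipeline before `.map()`/`.batch()`, so only the first epoch touches the file.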
tensorflow/tensorflow
Dimension check in BinaryCrossentropy
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior
Suppose we build a model for a binary classification problem and we want to use the BinaryCrossentropy loss provided in tf.keras.losses. Here is an example:

```python
import numpy as np
import tensorflow as tf

y_true = np.array([[1], [1], [1], [0], [1], [0], [0], [1], [1], [0]]).astype(np.float32)
y_pred = np.array([[0], [0], [0], [1], [1], [0], [0], [1], [0], [1]]).astype(np.float32)
print(y_true.shape, y_pred.shape)
# prints: (10, 1) (10, 1)

# loss function
bce = tf.keras.losses.BinaryCrossentropy()

# case 1
print(bce(y_true, y_pred).numpy())
# prints: 9.23662 (correct)

# case 2
print(bce(np.squeeze(y_true), y_pred).numpy())
# prints: 8.006299
```

Describe the expected behavior
When the dimensions of y_true and y_pred differ, as in case 2, the loss function should raise an error for the dimension mismatch. Otherwise the model fails silently, and no one would be able to debug it unless they were aware of this behavior.

Code to reproduce the issue
Check above.
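The silent wrong answer in case 2 most likely comes from broadcasting: a (10, 1) array combined elementwise with a (10,) array broadcasts to a 10×10 matrix, so every prediction gets compared against every label. A small NumPy check (illustrative only, not Keras internals) makes the shape blow-up visible:

```python
import numpy as np

y_true = np.zeros((10, 1), dtype=np.float32)     # column vector, as in case 1
y_squeezed = np.zeros((10,), dtype=np.float32)   # squeezed labels, as in case 2

# (10, 1) combined with (10,) silently broadcasts to (10, 10)
bad_shape = np.broadcast(y_true, y_squeezed).shape

# matching shapes, as in case 1, stay elementwise
good_shape = np.broadcast(y_true, y_true).shape
```

This is why an explicit dimension check (or at least a warning) in the loss would help: the broadcast result is a valid computation, just not the one the user intended.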
tensorflow/tensorflow
InvalidArgumentError (see above for traceback): Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
Bug
Python 3.6, Ubuntu 16.04.6 LTS, TensorFlow 1.13.1, FEAT_WIDTH = 2048, GPU_NUM = 8.

```python
def create_cnn_model(feat_width):
    data_format = 'channels_last'
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(1, 2), strides=(1, 2),
                                     data_format=data_format, padding='same',
                                     activation='relu', input_shape=(2, feat_width, 1)))
    model.add(tf.keras.layers.Conv2D(filters=10, kernel_size=(2, 2),
                                     data_format=data_format, padding='same',
                                     activation='relu'))
    model.add(tf.keras.layers.MaxPool2D(pool_size=(2, 1), data_format=data_format))
    model.add(tf.keras.layers.Flatten(data_format=data_format))
    model.add(tf.keras.layers.Dense(units=10, activation='relu'))
    model.add(tf.keras.layers.Dense(units=1, kernel_initializer='uniform',
                                    activation='sigmoid'))
    return model

def serving_input_fn():
    """Build the serving inputs."""
    inputs = {keras_model.input_names[0]:
              tf.placeholder(shape=[None, 2, FEAT_WIDTH, 1], dtype=tf.float32)}
    return tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)

strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=GPU_NUM)
estimator_config = tf.estimator.RunConfig(
    model_dir='model',
    tf_random_seed=0,
    save_summary_steps=256,
    save_checkpoints_steps=10000,
    train_distribute=strategy,
    keep_checkpoint_max=64,
    log_step_count_steps=256)

keras_model = create_cnn_model(FEAT_WIDTH)
keras_model = tf.keras.utils.multi_gpu_model(keras_model, gpus=GPU_NUM)
keras_model.compile(optimizer=optimizer, loss='binary_crossentropy',
                    metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(keras_model=keras_model,
                                                  config=estimator_config)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# use tf.contrib.predictor.from_saved_model(export_dir) to do inference
predictor = tf.contrib.predictor.from_saved_model(export_dir=export_dir)
feed_tensor = list(predictor.feed_tensors.keys())[0]
fetch_tensor = list(predictor.fetch_tensors.keys())[0]
feats = np.random.rand(2, FEAT_WIDTH)
feats = np.reshape(feats, (-1, 2, FEAT_WIDTH, 1))
y = predictor({feed_tensor: feats})
r = y[fetch_tensor]
```

This fails for multi-GPU with the error:

```
InvalidArgumentError (see above for traceback): Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[node sequential_1/flatten/Reshape]] [[node dense_2_1/concat]]
```

If I don't use multi-GPU, the inference code works.
tensorflow/tensorflow
Auto-conversion to Tensor in functions
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 1.14.0
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior
Calling certain functions with non-Tensor (but convertible) arguments fails, since the object is not converted to a Tensor; the function actually only expects Tensors. E.g. tf.math.real(1.) fails because 1. has no attribute dtype, which it would only have after the conversion to a Tensor.

Describe the expected behavior
My understanding was that functions that take Tensors, such as tf.math.real, can also take anything that can be converted to a Tensor, such as Python floats or objects with a registered conversion function; namely, tf.math.real(tf.convert_to_tensor(something)) and tf.math.real(something) are equivalent. I would expect tf.math.real(1.) to return the real part of the tensor tf.convert_to_tensor(1.). This causes a big problem with the (actually beautiful) registration of conversion functions; the code contains only a minimal example. Am I mistaken about the expected behavior? If so, it could be made clearer in the docs that only Tensors are taken vs. tensor-like objects; e.g. the docs of tf.math.round and tf.math.real leave no clue which takes Tensors and which takes tensor-like objects.

Code to reproduce the issue
Short version:

```python
real = tf.math.real(1.)  # fails
```

but of course it also fails for any custom-defined tensor-like object:

```python
import tensorflow as tf
from tensorflow.python.framework import ops

class MyTensor:
    def _dense_var_to_tensor(self, dtype, name, as_ref):
        return tf.constant(42, dtype=dtype)

def _dense_var_to_tensor(var, dtype=None, name=None, as_ref=False):
    return var._dense_var_to_tensor(dtype=dtype, name=name, as_ref=as_ref)

ops.register_tensor_conversion_function(MyTensor, _dense_var_to_tensor)

my_t1 = MyTensor()
t1 = tf.convert_to_tensor(my_t1)   # works
squared = tf.math.round(my_t1)     # works

real = tf.math.real(1.)       # fails
real = tf.math.real(my_t1)    # fails
```
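The behavior the reporter expects can be summed up as "convert first, then operate". A minimal pure-Python sketch of such a conversion registry (hypothetical names, not TensorFlow's actual dispatch code) illustrates the pattern that tf.math.round apparently follows and tf.math.real does not:

```python
_conversion_registry = []

def register_conversion(cls, fn):
    """Register a function that turns instances of cls into a 'tensor' (here: float)."""
    _conversion_registry.append((cls, fn))

def convert_to_tensor(value):
    # check registered conversion functions first, mimicking TF's registry
    for cls, fn in _conversion_registry:
        if isinstance(value, cls):
            return fn(value)
    return float(value)  # plain Python numbers convert directly

def real(x):
    x = convert_to_tensor(x)  # convert first, so any tensor-like input works
    return x                  # the real part of a real scalar is itself

class MyTensor:
    pass

register_conversion(MyTensor, lambda v: 42.0)

r1 = real(1)           # works for a plain Python number
r2 = real(MyTensor())  # works for a registered tensor-like object
```

With this ordering, a function never touches `.dtype` (or any other Tensor attribute) before conversion has happened, which is exactly the guarantee the reporter is asking for.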
tensorflow/tensorflow
Python package is missing ModuleSpec in tensorflow.__spec__ in TF 1.14.0
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux 4.9.125-linuxkit x86_64, with Ubuntu 18.04 (bionic)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from: preinstalled in docker image
- TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior
In TF 1.14.0, the module spec in tensorflow.__spec__ is None:

```python
>>> import tensorflow
>>> print(tensorflow.__spec__)
None
```

Describe the expected behavior
This is different from TF 1.13.1, where it works as expected:

```python
>>> import tensorflow
>>> print(tensorflow.__spec__)
ModuleSpec(name='tensorflow', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f038cfc3cf8>, origin='/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py', submodule_search_locations=['/usr/local/lib/python3.5/dist-packages/tensorflow'])
```

The missing spec causes some problems, e.g. pkgutil now fails when trying to find tensorflow. Note that the first call to find_loader is successful; it only fails after tensorflow is imported:

```
Python 3.6.8 (default, Jan 14 2019, 11:02:34)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pkgutil
>>> pkgutil.find_loader('tensorflow')
<_frozen_importlib_external.SourceFileLoader object at 0x7f62372c7160>
>>> import tensorflow
>>> pkgutil.find_loader('tensorflow')
Traceback (most recent call last):
  File "/usr/lib/python3.6/pkgutil.py", line 490, in find_loader
    spec = importlib.util.find_spec(fullname)
  File "/usr/lib/python3.6/importlib/util.py", line 102, in find_spec
    raise ValueError('{}.__spec__ is None'.format(name))
ValueError: tensorflow.__spec__ is None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/pkgutil.py", line 496, in find_loader
    raise ImportError(msg.format(fullname, type(ex), ex)) from ex
ImportError: Error while finding loader for 'tensorflow' (<class 'ValueError'>: tensorflow.__spec__ is None)
```

Code to reproduce the issue
See above.

Other info / logs
I've tested this using the official TF docker image tensorflow/tensorflow:1.14.0-py3, and also using the Python docker image python:3.6 with tensorflow installed with pip.
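The ImportError in the traceback is ordinary stdlib behavior once a module that is already in sys.modules has __spec__ set to None; it can be reproduced without TensorFlow at all, which confirms the problem is the missing spec rather than pkgutil:

```python
import importlib.util
import sys
import types

# simulate what tensorflow 1.14.0 does: a loaded module whose __spec__ is None
mod = types.ModuleType("fake_tf")
mod.__spec__ = None
sys.modules["fake_tf"] = mod

try:
    importlib.util.find_spec("fake_tf")
    outcome = "found"
except ValueError:
    # importlib refuses to look up a loaded module with a None __spec__
    outcome = "spec_is_none"
```

pkgutil.find_loader simply wraps this ValueError in the ImportError shown above, so any tool that goes through importlib's spec machinery hits the same wall.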
tensorflow/tensorflow
Error while converting MobileNet-SSD tflite_graph.pb file to TFLite format using tflite_convert
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary binary tensorflow version use command below 1 14 0 python version 3 6 5 bazel version if compile from source none gcc compiler version if compile from source cuda cudnn version none gpu model and memory none I be try to convert a pretraine mobilenetssd to tflite for deployment which be available in tensorflow detection model zoo code to reproduce the issue tflite convert graph def file ssd mobilenet v1 0 75 depth quantize 300x300 coco14 sync 2018 07 18 tflite graph pb output file ssd mobilenet v1 0 75 depth quantize 300x300 coco14 sync 2018 07 18 optimize graph lite input format tensorflow graphdef output format tflite input shape 1 224 224 3 input array input output array final result inference type float input datum type float other info log traceback most recent call last file home yash anaconda3 envs conversion bin tflite convert line 10 in sys exit main file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite python tflite convert py line 503 in main app run main run main argv sys argv 1 file home yash anaconda3 envs conversion lib python3 6 site package tensorflow python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file home yash anaconda3 envs conversion lib python3 6 site package absl app py line 300 in run run main main args file home yash anaconda3 envs conversion lib python3 6 site package absl app py line 251 in run main sys exit main argv file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite python tflite convert py line 499 in run main convert tf1 model tflite flag file home yash anaconda3 envs conversion lib python3 6 site package tensorflow 
lite python tflite convert py line 193 in convert tf1 model output datum converter convert file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite python lite py line 904 in convert converter kwargs file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite python convert py line 373 in toco convert graph def input datum serializetostre file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite python convert py line 172 in toco convert protos toco fail see console for info n s n s n stdout stderr tensorflow lite python convert convertererror toco fail see console for info 2019 06 21 03 51 18 946551 I tensorflow lite toco import tensorflow cc 1336 convert unsupported operation tflite detection postprocess 2019 06 21 03 51 18 970221 I tensorflow lite toco import tensorflow cc 1385 unable to determine output type for op tflite detection postprocess 2019 06 21 03 51 18 994542 f tensorflow lite toco tooling util cc 918 check fail getopwithoutput model output array specify output array final result be not produce by any op in this graph be it a typo this should not happen if you trigger this error please send a bug report with code to reporduce this error to the tensorflow lite team fatal python error abort current thread 0x00007f9444831740 most recent call first file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite toco python toco from protos py line 33 in execute file home yash anaconda3 envs conversion lib python3 6 site package absl app py line 251 in run main file home yash anaconda3 envs conversion lib python3 6 site package absl app py line 300 in run file home yash anaconda3 envs conversion lib python3 6 site package tensorflow python platform app py line 40 in run file home yash anaconda3 envs conversion lib python3 6 site package tensorflow lite toco python toco from protos py line 59 in main file home yash anaconda3 envs conversion bin toco from protos 
line 10 in abort core dump
tensorflow/tensorflow
A writing error in tf.contrib.data.map_and_batch
Bug
URL(s) with the issue:

Description of issue (what needs changing):
In the Args documentation of this API:

  num_parallel_calls: (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process in parallel. If not specified, batch_size * num_parallel_batches elements will be processed in parallel.

The last word should be "sequentially" rather than "in parallel".
tensorflow/tensorflow
TF 2.0 API docs: tf.keras.callbacks.LearningRateScheduler
Bug
TensorFlow version: 2.0.0-beta1
Doc link:

Description:
The definition should be expanded for clarity and a better user experience. Instead of one short sentence, maybe try:

"Learning rate scheduler, which can be used to adjust the learning rate over time. Any function can be defined, such as a decay function that states which decay rate to use after a certain number of epochs or batches; this custom function can then be passed as the schedule parameter to the LearningRateScheduler callback."

Alternatively, a description from the "Distributed training with Keras" tutorial (linked in the API docs) can also be used:

"Learning rate scheduler: Using this callback, you can schedule the learning rate to change after every epoch/batch."

Example: the word "Example" is missing before the example. "Raises:" and "Returns:" should be added. For improvement suggestions for the other tf.keras.callbacks classes, see #29958.
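For the docs expansion suggested above, a worked schedule function would help readers: the schedule is just a plain function of the epoch index that the callback calls before each epoch. A step-decay example (the decay points here are hypothetical, chosen only for illustration):

```python
def lr_schedule(epoch):
    """Step decay: drop the learning rate by 10x at epochs 5 and 10."""
    if epoch < 5:
        return 1e-3
    elif epoch < 10:
        return 1e-4
    return 1e-5

# this function would then be passed to the callback, e.g.:
# tf.keras.callbacks.LearningRateScheduler(lr_schedule)
lr_start = lr_schedule(0)
lr_mid = lr_schedule(7)
lr_late = lr_schedule(20)
```

Showing a concrete schedule like this alongside the one-line definition would address the "Example" gap the report points out.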
tensorflow/tensorflow
TF2: apparent memory leak when running dataset ops eagerly
Bug
System information
- Have I written custom code: yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OSX
- TensorFlow installed from (source or binary): 2.0.0-beta0 (binary)
- TensorFlow version (use command below): v1.12.1-3259-gf59745a381 2.0.0-beta0
- Python version: 3.6.8

Describe the current behavior
When using the function tf.autograph.to_graph I see a memory leak, which I don't see if I use the annotation @tf.function.

Describe the expected behavior
There should not be a memory leak.

Code to reproduce the issue

```python
import os
import psutil
import numpy as np
import tensorflow as tf

process = psutil.Process(os.getpid())

# @tf.function
def train_epoch(model, p_data):
    for real_inputs in p_data:
        model * real_inputs

train_epoch = tf.autograph.to_graph(train_epoch)

data = np.random.normal(0, 1, (10000, 2))
p_data = tf.data.Dataset.from_tensor_slices(data).batch(32)
model = tf.Variable(1.1, dtype=tf.float64)

for i in range(5000):
    train_epoch(model, p_data)
    if i % 50 == 0:
        print(process.memory_info().rss)
```
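One speculative reading of the difference between the two variants (this is an illustration of the general mechanism, not a diagnosis of TF internals): @tf.function memoizes its traced graph and reuses it across calls, whereas repeatedly building graph artifacts without such a cache accumulates them. A pure-Python sketch of memoized vs. repeated conversion shows why caching bounds the work:

```python
import functools

trace_count = {"n": 0}

def to_graph(fn):
    """Stand-in for an expensive per-call conversion (e.g. building a graph)."""
    trace_count["n"] += 1
    return fn

@functools.lru_cache(maxsize=None)
def to_graph_cached(fn):
    """Stand-in for a conversion that caches its result per function object."""
    return to_graph(fn)

def train_epoch():
    pass

# converting on every call pays the cost (and allocates) every time...
for _ in range(100):
    to_graph(train_epoch)
uncached = trace_count["n"]

# ...while the cached version converts once and reuses the result
trace_count["n"] = 0
for _ in range(100):
    to_graph_cached(train_epoch)
cached = trace_count["n"]
```

Whether this matches the actual leak the reporter measured would need profiling; it only illustrates why a trace cache keeps memory flat where uncached conversion does not.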
tensorflowtensorflow
Segmentation fault when saving checkpoints with a saveable dataset iterator
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 cento 7 6 1810 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary binary tensorflow version use command below 1 13 1 python version 3 6 8 bazel version if compile from source none gcc compiler version if compile from source none cuda cudnn version none gpu model and memory none tf version v1 13 1 0 g6612da8951 1 13 1 describe the current behavior segmentation fault in save an initializable dataset iterator when enter the tf train monitoredsession context manager describe the expect behavior the initializable iterator be save and restore properly behave the same with the one shot iterator code to reproduce the issue python illustrate saveable dataset iterator import tensorflow as tf dataset size 4 save step 2 train step 3 checkpoint dir tmp tf dataset saveable def test saveable test saveable graph tf graph with graph as default dataset tf datum dataset range dataset size repeat dataset iterator dataset make one shot iterator dataset iterator dataset make initializable iterator dataset init dataset iterator initializer datum dataset iterator get next saveable tf contrib datum make saveable from iterator dataset iterator tf add to collection tf graphkey saveable object saveable global step tf train get or create global step inc global step tf assign add global step 1 critical saver tf train saver checkpoint dir checkpoint dir scaffold tf train scaffold saver saver checkpoint hook tf train checkpointsaverhook checkpoint dir checkpoint dir save step save step scaffold scaffold hook checkpoint hook session creator tf train chiefsessioncreator scaffold scaffold checkpoint dir checkpoint dir with tf train monitoredsession session creator session creator hook hook as mon sess gstep mon sess run global step if not gstep mon sess run 
dataset init for in range train step print mon sess run global step datum mon sess run inc global step if name main test saveable other info log console tf 1 13 py3 huwh1 huwh1 centos worksync python tf dataset saveable py warn tensorflow from home huwh1 virtualenv tf 1 13 py3 lib python3 6 site package tensorflow python data op dataset op py 1419 colocate with from tensorflow python framework op be deprecate and will be remove in a future version instruction for update colocation handle automatically by placer warn the tensorflow contrib module will not be include in tensorflow 2 0 for more information please see if you depend on functionality not list there please file an issue warn tensorflow from tf dataset saveable py 20 make saveable from iterator from tensorflow contrib data python ops iterator op be deprecate and will be remove in a future version instruction for update use tf datum experimental make saveable from iterator 2019 06 20 10 43 20 947675 I tensorflow core platform cpu feature guard cc 141 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 06 20 10 43 20 951984 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3408000000 hz 2019 06 20 10 43 20 952497 I tensorflow compiler xla service service cc 150 xla service 0x3f10ca0 execute computation on platform host device 2019 06 20 10 43 20 952539 I tensorflow compiler xla service service cc 158 streamexecutor device 0 segmentation fault core dump tf 1 13 py3 huwh1 huwh1 centos worksync
tensorflowtensorflow
Error saving file while using tf.data.experimental.TFRecordWriter (following "Writing a TFRecord file")
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 python notebook use python v3 7 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below 2 0 0 beta tfrecord example with tfdata ipynb zip python version 3 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory deep learn vm available on gcp you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior I be follow the tutorial at write a tfrecord file describe the expect behavior I should be able to save the file code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem see attach python notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflowtensorflow
Distributed TensorFlow error: Check failed: DeviceNameUtils::ParseFullName(new_base, &parsed_name)
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below 1 13 1 pc 1 6 0 rc0 rp python version 2 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory exact command to reproduce describe the problem try to run a distribute tensorflow example on cpu from command to run the example can be find at it work fine when I run it on single platform pc pc or laptop laptop or rp rp or multiple platform with same architecture pc laptop both x86 or rp rp both arm64 but a combination of arm64 and x86 fail from arm64 side with the follow error bash 2019 06 15 01 20 35 179745 f tensorflow core common runtime rename device cc 27 check fail devicenameutil parsefullname new base parse name source code log note that in your code the ip need to be set accordingly the command for pc be bash python dist setup py job name worker task index 0 the output bash 2019 06 14 18 20 35 040413 I tensorflow core platform cpu feature guard cc 141 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 06 14 18 20 35 070714 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3593265000 hz 2019 06 14 18 20 35 071281 I tensorflow compiler xla service service cc 150 xla service 0x4c9ce60 execute computation on platform host device 2019 06 14 18 20 35 071303 I tensorflow compiler xla service service cc 158 streamexecutor device 0 2019 06 14 18 20 35 072829 I tensorflow core distribute runtime rpc grpc channel cc 252 initialize grpcchannelcache for job ps 0 10 1 1 2 2222 2019 06 14 18 20 35 072861 I tensorflow core distribute runtime rpc grpc channel cc 252 initialize grpcchannelcache for job worker 0 
localhost 2223 2019 06 14 18 20 35 074703 I tensorflow core distribute runtime rpc grpc server lib cc 391 start server with target grpc localhost 2223 warn tensorflow from usr local lib python2 7 dist package tensorflow python framework op def library py 263 colocate with from tensorflow python framework op be deprecate and will be remove in a future version instruction for update colocation handle automatically by placer 2019 06 14 18 20 35 178858 I tensorflow core distribute runtime master session cc 1192 start master session 3634afcffbd6cc2d with config 2019 06 14 18 20 45 214939 w tensorflow core distribute runtime master session cc 1363 timeout for closing worker session 2019 06 14 18 20 55 218267 I tensorflow core distribute runtime master cc 267 createsession still wait for response from worker job ps replica 0 task 0 2019 06 14 18 21 05 218392 I tensorflow core distribute runtime master cc 267 createsession still wait for response from worker job ps replica 0 task 0 2019 06 14 18 21 15 218519 I tensorflow core distribute runtime master cc 267 createsession still wait for response from worker job ps replica 0 task 0 the command for rp be bash python dist setup py job name ps task index 0 the output bash usr local lib python2 7 dist package tensorflow python framework tensor util py 33 runtimewarning numpy dtype size change may indicate binary incompatibility expect 96 get 88 from tensorflow python framework import fast tensor util 2019 06 15 01 19 54 226102 I tensorflow core distribute runtime rpc grpc channel cc 215 initialize grpcchannelcache for job ps 0 localhost 2222 2019 06 15 01 19 54 226278 I tensorflow core distribute runtime rpc grpc channel cc 215 initialize grpcchannelcache for job worker 0 10 1 1 1 2223 2019 06 15 01 19 54 227740 I tensorflow core distribute runtime rpc grpc server lib cc 324 start server with target grpc localhost 2222 2019 06 15 01 20 35 179745 f tensorflow core common runtime rename device cc 27 check fail devicenameutil 
parsefullname new base parse name abort the source code from github bash simple example with one parameter server and one worker author tommy mulc from future import print function import tensorflow as tf import argparse import time import os flag none log dir logdir def main distribute baggage cluster tf train clusterspec ps localhost 2222 worker localhost 2223 let this node know about all other node if flag job name ps check if parameter server server tf train server cluster job name ps task index flag task index server join else be chief flag task index 0 check if this be the chief node server tf train server cluster job name worker task index flag task index graph with tf device cpu 0 a tf variable tf truncated normal shape 2 dtype tf float32 b tf variable tf truncated normal shape 2 dtype tf float32 c a b target tf constant 100 shape 2 dtype tf float32 loss tf reduce mean tf square c target opt tf train gradientdescentoptimizer 0001 minimize loss session monitor training session sess tf train monitoredtrainingsession master server target be chief be chief for I in range 1000 if sess should stop break sess run opt if I 10 0 r sess run c print r time sleep 1 sess close if name main parser argparse argumentparser flag for define the tf train clusterspec parser add argument job name type str default help one of ps worker flag for define the tf train server parser add argument task index type int default 0 help index of task within the job flag unparse parser parse know args main
tensorflowtensorflow
Custom filter
Bug
I want to build a custom filter with tf.nn.conv2d.
tensorflowtensorflow
GPU docs don't show how to test your install
Bug
How do you test your installation to make sure everything went OK?
tensorflowtensorflow
TypeError: moments_v2() got an unexpected keyword argument 'keep_dims'
Bug
TypeError: moments_v2() got an unexpected keyword argument 'keep_dims'
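For context (an assumption based on the TF 2.x API rename, where `keep_dims` became `keepdims` in ops such as `tf.nn.moments`): the `keepdims` semantics match NumPy's, sketched below without TensorFlow.

```python
import numpy as np

# keepdims=True keeps the reduced axis as size 1, so the result still
# broadcasts against the input -- the behavior the renamed argument
# (keep_dims -> keepdims in TF 2.x) controls in tf.nn.moments.
x = np.array([[1.0, 2.0], [3.0, 4.0]])

mean = np.mean(x, axis=1, keepdims=True)  # shape (2, 1), not (2,)
var = np.var(x, axis=1, keepdims=True)

print(mean.tolist())  # [[1.5], [3.5]]
print(var.tolist())   # [[0.25], [0.25]]
```

Passing the old spelling `keep_dims` to a v2 function would hit exactly the TypeError reported above.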
tensorflowtensorflow
TF 2.0 API Docs - tf.keras.callbacks sub-classes
Bug
tensorflow version 2 0 beta1 sub class of tf keras callback baselogger history callback csvlogger modelcheckpoint progbarlogger remotemonitor tensorboard and terminateonnan summary since tf keras callback be important for monitor model during train the api doc require extra attention imo example to add see below description to be define well see below especially the callback custom class return raise to add for well ux when need suggest improvement miss example one example per sub class would be enough for good ux and a link to a tutorial present an extra step for a user e g a short example inside the doc similar to the one in earlystopping link reducelronplateau link or lambdacallback link note modelcheckpoint and tensorboard have link to tutorial which have example baselogger and history be apply by default but that may not help understand they well example of a custom callback from fchollet deep learning with python book python class activationlogger keras callbacks callback def set model self model self model model call by the parent model before train to inform the callback of what model will be call it layer output layer output for layer in model layer self activation model keras model model model input layer output model instance that return the activation of every layer def on epoch end self epoch log none if self validation datum be none raise runtimeerror require validation datum validation sample self validation datum 0 0 1 obtain the first input sample of the validation datum activation self activation model predict validation sample f open activation at epoch str epoch npz w save array to disk np savez f activation f close example of a tensorboard callback from fchollet deep learning with python book python callback keras callback tensorboard log dir my log dir location of log file histogram freq 1 record activation histogram every 1 epoch embedding freq 1 record embed datum every 1 epoch history model fit x train y train epoch 20 batch size 128 
validation split 0 2 callback callback browse to and look at your model training description to be define well if need recommend to use the follow medium post which have quite decent description of each callback note mention in the earlystopping and modelcheckpoint description that they be should be both typically use together see callback list example from fchollet with 2 callback pass into model fit below to stop training when improvement stop and save the current good model during training save well only true python a list of 2 or more callback that can be pass into model fit callback list keras callback earlystopping monitor acc patience 1 keras callback modelcheckpoint save the current weight after every epoch filepath my model h5 monitor val loss save well only true these two argument mean you win t overwrite the model file unless val loss have improve model compile optimizer rmsprop loss binary crossentropy metric acc model fit x y epoch 10 batch size 32 callback callback list validation datum x val y val return raise to be define define well if need doc link
tensorflowtensorflow
Create a low-level Python tutorial for TF 2
Bug
description of issue what need change it be unclear how to translate the follow code to tf 2 python matplotlib inline import tensorflow as tf from matplotlib import pyplot as plt from tqdm auto import tqdm trange import numpy as np tf compat v1 disable eager execution y np array 10 1 6 3 1 7 x np array 4 15 1 1 7 1 trajs for j in range y shape 0 g tf compat v1 graph with g as default sess tf compat v1 session graph g yt tf compat v1 placeholder tf float64 xt tf compat v1 variable x trainable true elementwiseloss tf compat v1 reduce sum yt xt 2 axis 1 loss tf compat v1 reduce sum elementwiseloss opt tf compat v1 train adamoptimizer 0 1 minimize loss init tf compat v1 global variable initializer sess run init epoch 300 for I in trange epoch lossr x elr sess run opt loss xt elementwiseloss feed dict yt y if not I 10 for j in range y shape 0 trajs j append x j elr j print x x print y y print y x y x trajs np array trajs for I m in enumerate o plt scatter trajs I 0 trajs I 1 marker m c trajs I 2 cmap rainbow label str i plt legend plt grid plt colorbar plt show the solution python matplotlib inline import tensorflow as tf from matplotlib import pyplot as plt from tqdm auto import tqdm trange import numpy as np y np array 10 1 6 3 1 7 x np array 4 15 1 1 7 1 trajs for j in range y shape 0 yt tf variable y xt tf variable x tf function def elementwiseloss return tf reduce sum yt xt 2 axis 1 tf function def loss return tf reduce sum elementwiseloss optr tf optimizer adam 0 1 epoch 300 for I in trange epoch optr minimize loss xt lossr loss numpy x xt numpy elr elementwiseloss numpy if not I 10 for j in range yt shape 0 trajs j append xt j elr j print x x print y y print y x y x trajs np array trajs for I m in enumerate o plt scatter trajs I 0 trajs I 1 marker m c trajs I 2 cmap rainbow label str i plt legend plt grid plt colorbar plt show clear description it would be nice to have a tutorial page contain both v1 style solution and v2 style one
tensorflowtensorflow
Which licenses must be provided with my application when using the TensorFlow C API?
Bug
I am developing a commercial application that uses the TensorFlow C API in all kinds of places. For example, I found that TensorFlow uses the Apache License 2.0. However, when I downloaded the C API, it came with a huge LICENSE file that lists many 3rd-party libraries. Which licenses do I need to distribute with my application? Do I need to add the Apache License 2.0 for TensorFlow, or do I need to add the contents of the LICENSE file that comes with the C API?
tensorflowtensorflow
Segmentation fault when using a C++ custom op in tf.data.Dataset.map in TensorFlow 2.0
Bug
it seem if I have cpp custom op in a python function and I pass the python function to tf datum dataset map it will crush if I only call this python function outside it will be ok I ve spend a whole afternoon to find the bug I m really mad about this bug have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version use command below 2 0b1 python version 3 6 cuda cudnn version 10 7 4 gpu model and memory 7 5 24 gb import tensorflow as tf import pdb extr module tf load op library build libextr module so re extr module test bug ok def aaa filename re extr module test bug segmentation fault core dump return tf zero 1 tf float32 dataset tf datum textlinedataset aaa map aaa include tensorflow core framework op kernel h include tensorflow core framework register type h include tensorflow core framework tensor h include tensorflow core framework tensor shape h include tensorflow core framework register type h include tensorflow core framework op h include tensorflow core framework shape inference h include tensorflow core util work sharder h include include use namespace tensorflow register op testbug output dummy float setshapefn tensorflow shape inference inferencecontext c c set output 0 c makeshape 1 return status ok class testbugop public opkernel public explicit testbugop opkernelconstruction context opkernel context void compute opkernelcontext context override tensor dummy null op require ok context context allocate output 0 1 dummy register kernel builder name testbug device device cpu testbugop cmake minimum require version 2 8 project extr module compiler flag set cmake cxx flag cmake cxx flag std c 11 o2 openmp cxx flag wall fpic d glibcxx use cxx11 abi 0 dgoogle cuda 1 tensorflow dependency execute process command python3 c import os os environ tf cpp min log level 3 import tensorflow as tf print 
tf sysconfig get include end flush true output variable tf inc execute process command python3 c import os os environ tf cpp min log level 3 import tensorflow as tf print tf sysconfig get lib end flush true output variable tf lib message status find tf inc tf inc message status find tf inc external tf inc external nsync public message status find tf lib tf lib include directory tf inc include directory tf inc external nsync public link directory tf lib add library extr module share testbug cc target link librarie extr module tensorflow framework
tensorflowtensorflow
TF 2.0 / 1.14 API Docs - tf.keras.callbacks broken link to source
Bug
Description: the tf.keras.callbacks docs (tf r2.0 and r1.14) state that it's defined in an __init__.py, but the link is broken (404). There is no link to __init__.py in tf 1.13 (stable); 1.14 too, since it's just been released. URL(s) with the issue: 404 (r2.0, r1.14); links to the documentation entries (r2.0, r1.14).
tensorflowtensorflow
tf.app.flags implicit parsing potentially causes crash with exception
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Fedora 30. TensorFlow installed from: binary. TensorFlow version: 1.13.1. Python version: 3.7.3. CUDA version: 10.0. GPU model: NVIDIA Titan V (black). Describe the current behavior: the -h / --help / --helpshort / --helpfull arguments may cause exceptions to be triggered, as opposed to displaying flag default values and programmer-provided information. This seems to occur when the tf.app.flags Abseil wrapper attempts to use implicit parsing. Describe the expected behavior: when any of -h / --help / --helpshort / --helpfull is provided as an argument to a script using tf.app.flags, the help listing should appear. Code to reproduce the issue:

# mwe.py - displays TensorFlow help argument issue
from tensorflow import app

app.flags.DEFINE_string('myflag', default='', help='output flag')
FLAGS = app.flags.FLAGS

# This string causes the Abseil wrapper to begin processing the
# {0} flag format.
global_string = '{0}'.format(FLAGS.myflag)

def main(_):
    # This string causes TF to race the Abseil parse process,
    # which kills the help menu.
    print('used string: %s' % FLAGS.myflag)

if __name__ == '__main__':
    app.run(main)

Other info / logs: the below bash script uses the above code to display the issue; notably, using Abseil's own app/flags will not cause the exception to occur with the same source code.

python mwe.py --help
sed -i 's/global_string/# global_string/g' mwe.py
python mwe.py --help

This is related to issue #28581. After elaborating on the issue and not receiving a response, I have opened this new issue. If further information is required, please let me know how I may assist you.
tensorflowtensorflow
input_signature of a tf.function decorator crashes when using multiple GPUs with MirroredStrategy
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 cento linux 7 6 1810 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary pip binary tensorflow version use command below tensorflow gpu 2 0 0 beta0 python version 3 6 8 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 0 130 7 6 0 gpu model and memory tesla p100 sxm2 16 gb describe the current behavior tensorflow crash when check the input signature of a tf function decorator when use multiple gpu in a mirroredstrategy a valueerror be generate cause a perreplica object can not be convert to a tensor see the log below below you can find the minimum code need to reproduce the error the code run just fine when I only utilize one gpu strategy tf distribute mirroredstrategy device gpu 0 furthermore if the optional argument input signature be discard only use tf function the error disappear too again use multiple gpu hence the specific combination of input signature and multiple gpu cause the problem which I need for performance reason in my work describe the expect behavior the code below win t generate any error code to reproduce the issue import tensorflow as tf import numpy as np strategy tf distribute mirroredstrategy device gpu 0 gpu 1 with strategy scope dataset tf datum dataset from tensor slice np one 100 12 astype np float32 dataset dataset batch 4 dataset strategy experimental distribute dataset dataset def compute input datum return tf reduce sum input datum 1 tf function input signature tf tensorspec none 12 tf float32 def distribute run input datum return strategy experimental run v2 compute args input datum for x in dataset output distribute run x print output other info log traceback most recent call last file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs 
tensorflow2 lib python3 6 site package tensorflow python eager function py line 1216 in convert input to signature value dtype hint spec dtype file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework op py line 1100 in convert to tensor return convert to tensor v2 value dtype prefer dtype name file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework op py line 1158 in convert to tensor v2 as ref false file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework op py line 1237 in internal convert to tensor ret conversion func value dtype dtype name name as ref as ref file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework constant op py line 305 in constant tensor conversion function return constant v dtype dtype name name file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework constant op py line 246 in constant allow broadcast true file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework constant op py line 254 in constant impl t convert to eager tensor value ctx dtype file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python framework constant op py line 115 in convert to eager tensor return op eagertensor value handle device dtype valueerror attempt to convert a value perreplica 0 job localhost replica 0 task 0 device gpu 0 array 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 dtype float32 1 job localhost replica 0 task 0 device gpu 1 array 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 dtype float32 with an unsupported type to a tensor during handling of the 
above exception another exception occur traceback most recent call last file issue2 py line 19 in output distribute run x file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python eager def function py line 432 in call args kwd file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python eager function py line 1169 in canonicalize function input self flat input signature file datum gent gvo000 gvo00003 vsc41939 genius miniconda3 envs tensorflow2 lib python3 6 site package tensorflow python eager function py line 1222 in convert input to signature str input str input signature valueerror when input signature be provide all input to the python function must be convertible to tensor input perreplica 0 job localhost replica 0 task 0 device gpu 0 array 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 dtype float32 1 job localhost replica 0 task 0 device gpu 1 array 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 dtype float32 input signature tensorspec shape none 12 dtype tf float32 name none
tensorflowtensorflow
Android performance: CPU affinity
Bug
system information mobile device xiaomi a2 8 core tensorflow instal from source or binary source tensorflow version use command below master at 6cf83ea 2019 06 17 describe the current behavior the benchmark model that we run on the device perform very differently when we choose cpu affinity e g average inference time 32 bit cpu only for our custom tflite model be 172ms for taskset ff 143ms for taskset f0 and 281ms taskset 0f we use num thread 8 I e much more thread that we have core available when we increase num thread 12 inference time far decrease the change of min time be even more impressive 125m for ff 114ms for f0 and 211ms for 0f and 105ms for f0 with 12 thread I e 3 thread per cpu far increase num thread do not deliver visible improvement note that we have no control on the actual number of thread use by interpreter invoke and their cpu affinity describe the expect behavior I would expect the interpreter to choose the optimal threading model the document reduce variance between run on android read reduce variance between run on android when run benchmark on these phone there can be significant variance between different run of the benchmark while benchmark be nice our real necessity be to run tflite optimally in our app but we can not control thread cpu affinity or the interpreter but even if we could setup the interpreter thread configuration beyond the generic set the number of thread available to the interpreter across multiple device choose the optimal taskset be beyond the capability of most development team this can not be do by analyze the cpuinfo on our a2 development phone all 8 core be declare almost issuecomment 503589179 equal processor aarch64 processor rev 4 aarch64 processor 0 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x801 cpu revision 4 processor 1 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 
cpu variant 0xa cpu part 0x801 cpu revision 4 processor 2 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x801 cpu revision 4 processor 3 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x801 cpu revision 4 processor 4 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x800 cpu revision 2 processor 5 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x800 cpu revision 2 processor 6 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x800 cpu revision 2 processor 7 bogomip 38 40 feature fp asimd evtstrm aes pmull sha1 sha2 crc32 cpu implementer 0x51 cpu architecture 8 cpu variant 0xa cpu part 0x800 cpu revision 2 hardware qualcomm technology inc sdm660 maybe I be miss something but I see here no clue that the performance of f0 will be so much different from 0f the dynamic optimization should be perform once in a while because the load on the device may change either in the same process or because of other process app run along our app it could be useful to persist these finding so that next time interpreter start it could have a reasonable starting point I be not sure if this info be relevant for specific tflite model or for any model in our experiment the we only use similar fcnn network and their performance be effect by taskset just the same
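One heuristic worth noting for the affinity problem above: even when the /proc/cpuinfo entries look almost identical, the "cpu part" field (0x801 vs 0x800 here) does distinguish the two core clusters. A minimal sketch, using an abbreviated copy of the cpuinfo shown above, groups processor ids by that field:

```python
# Group processors by their "cpu part" field, which differs between the
# two clusters even when the rest of /proc/cpuinfo looks identical.
cpuinfo = """\
processor : 0
cpu part : 0x801
processor : 1
cpu part : 0x801
processor : 4
cpu part : 0x800
processor : 5
cpu part : 0x800
"""

def clusters(text):
    groups, current = {}, None
    for line in text.splitlines():
        key, _, value = (p.strip() for p in line.partition(":"))
        if key == "processor":
            current = int(value)
        elif key == "cpu part":
            groups.setdefault(value, []).append(current)
    return groups

print(clusters(cpuinfo))  # {'0x801': [0, 1], '0x800': [4, 5]}
```

On the A2 above this would map the 0x800 cluster to processors 4-7, i.e. the `f0` taskset mask that benchmarked fastest; whether 0x800 is always the faster cluster on other SoCs is an assumption, not a guarantee.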
tensorflowtensorflow
Need examples of mixed precision in eager execution mode
Bug
I have tried all of them; none of them work for me in eager mode. Could you provide some examples?
tensorflowtensorflow
TF 2.0 API Docs - tf.sets.difference
Bug
System information: TensorFlow version: 2.0. URL(s) with the issue: (doc page). Description of issue (what needs changing): Raises — listed and defined? Raises: not listed and not defined. Every method has a way it can be mishandled, e.g. when a wrong parameter or a wrong order of parameters is passed in; in this case, two sets a and b in which the last elements don't match will raise an error. Request visuals, if applicable: no visuals. Submit a pull request?: no.
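To illustrate the row-wise semantics the Raises section should document (a plain-Python sketch of the behavior, not the TensorFlow op itself): set difference operates per row, returning the elements of `a` absent from the corresponding row of `b`.

```python
# Plain-Python sketch of row-wise set difference: for each row, the
# elements of `a` not present in the matching row of `b`.
def rowwise_difference(a, b):
    return [sorted(set(ra) - set(rb)) for ra, rb in zip(a, b)]

a = [[1, 2, 3, 4], [5, 6, 7, 8]]
b = [[2, 4], [6]]
print(rowwise_difference(a, b))  # [[1, 3], [5, 7, 8]]
```

The sketch also shows why mismatched trailing rows are a natural error case for the real op: each row of `a` must pair with a row of `b`.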
tensorflowtensorflow
TF 2.0 API Docs - tf.keras.backend.count_params
Bug
System information: TensorFlow version: 2.0. URL(s) with the issue: (doc page). Description of issue (what needs changing): Clear description — the description is not clear; there are no details on how to use this symbol. Raises — listed and defined? No errors have been defined. Request visuals, if applicable: no visuals; an example using arrays could be represented in visual form for clarification. Submit a pull request?: no.
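As the kind of array example requested above: count_params returns the total number of scalar weights. A plain-Python sketch of that count (an illustration of the semantics, not the Keras implementation) is:

```python
# Sketch of what count_params computes: the total number of scalar
# entries across a list of weight shapes. For example, a dense layer
# mapping 3 inputs to 4 units has a (3, 4) kernel and a (4,) bias,
# giving 12 + 4 = 16 parameters.
from functools import reduce
from operator import mul

def count_params(shapes):
    return sum(reduce(mul, shape, 1) for shape in shapes)

print(count_params([(3, 4), (4,)]))  # 16
```

The hypothetical `shapes` argument stands in for a layer's weight tensors; the real symbol takes a layer or weight list, which the docs should spell out.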
tensorflowtensorflow
2.0b
Bug
URL(s) with the issue: (doc page). Description of issue (what needs changing): needs to be completely redone. Clear description: everything is wrong and the examples do not work.
tensorflowtensorflow
SequenceFeatures layer requires a SparseTensor, only to convert it to a regular tensor
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6
- TensorFlow installed from (source or binary): pip install
- TensorFlow version (use command below): v1.12.1-3259-gf59745a381 2.0.0-beta0
- Python version: 3.6.7 (6ec5cf24b7, Oct 20 2018, 03:02:14)

Describe the current behavior:
When I call a SequenceFeatures layer on a dense tensor (e.g. a tensor produced with NumPy), a TypeError is raised, because SequenceFeatures' call method expects a SparseTensor as input. Looking at the logs, it appears that the function producing the TypeError is sparse_tensor_to_dense, so SequenceFeatures does expect a SparseTensor only to convert it to a dense tensor, and fails if given a tensor that is already dense.

Describe the expected behavior:
I think SequenceFeatures.call should accept dense Tensor objects and just not try to convert them from sparse to dense.

Code to reproduce the issue:

import numpy as np
import tensorflow as tf
from tensorflow.feature_column import sequence_numeric_column
from tensorflow.keras.experimental import SequenceFeatures

print(tf.version.GIT_VERSION, tf.version.VERSION)

seq_fc = sequence_numeric_column('feature1')
seq_layer = SequenceFeatures([seq_fc])
batch_size, sequence_length = 3, 5
raw_data = np.array(range(batch_size * sequence_length), dtype=np.float32)
data = np.reshape(raw_data, (batch_size, sequence_length))
dict_data = {'feature1': data}
seq_layer(dict_data)

This produces the following error (traceback abridged to the relevant frames):

TypeError                                 Traceback (most recent call last)
.../tensorflow/python/feature_column/sequence_feature_column.py in get_sequence_dense_tensor(self, transformation_cache, state_manager)
    sp_tensor = transformation_cache.get(self, state_manager)
    dense_tensor = sparse_ops.sparse_tensor_to_dense(sp_tensor, default_value=self.default_value)
.../tensorflow/python/ops/sparse_ops.py in _convert_to_sparse_tensor(sp_input)
    if not isinstance(sp_input, sparse_tensor.SparseTensor):
        raise TypeError("Input must be a SparseTensor.")
TypeError: Input must be a SparseTensor.

Workaround: converting the input to sparse before calling SequenceFeatures does make the code work:

import numpy as np
import tensorflow as tf
from tensorflow.feature_column import sequence_numeric_column
from tensorflow.keras.experimental import SequenceFeatures

print(tf.version.GIT_VERSION, tf.version.VERSION)

seq_fc = sequence_numeric_column('feature1')
seq_layer = SequenceFeatures([seq_fc])
batch_size, sequence_length = 3, 5
raw_data = np.array(range(batch_size * sequence_length), dtype=np.float32)
data = np.reshape(raw_data, (batch_size, sequence_length))

# Convert the input tensor into a sparse tensor
zero = tf.constant(0, dtype=tf.float32)
where = tf.not_equal(data, zero)
indices = tf.where(where)
values = tf.gather_nd(data, indices)
sparse = tf.SparseTensor(indices, values, data.shape)
dict_data_but_sparse = {'feature1': sparse}
seq_layer(dict_data_but_sparse)  # correctly outputs the following
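The dense-to-sparse workaround above can be illustrated without TensorFlow. The following NumPy sketch (a hypothetical helper, not part of the issue's code) computes the same (indices, values, dense_shape) triple that the tf.where / tf.gather_nd calls build the SparseTensor from, and shows the workaround's caveat: entries equal to zero are silently dropped.

```python
import numpy as np

def dense_to_sparse_components(dense):
    # Indices of the non-zero entries, one [row, col, ...] per entry,
    # mirroring tf.where(tf.not_equal(dense, 0)).
    indices = np.argwhere(dense != 0)
    # The non-zero values themselves, mirroring tf.gather_nd.
    values = dense[dense != 0]
    return indices, values, dense.shape

data = np.arange(15, dtype=np.float32).reshape(3, 5)
indices, values, shape = dense_to_sparse_components(data)
# The zero at position [0, 0] is treated as missing, so the workaround
# only round-trips exactly when no real feature value equals 0.
```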
tensorflow/tensorflow
TFRecord guide doesn't show how to serialize and parse tensors
Bug
URL(s) with the issue: please provide a link to the documentation entry, for example "Creating a tf.Example message".

Description of issue (what needs changing): The TFRecord docs don't show how to serialize and parse tensors to a TFRecord. It's a minimal example with a bunch of strings, integers and floats. How do you include tensor features in a tf.train.Example? The whole tf.train / tf.io / tf.data thing feels scattered and full of unnecessary boilerplate. Why is it example = {'feature': feature} in train, but then we have to write a bunch of boilerplate tf.io helper functions like float_value / bytes_value to make the actual features? All I want to do is make a TFRecord with a bunch of tensors of different shapes in each entry.

Clear description: All of this stuff should be simplified and put into tf.data; tf.io feels pointless since tf.data is meant to do the same thing. Why can't the TensorFlow guide to TFRecords tell us how to write tensor data to a TFRecord and read it with tf.data? Why do we need to write so much boilerplate to add features to tf.train.Example? Why is data I/O spread out over four separate modules (train, io, dtypes and data)?
tensorflow/tensorflow
TF 2.0 upgrade script unable to handle the @ operator for matrix multiplication
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.7.3

Describe the current behavior:
Running the tf_upgrade_v2 script on a file containing the @ operator results in an exception (see below).

Code to reproduce the issue:
Use the file tmp.py with the following content:

import numpy as np

def mul(a, b):
    z = a @ b

Then run:

tf_upgrade_v2.exe --infile C:\any_path\tmp.py --outfile C:\another_path\tmp.py

Other info / logs (traceback abridged to the relevant frames):

Traceback (most recent call last):
  File "...\pasta\base\annotate.py", line 690, in visit_BinOp
    op_symbol = ast_constants.NODE_TYPE_TO_TOKENS[type(node.op)][0]
KeyError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
  File "...\tensorflow\tools\compatibility\ast_edits.py", line 916, in update_string_pasta
    t = pasta.parse(text)
  File "...\pasta\base\annotate.py", line 1196, in visit
    raise AnnotationError(e)
pasta.base.annotate.AnnotationError
tensorflow/tensorflow
Bug: tf.einsum returns different values from np.einsum for identical parameters
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary (Anaconda)
- TensorFlow version (use command below): 1.13.1
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0 / 7.6.0
- GPU model and memory: NVIDIA Titan X (Pascal), 12 GB

Describe the current behavior:
tf.einsum returns buggy values compared to np.einsum for identical parameters. Here is the current output (please find the code below):

np: 8.610251426696777
tf: 0.0

Please also note that the same code works as expected when run on the CPU. It also works correctly on the GPU in TF 1.12.0 (CUDA/cuDNN 9.0/7.3.1 and Python 3.6.8).

Describe the expected behavior:
Here is the expected output:

np: 8.610251426696777
tf: 8.610251426696777

Code to reproduce the issue:

from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

einsum_string = 'bijk,bijk->bij'

# load values
s_h_numpy = np.load('s_h.npy')
s_h = tf.convert_to_tensor(s_h_numpy, dtype=tf.float32)
h_numpy = np.load('h.npy')
h = tf.convert_to_tensor(h_numpy, dtype=tf.float32)

# perform einsum
ht_s_h_numpy = np.einsum(einsum_string, h_numpy, s_h_numpy)
ht_s_h = tf.einsum(einsum_string, h, s_h)

print('np: {np_val}\ntf: {tf_val}'.format(np_val=ht_s_h_numpy[2, 191, 191], tf_val=ht_s_h[2, 191, 191]))

Other info / logs:
Here are the input tensors used in the example (input_tensors.zip). The tensors are of shape (4, 192, 192, 2). I could not choose a smaller example because the buggy values appear only in parts of the second and all of the following batch items; the bug therefore seems to be connected to the size of the input.
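For reference, the einsum string in question contracts only the last axis. A small NumPy check (with random data, since the attached .npy files are not reproduced here) shows what the correct result should equal:

```python
import numpy as np

rng = np.random.default_rng(0)
# Small stand-ins for the attached (4, 192, 192, 2) tensors.
h = rng.standard_normal((4, 6, 6, 2)).astype(np.float32)
s_h = rng.standard_normal((4, 6, 6, 2)).astype(np.float32)

# 'bijk,bijk->bij' is an elementwise product summed over the last axis.
out = np.einsum('bijk,bijk->bij', h, s_h)
ref = (h * s_h).sum(axis=-1)
```

So a correct tf.einsum should agree with (h * s_h).sum(axis=-1) on every batch item, which is what the report says fails on GPU from the second batch item onward.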
tensorflow/tensorflow
Typo in TensorFlow Core 2.0b's advanced tutorial: loading data / build an image input pipeline
Bug
URL(s) with the issue: "Load and format the images"

Description of issue (what needs changing):
Under the "Load and format the images" section, the original code below fails to run, since it uses a variable img_path which should be image_path instead:

import matplotlib.pyplot as plt

image_path = all_image_paths[0]
label = all_image_labels[0]

plt.imshow(load_and_preprocess_image(img_path))
plt.grid(False)
plt.xlabel(caption_image(img_path))
plt.title(label_names[label].title())
print()

Clear description: this is a simple typo and needs to be fixed.
Correct links: not related. Parameters defined: not related. Returns defined: not related. Raises listed and defined: not related. Usage example: not related. Request visuals, if applicable: not related. Submit a pull request? Not this time.
tensorflow/tensorflow
tf.range with tf.constant int32 limit and dtype=tf.float32 fails
Bug
TensorFlow version: tf-nightly-gpu-2.0-preview 2.0.0.dev20190611 (or CPU equivalent), Linux; also tested with tf-nightly-2.0-preview 2.0.0.dev20190607 on Windows.

Try running the following two:

tf.range(tf.constant(102), dtype=tf.float32)  # fails
tf.range(102, dtype=tf.float32)               # works

The first one fails with (traceback abridged to the relevant frames):

ValueError                                Traceback (most recent call last)
.../tensorflow/python/ops/math_ops.py in range(start, limit, delta, dtype, name)
    start = ops.convert_to_tensor(start, dtype=dtype, name="start")
    limit = ops.convert_to_tensor(limit, dtype=dtype, name="limit")
.../tensorflow/python/framework/ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
    raise ValueError(
        "Tensor conversion requested dtype %s for Tensor with dtype %s: %r"
        % (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: tf.Tensor(102, shape=(), dtype=int32)
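For comparison, NumPy's arange accepts an integer limit together with a float dtype, which is the behavior one would expect from tf.range; a plausible workaround (untested here) is to cast the limit first, e.g. tf.range(tf.cast(limit, tf.float32)).

```python
import numpy as np

limit = np.int32(102)
# np.arange converts the integer limit to the requested float dtype,
# unlike tf.range(tf.constant(102), dtype=tf.float32) in the report,
# which refuses the implicit int32 -> float32 tensor conversion.
r = np.arange(limit, dtype=np.float32)
```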
tensorflow/tensorflow
Keras fit with mixed Dataset/ndarray data results in batch_size arg error
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 / macOS 10.14.5
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.14.0-rc0-55-g1b365ca304 1.14.0-rc1
- Python version: 3.6.8 / 3.7.2
- CUDA/cuDNN version: none
- GPU model and memory: none

Describe the current behavior:
Combining tf.data Datasets with tf.keras models fails since 1.14.0rc0 when using a Dataset for training but a NumPy array for validation data. Logs with errors for both 1.14 and 1.13 below.

Describe the expected behavior:
Being able to use a Dataset only for training but a NumPy array for validation data.

Code to reproduce the issue:

import tensorflow as tf
import numpy as np

data = np.random.randn(1000, 10)
target = np.random.randn(1000, 1)
ds = tf.data.Dataset.from_tensor_slices(data).batch(32)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, input_dim=10))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse')
model.fit(ds, steps_per_epoch=10, validation_data=(data, target), batch_size=32)

Other info / logs:

Behaviour using 1.14.0rc1 without batch_size=32 set (traceback abridged):

 1/10 [==>.......] - ETA: 1s - loss: 2.3122
Traceback (most recent call last):
  ...
  File ".../tensorflow/python/keras/utils/generic_utils.py", line 493, in make_batches
    num_batches = int(np.ceil(size / float(batch_size)))
TypeError: float() argument must be a string or a number, not 'NoneType'

With batch_size=32 set:

Traceback (most recent call last):
  ...
  File ".../tensorflow/python/keras/engine/training.py", line 1873, in _validate_or_infer_batch_size
    raise ValueError('The `batch_size` argument must not be specified when ...')
ValueError: The `batch_size` argument must not be specified when using dataset as an input.

Behaviour using 1.13.1 without batch_size=32 set: the same TypeError as above, raised from generic_utils.py line 488.

With batch_size=32 set, training works:

10/10 [====] - 0s 21ms/step - loss: 1.0369 - val_loss: 1.1500
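The root cause of the TypeError can be reproduced in isolation: when validation data is a plain ndarray but training data is a Dataset, no batch size gets inferred, and the failing line in generic_utils.make_batches ends up calling float(None).

```python
import numpy as np

# What make_batches effectively executes when batch_size was never set:
size, batch_size = 1000, None
try:
    num_batches = int(np.ceil(size / float(batch_size)))
except TypeError as e:
    message = str(e)
# message matches the reported log:
# float() argument must be a string or a number, not 'NoneType'
```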
tensorflow/tensorflow
TFLite Invoke function crashes
Bug
I came across a strange issue that only occurs on Huawei phones. [Phone information: image.] The first inference run does not crash; it always crashes on the second run. [Crash info: image.] Below is the process crash info obtained with ndk-stack, but I am unable to locate it in the TensorFlow source, as I built TFLite from source with no debug symbols and I do not know how to build with debug symbols. [Image.]

I have tried many methods to build TFLite with debug symbols, without success, for example:

bazel build -c dbg --strip=never --compilation_mode=dbg --per_file_copt=tensorflow/lite/.*\.cc@-g,-O0 //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt=-std=c++11

[Image.] The crash occurs at line 62 [image]; it crashes on the second run only, and only on Huawei phones; other phones and iOS have no crash. PS: this issue finally crashes at line 277 [image]. I guess the bias data address is unavailable. [Image.]
tensorflow/tensorflow
Error when using unique_with_counts function
Bug
On Windows Subsystem for Linux, Python 3.6.7, TensorFlow 1.13.1.

Code:

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=0)
print(x_train.shape)
print(x_test.shape)
x = tf.placeholder("float", shape=x_train.shape)
y = tf.placeholder("float", shape=x_test.shape[1])
compute_l0_dist = tf.count_nonzero(x - y, axis=1)
find_k_closest_tr_product = tf.contrib.framework.argsort(compute_l0_dist, direction='ASCENDING')
find_labels_k_closest_tr_product = tf.gather(y_train, find_k_closest_tr_product[0:param_k])
print(find_labels_k_closest_tr_product.shape)
find_u_labels, find_idx, find_counts = tf.unique_with_counts(find_labels_k_closest_tr_product)
find_predicted_label = tf.gather(find_u_labels, tf.argmax(find_counts))

Error (abridged; the full traceback repeats the same registered-kernels list three times):

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. ...
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'UniqueWithCounts' used by node UniqueWithCounts (defined at knn.py:91) with these attrs: [T=DT_BOOL, out_idx=DT_INT32]
Registered devices: [CPU, XLA_CPU]
Registered kernels: CPU kernels exist for T in {string, double, float, bfloat16, half, int8, uint8, int16, uint16, int32, int64} with out_idx in {int32, int64}; there is none for DT_BOOL.

Caused by op 'UniqueWithCounts', defined at knn.py line 91:
    find_u_labels, find_idx, find_counts = tf.unique_with_counts(find_labels_k_closest_tr_product)
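The registered-kernels list shows that UniqueWithCounts has no DT_BOOL kernel, so casting the labels to an integer type before the call is a plausible workaround (an assumption, not a confirmed fix from the thread). The intended majority vote can be sketched with NumPy's equivalent:

```python
import numpy as np

# Boolean labels of the k nearest neighbours, as in the kNN script.
labels = np.array([True, False, True, True, False])

# Cast to an integer dtype (tf.unique_with_counts has int32/int64
# kernels registered, but none for DT_BOOL).
u, counts = np.unique(labels.astype(np.int32), return_counts=True)
predicted = bool(u[np.argmax(counts)])
```

In the TF 1.13 graph this would correspond to wrapping the labels in tf.cast(..., tf.int32) before tf.unique_with_counts.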
tensorflow/tensorflow
TF 2.0 API docs: tf.keras.layers.GRU
Bug
URL(s) with the issue: (properties)
Clear description: float defaults are initialised with 0 rather than 0.0 (dropout=0, recurrent_dropout=0).
Parameters defined: the time_major argument is not documented.
Raises listed and defined: no.
Usage example: no.
tensorflow/tensorflow
TF 2.0 API docs: tf.errors.DeadlineExceededError
Bug
System information: TensorFlow version 2.0.
URL(s) with the issue.
Description of issue (what needs changing) / clear description: the description could be better; it needs more content for clarification.
Usage example: no usage example defined.
Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API docs: tf.dynamic_stitch
Bug
System information: TensorFlow version 2.0.
URL(s) with the issue.
Description of issue (what needs changing): raises listed and defined: no errors have been defined.
Submit a pull request?
tensorflow/tensorflow
TF 2.0 API docs
Bug
1. The parameters pool_function and input_spec (at L59) are not defined in the documentation, and no returns are defined.
tensorflow/tensorflow
TF 2.0 API docs: tf.div_no_nan
Bug
URL(s) with the issue.
Description of issue (what needs changing): the web page corresponding to the link does not exist (error 404).
Correct links: the link is not correct; the page doesn't exist.
tensorflow/tensorflow
TF 2.0 API docs: documentation describes non-existent symbol
Bug
This link gives a description of a symbol in a module which is supposed to exist at the linked location, but the module doesn't seem to exist.
tensorflow/tensorflow
TF 2.0 API docs: tf.data.experimental.unbatch
Bug
System information: TensorFlow version 2.0.
URL(s) with the issue.
Description of issue (what needs changing): the description needs more content, as no description has been provided for the available functions; no recommendation has been given on when and when not to use this symbol.
Parameters defined: no parameters have been defined.
Raises listed and defined: no errors defined.
Request visuals, if applicable: no visuals; the content would be clearer if visuals were provided.
Other: this symbol is deprecated; I am not sure if this review will still be useful.
Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API docs: tf.data.experimental.rejection_resample
Bug
System information: TensorFlow version 2.0.
URL(s) with the issue.
Description of issue (what needs changing): no recommendation of when and when not to use this symbol has been provided; the description is not clear and needs more content.
Raises listed and defined: no raises listed.
Usage example: no usage example has been provided.
Request visuals, if applicable: no visuals, but they would be very useful if present.
Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API docs: tf.debugging.assert_type
Bug
URL with the issue.
Description of issue (what needs changing): the API symbol doesn't contain a complete, self-contained, coherent, appropriately formatted and well-documented usage example / code sample, and the return values aren't defined.
Usage example: no usage example.
Returns defined: the return values aren't defined.
tensorflow/tensorflow
TF 2.0 API docs: tf.data.experimental.prefetch_to_device
Bug
URL with the issue.
Description of issue (what needs changing): the API symbol doesn't contain a complete, self-contained, coherent, appropriately formatted and well-documented code sample.
Usage example: no usage example.
tensorflow/tensorflow
TF 2.0 API docs: tf.errors.ResourceExhaustedError
Bug
URL with the issue.
Description of issue (what needs changing): the API symbol doesn't contain a complete, self-contained, coherent, appropriately formatted and well-documented code sample.
Usage example: no usage example.
tensorflow/tensorflow
tf.image.decode_and_crop_jpeg parameter crop_window change
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue.
Description of issue: documentation of the type of the parameter crop_window.
Clear description: the type of the parameter is a list of ints, yet the documentation says "crop_window: A Tensor of type int32".
tensorflow/tensorflow
Beta 1: tf.keras.layers.Conv2D with dilation_rate > 1 returns tensor with shape None
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow installed from (source or binary): tensorflow-gpu==2.0.0-beta1 (Colab)
- TensorFlow version (use command below): tensorflow-gpu==2.0.0-beta1
- Python version: pip3
- CUDA/cuDNN version: Colab
- GPU model and memory:

Describe the current behavior: if tf.keras.layers.Conv2D is used with the dilation_rate param > 1, it returns a tensor with shape None.
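For context, the spatial output size of a dilated convolution is statically computable, so a None shape is unexpected. The formula below is a sketch mirroring what Keras' conv output-length utility computes (the helper name is mine, not Keras' API):

```python
def conv_output_length(input_length, kernel_size, dilation_rate=1,
                       stride=1, padding='valid'):
    # Effective kernel size once dilation inserts gaps between taps.
    k_eff = kernel_size + (kernel_size - 1) * (dilation_rate - 1)
    if padding == 'same':
        length = input_length
    else:  # 'valid'
        length = input_length - k_eff + 1
    return (length + stride - 1) // stride

# A 32-wide input through a 3-tap kernel:
# dilation_rate=1 -> 30, dilation_rate=2 -> 28; neither should be None.
```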
tensorflowtensorflow
TPU train_on_batch strided slice error
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian 9
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 1.13.1
Python version: 3.5
GPU model and memory: TPU v3-8

Describe the current behavior:
Code and data which run fine on CPU throw an error on TPU. This only happens if I use train_on_batch instead of fit. I have two versions of the same model: one with fit (with 2 loops) and one with train_on_batch (with 3 loops: epochs, days' worth of data, batches within a day). train_on_batch throws the error:

    slice index 0 of dimension 0 out of bounds for 'strided_slice' op (StridedSlice) with input shapes: [0,1,1,1], and with computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.

The labels are provided in y2 and the size of 4 is correct. Why is the computed input tensor of size 3? I don't understand; it looks very much like a bug.

    model = tf.keras.Sequential()
    model.add(layers.LSTM(neurons, input_shape=(window_size, input_n), return_sequences=True))
    model.add(layers.LSTM(neurons))
    model.add(layers.Dense(output_n, activation='sigmoid'))
    opt = tf.train.AdamOptimizer(0.001)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['categorical_accuracy'])
    tpu_model = tf.contrib.tpu.keras_to_tpu_model(
        model,
        strategy=tf.contrib.tpu.TPUDistributionStrategy(
            tf.contrib.cluster_resolver.TPUClusterResolver(tpu=tpu_address)))

Model summary (layer type / output shape / params):

    lstm_input (InputLayer)   (None, 1024, 7)     0
    lstm (LSTM)               (None, 1024, 128)   69632
    lstm_1 (LSTM)             (None, 128)         131584
    dense (Dense)             (None, 4)           516

Training:

    for epoch in epochs:
        for d in days:
            # get arrays for the day
            features = np.asarray(d[1][2:9]).astype(dtype=np.float32)
            labels = np.asarray(d[1][9:13]).astype(dtype=np.int32)
            x, y = split_sequences(features, labels, buy, window_size)
            # train
            for slide in range(window_size):
                try:
                    x1, y1 = x[slide], y[slide]
                    x2 = x1.reshape(1, 1024, 7)
                    y2 = y1.reshape(1, 4)
                    h = tpu_model.train_on_batch(x2, y2)
                except Exception as e:
                    print('train exception', e)
                    continue

Describe the expected behavior: train_on_batch trains without exceptions.
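The error above mentions an input of shape [0, 1, 1, 1], i.e. a zero-sized batch slice. Independent of whether this is a TPU bug, a plain-Python guard (a sketch, not TPU or Keras code; the `sliding_windows` name is hypothetical) can drop degenerate windows before anything like train_on_batch ever sees them:

```python
# Yields only non-empty, full-length windows over a sequence, so a
# zero-sized slice can never be fed to a training step.
def sliding_windows(x, window_size):
    for start in range(len(x) - window_size + 1):
        window = x[start:start + window_size]
        if len(window) == window_size:  # skip degenerate slices
            yield window

data = [1, 2, 3, 4, 5]
windows = list(sliding_windows(data, 3))
print(len(windows))  # 3
```

Note that when the sequence is shorter than the window, the generator simply yields nothing instead of an empty slice.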
tensorflowtensorflow
gpu support share object exclusion
Bug
Update: this issue is about Maven artifact organization, dependency management, and documentation.
System information:
OS platform and distribution: Linux Ubuntu 18.04.2 LTS
TensorFlow installed from: Maven, version 1.13.1
TensorFlow version: 1.13.1
Python version: 3.6.7 (installed using virtualenv)
CUDA/cuDNN version: nvidia-smi 418.43, driver version 418.43, CUDA version 10.1
GPU model and memory: 4x NVIDIA 1080Ti

Describe the problem:
I have created a TensorFlow model in Python and saved it to disk using the standard method:

    builder = tf.saved_model.builder.SavedModelBuilder(model_directory)
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save(False)

Next I am loading the model in Java:

    SavedModelBundle savedModelBundle = SavedModelBundle.loader(modelDir)
        .withTags("serve")
        .withConfigProto(ConfigProto.newBuilder()
            .setGpuOptions(GPUOptions.newBuilder().setPerProcessGpuMemoryFraction(1.0).build())
            .setLogDevicePlacement(true)
            .build()
            .toByteArray())
        .load();
    Session session = savedModelBundle.session();

In my pom.xml I have the dependencies:

    org.tensorflow : tensorflow : ${tensorflow.version}
    org.tensorflow : proto : ${tensorflow.version}
    org.tensorflow : libtensorflow_jni_gpu : ${tensorflow.version}

However, this fails to use my GPU(s). On startup:

    2019-06-15 23:48:38.130731: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x7fbfc4656e10 executing computations on platform Host. Devices:
    2019-06-15 23:48:38.130768: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
    2019-06-15 23:48:38.131168: I tensorflow/core/common_runtime/direct_session.cc:317] Device mapping: /job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device

Please advise on how to use the GPU with SavedModelBundle.
tensorflowtensorflow
tf.reduce_max returns wrong value
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 1.12.2
Python version: 3.6.7
Bazel version (if compiling from source): 0.17.2
GCC/compiler version (if compiling from source): gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
CUDA/cuDNN version: no GPU
GPU model and memory: no GPU

The current behavior:
I have implemented a computation graph consisting of a custom Keras layer (GaussianSimilaritiesLayer) and a tf.reduce_max function. You can find the code below:

    import tensorflow as tf
    import numpy as np

    class GaussianSimilaritiesLayer(tf.keras.layers.Layer):
        def __init__(self, reference_values, covariance_matrix):
            super(GaussianSimilaritiesLayer, self).__init__()
            self.reference_values = tf.convert_to_tensor(np.vstack(reference_values).astype(np.float32))
            self.cov_inv = tf.convert_to_tensor(covariance_matrix.astype(np.float32))

        def call(self, inputs):
            diff = self.reference_values - inputs
            a = tf.matmul(diff, self.cov_inv)
            b = tf.multiply(a, diff)
            dist = tf.reduce_sum(b, axis=1)
            exp_arg = -0.5 * dist
            # return 1 * tf.math.exp(exp_arg)  # call returns desired value
            return tf.math.exp(exp_arg)        # call returns wrong value

    class Potential:
        def __init__(self, session, demonstration, covariance_matrix):
            self.inputs = tf.keras.layers.Input(shape=(3,))
            similarities = GaussianSimilaritiesLayer(demonstration, covariance_matrix)(self.inputs)
            max_similarity = tf.keras.layers.Lambda(lambda x: tf.reduce_max(x))(similarities)
            self.model = tf.keras.Model(inputs=self.inputs, outputs=max_similarity)
            self.session = session

        def __call__(self, s):
            return self.model.output.eval(session=self.session, feed_dict={self.inputs: s})

    if __name__ == '__main__':
        with tf.Session() as sess:
            sa_demonstration = [np.array([1, 2, 3], dtype=np.float32),
                                np.array([4, 5, 6], dtype=np.float32)]
            covariance_matrix = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 3]], dtype=np.float32)
            phi = Potential(sess, sa_demonstration, covariance_matrix)
            sample_s = np.array([[1, 2, 2.7]], dtype=np.float32)
            print(phi(sample_s))

When GaussianSimilaritiesLayer.call's return statement looks like below:

    return tf.math.exp(exp_arg)

the script outputs 0.13499996. This is the value of exp_arg from GaussianSimilaritiesLayer.call; the function should return e^-0.13499996.

The expected behavior:
When GaussianSimilaritiesLayer.call's return statement looks like below:

    return 1 * tf.math.exp(exp_arg)

the script outputs 0.87371594, which is the desired value.
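The two numbers in the report are internally consistent, which can be checked with plain Python: the "wrong" output 0.13499996 is the magnitude of exp_arg itself, while exp(exp_arg) matches the desired value 0.87371594, so in the failing case the exponential was effectively not applied to the reduced distance.

```python
import math

# exp_arg as reported; exp of it matches the reporter's "desired value",
# while the "wrong" output equals |exp_arg| with no exp applied at all.
exp_arg = -0.13499996
print(round(math.exp(exp_arg), 4))  # 0.8737
```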
tensorflowtensorflow
TF 2.0 API Docs: tf.estimator.SessionRunHook
Bug
URL(s) with the issue:
Description of issue (what needs changing):
Clear description: yes, clear description.
Correct links: the links to the source code are correct.
Parameters defined: all parameters are defined and formatted correctly.
Returns defined: return values are defined.
Raises listed and defined: no raises listed and defined.
Usage example: no usage example provided.
Request visuals, if applicable: no visuals included.
tensorflowtensorflow
TF 2.0 API Docs: tf.estimator.StepCounterHook
Bug
URL(s) with the issue:
Description of issue (what needs changing):
Clear description: yes, clear description.
Correct links: the links to the source code are correct.
Parameters defined: all parameters are defined and formatted correctly.
Returns defined: return values are defined.
Raises listed and defined: no raises listed and defined.
Usage example: no usage example provided.
Request visuals, if applicable: no visuals included.
tensorflowtensorflow
Having problems using Reshape in Keras
Bug
System information:
OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.0.0-beta0
Python version: 3.6.8
CUDA/cuDNN version: 10.0
GPU model and memory: GeForce GTX 750 Ti

Describe the current behavior:
When using the Reshape layer of Keras and -1 is used to infer the shape, the output shape is incorrect.

Describe the expected behavior / code to reproduce the issue:

    from tensorflow.python.keras.layers import Input, Lambda, Conv2D, Reshape, TimeDistributed
    from tensorflow.python.keras.models import *
    import tensorflow.keras as keras

    a = Input(shape=(12,), dtype='int32')
    # a.shape: (None, 12)
    d = Reshape((-1, 2, 2))(a)
    # actual result:   d.shape == (None, None, 2, 2)
    # expected result: d.shape == (None, 3, 2, 2)
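The -1 inference the reporter expects can be computed by hand: given the known per-sample size (12) and the target dims (-1, 2, 2), the missing dim is 12 // (2 * 2) = 3. A minimal pure-Python sketch of that inference (not Keras code; the `infer_shape` helper is hypothetical):

```python
from functools import reduce

# Resolve a single -1 in a target shape given the total element count,
# the same arithmetic Reshape is expected to perform.
def infer_shape(total, target):
    known = reduce(lambda a, b: a * b, [d for d in target if d != -1], 1)
    return tuple(total // known if d == -1 else d for d in target)

print(infer_shape(12, (-1, 2, 2)))  # (3, 2, 2)
```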
tensorflowtensorflow
TensorFlow Lite: undefined reference to flatbuffers::ClassicLocale::instance_
Bug
please make sure that this be a build installation issue as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag build template system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary source tensorflow version r1 13 python version 3 5 instal use virtualenv pip conda virtualenv bazel version if compile from source n a gcc compiler version if compile from source 7 4 0 cuda cudnn version n a gpu model and memory n a describe the problem I be follow the instruction to crosscompile tensorflow lite for my raspberry pi I get the error message show the undefined reference to flatbuffer I believe all the dependency for lite be download by the download dependency sh script but I do not have bazel instal be bazel necessary for compile tensorflow lite any other info log home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a while o in function tflite op custom while kernel init tflitecontext char const unsigned int while cc text 0x1648 undefined reference to flatbuffer classiclocale instance home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a audio spectrogram o in function tflite op custom audio spectrogram init tflitecontext char const unsigned int audio spectrogram cc text 0xe0c undefined reference to flatbuffer classiclocale instance home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a detection postprocess o in function tflite op custom detection postprocess init tflitecontext char const unsigned int detection postprocess cc text 0x211c undefined reference to flatbuffer classiclocale instance home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a detection postprocess o in function flexbuffer reference asint64 const detection postprocess cc text 
znk11flexbuffers9reference7asint64ev znk11flexbuffers9reference7asint64ev 0x264 undefined reference to flatbuffer classiclocale instance home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a if o in function tflite op custom if kernel init tflitecontext char const unsigned int if cc text 0xf8c undefined reference to flatbuffer classiclocale instance home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l lib libtensorflow lite a mfcc o mfcc cc text 0x118c more undefined reference to flatbuffer classiclocale instance follow collect2 error ld return 1 exit status tensorflow lite tool make makefile 267 recipe for target home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l bin minimal fail make home user gitrepo tensorflow tensorflow lite tool make gen rpi armv7l bin minimal error 1 make wait for unfinished job
tensorflowtensorflow
Inconsistency between names in the conv1d operation, and no support for causal padding
Bug
Only tf.keras.layers.Conv1D supports padding='causal', and this is very important for time-series research. However:

In tf.python.ops.conv1d: input is [batch_size, dim1_size, dim2_size, channels_in], filters are [filter_size, channels_in, channels_out]. This function behaves as expected: it sets up an op node according to the math and produces an output tensor of predictable shape.

In tf.keras.layers.Conv1D: input is (batch_size, steps, input_dim), filters is the size of the 1D filters, output is (batch_size, new_steps, filters). These are unusual names for the mathematical definition of convolution. Is "steps" the size of the time series, and "new_steps" the relation of the filter size to the input dim? Treating "filters" as the channel output size seems incompatible with the former API, which follows the mathematical definition closely and also extends to the multi-channel case as in 2D.

It would also be desirable to use causal padding with tf.python.ops.conv1d without having to add extraneous functions by hand to the code. Thank you; please give this your consideration.
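To illustrate why causal padding matters for time series, here is a minimal pure-Python sketch (not the TensorFlow API): the input is padded on the left only, so output[t] depends solely on inputs at times <= t, and the output length equals the input length.

```python
# Causal 1-D convolution: left-pad with (k - 1) zeros so each output
# position only sees current and past inputs, never future ones.
def causal_conv1d(x, kernel):
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0]
kernel = [0.5, 0.5]  # simple moving average
print(causal_conv1d(x, kernel))  # [0.5, 1.5, 2.5, 3.5]
```

This is the behavior padding='causal' provides in tf.keras.layers.Conv1D, and what the reporter would like to reach from the lower-level op without hand-written padding.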
tensorflowtensorflow
Keras callback documentation for on_train_batch_end vs. the actual code of ModelCheckpoint
Bug
URL(s) with the issue: on_train_batch_end

Description of issue (what needs changing):
Params defined: the params section of on_train_batch_end reads, in part, "logs: dict, metric results for this batch." So the logs argument is expected to contain only metrics when this method is called. This is what I assumed when I coded my own custom training function. This is different from the logs argument expected by on_train_batch_begin, for example, which is a dict with keys 'batch' and 'size' representing the current batch number and the size of the batch.

Now if we look at the code of ModelCheckpoint, we see that it has an on_batch_end method, which is called by on_train_batch_end, and which reads the 'size' key of logs. This is line 950:

    self._samples_seen_since_last_saving += logs.get('size', 1)

So the documentation of on_train_batch_end is not correct as to what dict is expected by the method, and this has an impact when we want to create custom training functions that work well with Keras callbacks.
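The defensive pattern ModelCheckpoint itself uses (`logs.get('size', 1)`) is the practical workaround: a custom callback should not assume 'size' is present in the logs dict passed to on_train_batch_end. A minimal pure-Python sketch (the `BatchCounter` class is hypothetical, not Keras code):

```python
# A callback-like object that tolerates logs dicts with or without 'size',
# mirroring the logs.get('size', 1) fallback in ModelCheckpoint.
class BatchCounter:
    def __init__(self):
        self.samples_seen = 0

    def on_train_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.samples_seen += logs.get('size', 1)

cb = BatchCounter()
cb.on_train_batch_end(0, {'loss': 0.5})             # metrics only, no 'size'
cb.on_train_batch_end(1, {'loss': 0.4, 'size': 32})  # 'size' present
print(cb.samples_seen)  # 33
```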
tensorflowtensorflow
Why does TensorFlow occupy different GPU memory for the same inference process on different GPU cards?
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, my code is similar to the example label_image.cc, in C++ with a .pb frozen model
OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7.3
Mobile device: sorry, I don't know
TensorFlow installed from (source or binary): source (I just compiled the C++ API library)
TensorFlow version (use command below): v1.13.1
Python version: Python 2.7
Bazel version (if compiling from source): 0.21.0
GCC/compiler version (if compiling from source): 4.8.5
CUDA/cuDNN version: 10.0 / 7.5
GPU model and memory: P40 24G x4, V100 24G x8

Describe the current behavior:
When I run the inference process (video action recognition with I3D, input shape [32, 224, 224, 3]) on different cards, I get the following information:
1. On P40, on device 0, it occupies 4235 MiB of memory; the available free memory is still about 6 GB (some other processes already exist on the same card).
2. On P40, on device 1, it occupies 8300 MiB (only the I3D inference model is running on the card).
3. On V100, on device 0, it only occupies about 1 GB of memory (only the I3D inference model is running on the card).

Where is the relevant allocator code in TensorFlow? I want to understand the GPU memory allocator mechanism clearly. Note that I set allow_growth = true in my C++ inference program, and I set CUDA_VISIBLE_DEVICES to the current device id on each run. Thank you very much.
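One hedged illustration of why the same model can show different "used" memory on different cards (this is a toy sketch, not TensorFlow's actual allocator code): a bin-based allocator like TF's BFC allocator rounds each request up to a bin size, so the reported total depends on how requests align with bins, and on per-device tuning, not just on the sum of tensor sizes.

```python
# Toy bin-rounding allocator: each request is rounded up to a multiple of
# bin_size, so the granted total exceeds the requested total. The bin size
# here is illustrative only, not TF's real value.
def round_to_bin(nbytes, bin_size=256):
    return ((nbytes + bin_size - 1) // bin_size) * bin_size

requests = [100, 300, 1000]
print(sum(requests), sum(round_to_bin(r) for r in requests))  # 1400 1792
```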
tensorflowtensorflow
Cannot build code that includes stream_executor/rng.h on Windows
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 window 10 x64 tensorflow instal from source or binary source tensorflow version use command below 1 14 rc0 python version 3 6 bazel version if compile from source 0 26 0 describe the current behavior I can not build the code include stream executor rng h on visual studio 2015 as follow tensorflow compiler jit graphcycle graphcycle cc 62 note see reference to class template instantiation tensorflow orderedset be compile error d a 1 s tensorflow tensorflow stream executor build 102 1 c compilation of rule tensorflow stream executor kernel fail exit 2 cl exe fail error execute command cd d a 1 b execroot org tensorflow set include c program file x86 microsoft visual studio 14 0 vc include c program file x86 microsoft visual studio 14 0 vc atlmfc include c program file x86 windows kit 10 include 10 0 17763 0 ucrt c program file x86 window kit netfxsdk 4 6 1 include um c program file x86 windows kit 10 include 10 0 17763 0 share c program file x86 windows kit 10 include 10 0 17763 0 um c program file x86 windows kit 10 include 10 0 17763 0 winrt set path c program file x86 microsoft visual studio 14 0 vc bin amd64 c window microsoft net framework64 v4 0 30319 c program file x86 microsoft visual studio 14 0 common7 ide c program file x86 microsoft visual studio 14 0 common7 tool c program file x86 window kit 10 bin x64 c program file x86 window kit 10 bin x86 c program file x86 microsoft sdks windows v10 0a bin netfx 4 6 1 tool x64 c window system32 set pwd proc self cwd set runfile manifest only 1 set temp c user vssadm 1 appdata local temp set tf download clang 0 set tf need cuda 0 set tf need opencl sycl 0 set tf need rocm 0 set tmp 
c user vssadm 1 appdata local temp c program file x86 microsoft visual studio 14 0 vc bin amd64 cl exe nologo dcompiler msvc dnominmax d win32 winnt 0x0601 d crt secure no deprecate d crt secure no warning bigobj zm500 ehsc wd4351 wd4291 wd4250 wd4996 I ibazel out x64 window opt bin iexternal com google absl ibazel out x64 window opt bin external com google absl iexternal nsync ibazel out x64 window opt bin external nsync iexternal eigen archive ibazel out x64 window opt bin external eigen archive iexternal local config sycl ibazel out x64 window opt bin external local config sycl iexternal gif archive ibazel out x64 window opt bin external gif archive iexternal jpeg ibazel out x64 window opt bin external jpeg iexternal protobuf archive ibazel out x64 window opt bin external protobuf archive iexternal com googlesource code re2 ibazel out x64 window opt bin external com googlesource code re2 iexternal farmhash archive ibazel out x64 window opt bin external farmhash archive iexternal fft2d ibazel out x64 window opt bin external fft2d iexternal highwayhash ibazel out x64 window opt bin external highwayhash iexternal zlib archive ibazel out x64 window opt bin external zlib archive iexternal double conversion ibazel out x64 window opt bin external double conversion iexternal snappy ibazel out x64 window opt bin external snappy iexternal nsync public ibazel out x64 window opt bin external nsync public iexternal eigen archive ibazel out x64 window opt bin external eigen archive iexternal gif archive lib ibazel out x64 window opt bin external gif archive lib iexternal gif archive window ibazel out x64 window opt bin external gif archive windows iexternal protobuf archive src ibazel out x64 window opt bin external protobuf archive src iexternal farmhash archive src ibazel out x64 window opt bin external farmhash archive src iexternal zlib archive ibazel out x64 window opt bin external zlib archive iexternal double conversion ibazel out x64 window opt bin external double 
conversion d clang support dyn annotation deigen mpl2 only deigen max align byte 64 deigen have type trait 0 dtf use snappy showinclude md o2 oy dndebug wd4117 d date redact d timestamp redact d time redact gy gw arch avx fobazel out x64 window opt bin tensorflow stream executor objs kernel kernel obj c tensorflow stream executor kernel cc execution platform bazel tool platform host platform tensorflow stream executor rng h 66 error c2589 constant illegal token on right side of tensorflow stream executor rng h 66 error c2059 syntax error tensorflow stream executor rng h 72 error c2589 constant illegal token on right side of tensorflow stream executor rng h 72 error c2059 syntax error target tensorflow compiler aot tfcompile fail to build describe the expect behavior code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem bazel build c opt color yes config monolithic verbose failure tensorflow compiler aot tfcompile it seem that two odd newline character be mix into rng h l64 as follow virtual bool dopopulaterandgaussian stream stream float mean float stddev devicememory v log error platform s random number generator do not support gaussian return false virtual bool dopopulaterandgaussian stream stream double mean double stddev devicememory v log error platform s random number generator do not support gaussian return false this issue be critical for we would you like to modify the code
tensorflowtensorflow
Cannot use object loaded by tf.saved_model.load to create Keras model
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 macosx 10 13 6 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below version 2 0 0 dev20190613 git version v1 12 1 4034 gb81b902c37 python version 3 6 8 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a describe the current behavior the keras code example in the documentation for the tf save model load function raise an exception python model tf keras model tf save model save model path import tf save model load path output import input describe the expect behavior I expect it to work as advertise code to reproduce the issue here s the full code I try to add as little as I could python import tensorflow as tf from tensorflow import kera import numpy as np path my save model x train np random rand 100 5 y train np random rand 100 1 model keras model sequential kera layer dense 1 input shape 5 model compile loss mse optimizer sgd model fit x train y train tf save model save model path import tf save model load path input keras layer input shape 5 output import input raise symbolicexception see stacktrace below 61 num output 62 except core notokstatusexception as e typeerror an op outside of the function building code be be pass a graph tensor it be possible to have graph tensor leak out of the function building context by include a tf init scope in your function build code for example the follow function will fail tf function def have init scope my constant tf constant 1 with tf init scope add my constant 2 the graph tensor have name input 1 0 during handling of the above exception another exception occur symbolicexception traceback most recent call last in 16 17 input kera layer input shape 5 18 output 
import input raise symbolicexception 401 return instance call args kwargs 402 403 miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager def function py in call self args kwd 431 args kwd 432 if we do not create any variable the trace we have be good enough 433 return self concrete stateful fn filter call canon args canon kwd pylint disable protect access 434 435 def fn with cond inner args inner kwd miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py in filter call self args kwargs 600 if isinstance t op tensor 601 resource variable op resourcevariable 602 self capture input 603 604 def call flat self args capture input miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py in call flat self args capture input 682 only need to override the gradient in graph mode and when we have output 683 if context execute eagerly or not self output 684 output self inference function call ctx args 685 else 686 self register gradient miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py in call self ctx args 451 attrs executor type executor type 452 config proto config 453 ctx ctx 454 replace empty list with none 455 output output or none miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 68 except typeerror as e 69 if any op be keras symbolic tensor x for x in input 70 raise core symbolicexception 71 raise e 72 pylint enable protect access symbolicexception other info I can actually call the import object with tensor as long as I pass the training argument python import tf random uniform 10 5 training false however I can not pass it the input tensor import input training false miniconda3 envs tf2 lib python3 6 site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 68 except typeerror as e 69 if any op be keras 
symbolic tensor x for x in input 70 raise core symbolicexception 71 raise e 72 pylint enable protect access symbolicexception
tensorflowtensorflow
tf.keras: predict, fit, predict returns old results
Bug
Windows 10, Python 3.7, TF 1.13.

    m = model.predict(input)
    print(model.fit(input, output, batch_size=input.shape[0], verbose=1))  # ok, loss: 16.0302
    print((model.predict(input) - m).all())  # 0

Synchronization of the weights for the last predict call does not work.
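A pure-Python sketch (no TensorFlow; the `ToyModel` class is hypothetical) of the check the reporter is performing: predictions taken before fit() should differ from predictions taken after, because fit() updates the weights and predict() should see the new ones.

```python
# Toy linear "model": predict uses the current weight, fit takes one
# gradient step on mean squared error, so predictions must change after fit.
class ToyModel:
    def __init__(self):
        self.w = 1.0

    def predict(self, xs):
        return [self.w * x for x in xs]

    def fit(self, xs, ys, lr=0.1):
        grad = sum(2 * (self.w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        self.w -= lr * grad

m = ToyModel()
xs, ys = [1.0, 2.0], [2.0, 4.0]
before = m.predict(xs)
m.fit(xs, ys)
after = m.predict(xs)
print(before != after)  # True: a correct implementation reflects the new weights
```

The reported bug is exactly the opposite observation: predict() after fit() still returns the pre-fit values.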
tensorflowtensorflow
Accuracy discrepancy between eager and graph mode, TensorFlow alpha 2.0.0
Bug
system information custom and stock code tensorflow instal from binary tf env txt cuda cudnn version nvcc nvidia r cuda compiler driver copyright c 2005 2018 nvidia corporation build on sat aug 25 21 08 04 central daylight time 2018 cuda compilation tool release 10 0 v10 0 130 gpu model and memory name nvidia geforce gtx 1060 with max q design memory 8124 mb describe the current behavior there be a large accuracy difference between run tensorflow in eager versus graph mode I e use tf function and not use tf function for the same model number of batch and epoch on the same datum three different test be run to measure performance on both a gpu and cpu if available each test use tensorflow alpha 2 0 and train use the mnist dataset provide bundle by tensorflow each test run for 1 epoch comprise of 1000 mini batch a mini batch contain 32 sample all test use adam optimizer and sparse categorical loss test description test 1 use the keras compile and fit function test 2 use tf gradienttape and do not use the tf function decorator test 3 use tf gradienttape and the tf function decorator the tf function decorator be apply to the function training on mnist mini batch describe the expect behavior I would expect roughly the same training accuracy after 1000 step for each test when use the tf function decorator a much great accuracy be achieve for the same model type number of epoch number of batch and dataset compare to a training run not use tf function code to reproduce the issue python import tensorflow as tf import tensorflow dataset as tfds from tensorflow import kera from tensorflow python client import device lib import time uncomment this to observe the device use to run tensorflow operation tf debugging set log device placement true description run three different test to measure performance on both a gpu and cpu if available each test use tensorflow alpha 2 0 and train use the mnist dataset provide bundle by tensorflow each test run for 1 epoch comprise of 1000 mini 
batch a mini batch contain 32 sample all test use adam optimizer and sparse categorical loss test description test 1 use the keras compile and fit function test 2 use tf gradienttape and do not use the tf function decorator test 3 use tf gradienttape and the tf function decorator the tf function decorator be apply to the function training on mnist mini batch issue observe example below for gpu for example test 1 provide a benchmark accuracy of 70 6 4 sec runtime on the training datum after 1000 step test 3 achieve roughly the same accuracy in a similar time even a little well 80 with 3 75 sec runtime test 2 achieve 20 40 accuracy 14 16 sec runtime after train for number of batch the only difference between test 2 and test 3 be no utilization of the tf function decorator in test 2 the slow down in test 2 be expect when not use the tf function decorator however I would expect roughly the same training accuracy after 1000 step for each test create several helper function and variable use for training def print info print tf version print tf execute eagerly print device lib list local device print def get available device name local device device lib list local device name for device in local device name append device name return name def get and prepare datum datum info tfds load mnist with info true as supervise true train datum test datum datum train datum test batch size 32 prep train datum train datum batch batch size prefetch 1 repeat return prep train datum info def create model img shape info feature image shape model keras sequential keras layer flatten input shape img shape kera layer dense 128 activation relu keras layer dropout 0 2 kera layer dense 10 activation softmax return model def train batch model image label loss fn optimizer train loss train accuracy with tf gradienttape as tape pre model image loss loss fn label pre gradient tape gradient loss model trainable variable grad and var zip gradient model trainable variable optimizer apply gradient grad 
and var train loss loss train accuracy label pre return loss pre def run batch batch train func step per epoch 1000 epoch 1 train loss kera metric mean train accuracy keras metric sparsecategoricalaccuracy for e in range epoch batch 0 for datum label in prep train datum loss pre batch train func datum label train loss train accuracy if batch 500 0 print batch accuracy loss n format batch train accuracy result train loss result if batch step per epoch break batch batch 1 basic info print info avail device name get available device name print avail device name step per epoch 1000 epoch 1 device device cpu 0 device gpu 0 for device in device if device not in avail device name print device not available format device continue device not available print n print print start test for device format device print test record accuracy loss batch number and test type test 1 test use keras compile and fit print begin benchmarke test 1 print test use keras function compile and fit datum prep prep train datum info get and prepare datum t1 start time time with tf device device keras fit model create model keras fit model compile optimizer adam loss sparse categorical crossentropy metric accuracy keras fit model fit prep train datum epoch epoch step per epoch step per epoch verbose 2 t1 end time time print test 1 time elapse n format t1 end t1 start test 2 test use gradient tape without tf function print begin benchmarke test 2 print test use tf gradienttape do not use tf function decorator datum prep prep train datum info get and prepare datum t2 start time time with tf device device eager model create model eager optimizer tf keras optimizer adam eager loss func keras loss sparsecategoricalcrossentropy def eager train batch datum label train loss train accuracy return train batch eager model datum label eager loss func eager optimizer train loss train accuracy run batch eager train batch step per epoch epoch t2 end time time print test 2 time elapse n format t2 end t2 start test 
3 test use gradient tape with tf function decorator print begin benchmarke test 3 print test use tf gradienttape use tf function decorator datum prep prep train datum info get and prepare data t3 start time time with tf device device graph model create model graph optimizer tf keras optimizer adam graph loss func keras loss sparsecategoricalcrossentropy tf function def graph train batch datum label train loss train accuracy return train batch graph model datum label graph loss func graph optimizer train loss train accuracy run batch graph train batch step per epoch epoch t3 end time time print test 3 time elapse n format t3 end t3 start other info log perf test tf 2 0 log
tensorflowtensorflow
tf.Print is not rendered correctly on the website
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing): tf.Print is not rendered correctly on the website. I understand that it is deprecated in TF 2.0, but it should still be rendered correctly on the TF website.

Correct links.

Submitting a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflowtensorflow
KeyError: 'ParallelInterleaveDataset' in TF 1.13 (MKL & GPU)
Bug
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow installed from: Anaconda dist (Python 3.6)
TensorFlow version: 1.13 MKL / 1.13 GPU
Python version: Python 3.6.6 (Anaconda, Inc.)

Describe the current behavior: There is an import error on tf.train.import_meta_graph while importing one of the models from the model zoo (Coral-ready models). The import works fine on 1.12, but not on 1.13.

Describe the expected behavior: The model should otherwise be loaded with the placeholders and graph.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): the model is taken from here: [model link] [model website]

Other info / logs:

Traceback (most recent call last):
  File "ssd_test.py", line 6, in <module>
    model = importer.import_meta(dir_path=model_dir, quantizer='linear')
  File "/home/local/sri/e32640/Documents/aiquantizer/importer.py", line 89, in __init__
    self.import_graph()
  File "/home/local/sri/e32640/Documents/aiquantizer/importer.py", line 118, in import_graph
    clear_devices=True)
  File "/home/local/sri/e32640/anaconda3/envs/tf13-gpu/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1435, in import_meta_graph
    meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
  File ".../tensorflow/python/training/saver.py", line 1457, in _import_meta_graph_with_return_elements
    **kwargs)
  File ".../tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File ".../tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File ".../tensorflow/python/framework/importer.py", line 399, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File ".../tensorflow/python/framework/importer.py", line 159, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'ParallelInterleaveDataset'

Update 1: The following code snippet is used for importing the model downloaded from the model zoo:

model_dir = 'ssd_mobilenet_v2'
meta = glob.glob(model_dir + '/*.meta')[0]
ckpt = meta.replace('.meta', '').strip()
graph = tf.Graph()
with graph.as_default():
    with tf.Session() as sess:
        reader = tf.train.import_meta_graph(meta, clear_devices=True)
        reader.restore(sess, ckpt)
        # write to events
        writer = tf.summary.FileWriter(logdir=model_dir, graph=tf.get_default_graph())
        writer.flush()
        varis = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
        for var in varis:
            print(var.name, '\n')
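For context on the failure mode: the KeyError comes from importer.py's _RemoveDefaultAttrs looking up each graph node's op name in the dictionary of ops registered in the running build. ParallelInterleaveDataset was a tf.contrib data op in TF 1.x, so a commonly suggested workaround (an assumption here, not verified against the 1.13 MKL build) is to import the module that registers the op — e.g. `import tensorflow.contrib` — before calling import_meta_graph. A stdlib sketch of the failing lookup, with the registry modeled as a plain dict:

```python
# Hypothetical op registry; the real one holds an OpDef per registered op.
registry = {"Placeholder": object(), "Identity": object()}

def lookup_op_defs(node_ops, op_dict):
    # Mirrors the failing line: op_def = op_dict[node.op] for every node.
    return [op_dict[op] for op in node_ops]

try:
    lookup_op_defs(["Placeholder", "ParallelInterleaveDataset"], registry)
except KeyError as err:
    print("KeyError:", err)  # same failure mode as the traceback above

# Registering the op first (which is what importing the defining module
# does for contrib ops) makes the same lookup succeed:
registry["ParallelInterleaveDataset"] = object()
lookup_op_defs(["Placeholder", "ParallelInterleaveDataset"], registry)
```

This also explains why the graph loads on one build but not another: the set of registered ops differs between builds, not the checkpoint itself.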
tensorflowtensorflow
MirroredStrategy does not work with CudnnLSTM
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 LTS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.13.1 (v1.13.1-2-ga5c387b5ed)
- Python version: 3.6.7
- Bazel version (if compiling from source): 0.21.0
- GCC/Compiler version (if compiling from source): 7.4.0
- CUDA/cuDNN version: 10.1
- GPU model and memory: 1080 Ti, 11 GB

Describe the current behavior
Unable to train a model using MirroredStrategy with CudnnLSTM. It fails with the following error:

FailedPreconditionError (see above for traceback): Error while reading resource variable cudnn_lstm/opaque_kernel from Container: localhost. This could mean that the variable is uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/cudnn_lstm/opaque_kernel)
(node Shape_2/Identity_1/ReadVariableOp, defined at /home/sharvil/virtualenvs/tensorflow/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1254)

Describe the expected behavior
The code should run.

Code to reproduce the issue

import tensorflow as tf

def gen():
    yield 0

def model_fn(features, labels, mode):
    shape = (1, 100, 10)
    x = tf.random_normal(shape)
    y = tf.zeros(shape)
    cell = tf.contrib.cudnn_rnn.CudnnLSTM(1, shape[-1])
    predictions = cell(x)
    loss = tf.reduce_sum(tf.squared_difference(predictions, y))
    train_op = tf.train.AdamOptimizer().minimize(loss, tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, predictions=predictions, loss=loss, train_op=train_op)

config = tf.estimator.RunConfig()
# Training begins successfully with the following line commented out:
config = config.replace(train_distribute=tf.distribute.MirroredStrategy())
estimator = tf.estimator.Estimator(model_fn, '/tmp/tf_bug', config=config)
estimator.train(input_fn=lambda: tf.data.Dataset.from_generator(gen, tf.int32))
tensorflowtensorflow
Session crashes when I use TFLiteConverter with the tf.lite.OpsSet.TFLITE_BUILTINS_INT8 option
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colaboratory (Ubuntu 18.04.2 LTS)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): pre-installed
- TensorFlow version (use command below): 2.0.0-beta0
- Python version: Python 3.6.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA V10.0.130, cuDNN 7.6.0
- GPU model and memory: K80

Describe the current behavior
I tried to build a fully quantized autoencoder model using TF 2.0 beta and Keras on Colaboratory, but when I run the TFLiteConverter convert() method, the Jupyter kernel crashes and the session is restarted.

Describe the expected behavior
The conversion finishes without any error.

Code to reproduce the issue
Please check the Colaboratory notebook.

Other info / logs
Jun 13, 2019 6:47:41 AM WARNING: WARNING:root:kernel dd551b22-18b1-4c36-917e-0dc4b698c41e restarted
Jun 13, 2019 6:47:41 AM INFO: KernelRestarter: restarting kernel (1/5), keep random ports
Jun 13, 2019 6:47:38 AM WARNING: what():  _Map_base::at
Jun 13, 2019 6:47:38 AM WARNING: terminate called after throwing an instance of 'std::out_of_range'
Jun 13, 2019 6:47:38 AM WARNING: INFO: Initialized TensorFlow Lite runtime.
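As background on what a fully integer (TFLITE_BUILTINS_INT8) conversion computes: each float tensor gets an affine int8 mapping, real ≈ scale * (q - zero_point) with q in [-128, 127], where scale and zero_point come from a calibration min/max range. A stdlib sketch of that arithmetic (the min/max-derived scale formula here is the textbook one, not lifted from the converter's source):

```python
def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Affine int8 quantization: q = round(x / scale) + zero_point, clamped.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # Inverse mapping back to an approximate real value.
    return scale * (q - zero_point)

# Example: calibration range [0.0, 6.0] (e.g. a ReLU6 activation).
rmin, rmax = 0.0, 6.0
scale = (rmax - rmin) / 255.0          # 256 int8 levels span the range
zero_point = -128 - round(rmin / scale)

q = quantize(3.0, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

The round trip loses at most about half a scale step per value, which is why full-integer models need a representative calibration dataset to pick good ranges.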
tensorflowtensorflow
polynomial_decay documentation page question
Bug
URL(s) with the issue: please provide a link to the documentation entry, for example: [link]

Description of issue (what needs changing): a clarification of the example, and why it actually works.

Clear description
The example mentions "if the global_step is 0"; it is unclear how the learning rate will actually change. The formula can be rewritten as follows (with l = learning_rate, e = end_learning_rate, g = global_step, s = decay_steps, p = power):

    d = (l - e) * (1 - g/s)^p + e

In the example g = 0, which means the formula becomes

    d = (l - e) * 1 + e = l - e + e = l

so I'm very unsure of why/how the learning rate in this example is actually going to decrease.
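To make the point concrete, the documented formula can be evaluated directly. A pure-Python sketch (names are mine, mirroring tf.train.polynomial_decay's documented defaults with cycle=False), showing that with global_step pinned at 0 the result never moves off the initial rate:

```python
def polynomial_decay(lr, global_step, decay_steps, end_lr=0.0001, power=1.0):
    # Documented formula (cycle=False):
    #   decayed = (lr - end_lr) * (1 - global_step/decay_steps) ** power + end_lr
    step = min(global_step, decay_steps)
    return (lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

# With global_step fixed at 0 the decayed rate equals lr at every call:
print(polynomial_decay(0.1, 0, 10000))      # ~0.1, i.e. d = l
# The rate only decays once global_step actually advances:
print(polynomial_decay(0.1, 5000, 10000))   # ~0.05005
print(polynomial_decay(0.1, 10000, 10000))  # end_lr
```

In other words, the decay in the docs example only happens because the global_step tensor is incremented by the training op; a constant global_step of 0 yields a constant learning rate, which is exactly the confusion described above.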
tensorflowtensorflow
Script halts with make_initializable_iterator with non-empty shared_name
Bug
System information
- Custom code
- Linux Ubuntu 16.04
- TensorFlow installed from: source
- TensorFlow version: 1.12.0
- Python version: 2.7.12
- Bazel version: 0.25.1
- CUDA/cuDNN version: 9.0.176 / 7.0
- GPU model and memory: GeForce GTX 1080 Ti

Describe the current behavior
The script doesn't finish. I discovered that it halts after TF_CloseSession is called, and even a KeyboardInterrupt can't stop the script. I also discovered that it exits normally if I pass an empty shared_name, so it looks like this place (L465) contains the bug: a possible deadlock, or the session could be waiting for the iterator to free its resources. I can also reproduce the problem on macOS without CUDA.

Describe the expected behavior
I expect this script to finish normally.

Code to reproduce the issue

import tensorflow as tf
import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''

def generator():
    for i in range(10):
        yield [20]

def main():
    config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
    dataset = tf.data.Dataset.from_generator(generator, output_types=tf.float32, output_shapes=tf.TensorShape([1]))
    train_iterator = dataset.make_initializable_iterator(shared_name='g')
    with tf.Session(config=config) as session:
        session.run(train_iterator.initializer)
        data_producer = train_iterator.get_next()
        session.run(data_producer)

if __name__ == '__main__':
    main()

Other info / logs

Traceback from gdb:
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00007fffeafd62b1 in nsync::nsync_mu_semaphore_p(nsync::nsync_semaphore_s_*) () from /usr/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#2  0x00007fffeafd4028 in nsync::nsync_mu_lock_slow_(nsync::nsync_mu_s_*, nsync::waiter*, unsigned int, nsync::lock_type_s const*) () from .../_pywrap_tensorflow_internal.so
#3  0x00007fffeafd411d in nsync::nsync_mu_lock(nsync::nsync_mu_s_*) () from .../_pywrap_tensorflow_internal.so
#4  0x00007fffe5618998 in tensorflow::ResourceMgr::Cleanup(std::__cxx11::basic_string<..., std::allocator<...> > const&) () from /usr/lib/python2.7/dist-packages/tensorflow/python/libtensorflow_framework.so
#5  0x00007fffe8a96fc0 in ?? () from .../_pywrap_tensorflow_internal.so
#6  0x00007fffe8a99ec5 in tensorflow::data::CapturedFunction::RunInstantiated(std::vector<...> const&, std::vector<...>*) () from .../_pywrap_tensorflow_internal.so
#7  0x00007fffe894b13b in tensorflow::data::GeneratorDatasetOp::Dataset::Iterator::~Iterator() () from .../_pywrap_tensorflow_internal.so
#8  0x00007fffe894b211 in tensorflow::data::GeneratorDatasetOp::Dataset::Iterator::~Iterator() () from .../_pywrap_tensorflow_internal.so
#9  0x00007fffe8946ac5 in ?? () from .../_pywrap_tensorflow_internal.so
#10 0x00007fffe8966571 in tensorflow::data::IteratorResource::~IteratorResource() () from .../_pywrap_tensorflow_internal.so
#11 0x00007fffe5615d1c in tensorflow::ResourceMgr::Clear() () from .../libtensorflow_framework.so
#12 0x00007fffea97826b in tensorflow::DirectSession::~DirectSession() () from .../_pywrap_tensorflow_internal.so
#13 0x00007fffea9787a1 in tensorflow::DirectSession::~DirectSession() () from .../_pywrap_tensorflow_internal.so
#14 0x00007fffe7903a07 in tensorflow::SessionRef::Close() () from .../_pywrap_tensorflow_internal.so
#15 0x00007fffe7ae2f0b in TF_CloseSession () from .../_pywrap_tensorflow_internal.so
#16 0x00007fffe789fae6 in ?? () from .../_pywrap_tensorflow_internal.so
#17 0x00000000004bc4aa in call_function (oparg=..., pp_stack=0x7fffffffd260) at ../Python/ceval.c:4350
#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#19 0x00000000004b9b66 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#20 0x00000000004c1f56 in fast_function (nk=..., na=..., n=1, pp_stack=0x7fffffffd460, func=...) at ../Python/ceval.c:4445
#21 call_function (oparg=..., pp_stack=0x7fffffffd460) at ../Python/ceval.c:4370
#22 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#23 0x00000000004b9b66 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#24 0x00000000004d5669 in function_call.lto_priv () at ../Objects/funcobject.c:523
#25 0x00000000004eef5e in PyObject_Call (kw=0x0, arg=<TF op object: graph/device/code-location fields, _id_value=2, _control_flow_context=None, _original_op=None; traceback: minimal_example.py:30, minimal_example.py:20 (main), /usr/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py:140 (make_initializable_iterator); CacheDataset>) at ../Objects/abstract.c:2546
#26 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602
#27 0x00000000004ae043 in PyObject_Call (kw=0x0, arg=..., func=...) at ../Objects/abstract.c:2546
#28 PyObject_CallFunctionObjArgs () at ../Objects/abstract.c:2773
#29 0x00000000004bed46 in PyEval_EvalFrameEx () at ../Python/ceval.c:2948
#30 0x00000000004c141f in fast_function (nk=..., na=..., n=0, pp_stack=0x7fffffffdb40, func=...) at ../Python/ceval.c:4435
#31 call_function (oparg=..., pp_stack=0x7fffffffdb40) at ../Python/ceval.c:4370
#32 PyEval_EvalFrameEx () at ../Python/ceval.c:2987
#33 0x00000000004b9b66 in PyEval_EvalCodeEx () at ../Python/ceval.c:3582
#34 0x00000000004eb69f in PyEval_EvalCode (locals={'generator': ..., '__builtins__': ..., '__file__': 'minimal_example.py', '__package__': None, 'tf': ..., '__name__': '__main__', 'main': ..., 'os': ..., '__doc__': None}, globals={'generator': ..., '__builtins__': ..., '__file__': 'minimal_example.py', '__package__': None, 'tf': ..., '__name__': '__main__', 'main': ..., 'os': ..., '__doc__': None}, co=0x7ffff7eecd30) at ../Python/ceval.c:669
#35 run_mod.lto_priv () at ../Python/pythonrun.c:1376
#36 0x00000000004e58f2 in PyRun_FileExFlags () at ../Python/pythonrun.c:1362
#37 0x00000000004e41a6 in PyRun_SimpleFileExFlags () at ../Python/pythonrun.c:948
#38 0x00000000004938ce in Py_Main () at ../Modules/main.c:640
#39 0x00007ffff760b830 in __libc_start_main (main=0x493370 <main>, argc=2, argv=0x7fffffffdf88, init=..., fini=..., rtld_fini=..., stack_end=0x7fffffffdf78) at ../csu/libc-start.c:291
#40 0x0000000000493299 in _start ()

Looking at the "info threads" output, we can see that all the threads are waiting for something:

  1 Thread 0x7ffff7faf840 (LWP 27148) "python" syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  2 Thread 0x7fffa2256700 (LWP 27153) "python" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  3 Thread 0x7fffa1a55700 (LWP 27154) "python" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  6 Thread 0x7fff93fff700 (LWP 27157) "python" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  7 Thread 0x7fffa0a53700 (LWP 27158) "python" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  8 Thread 0x7fffa1254700 (LWP 27159) "python" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
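The backtrace shows the session close blocked in nsync_mu_lock while the iterator's resource cleanup needs the same mutex. The shape of that hang can be sketched with a stdlib lock, using a timeout so the sketch itself cannot deadlock (the names are illustrative, not TensorFlow internals):

```python
import threading

# Stand-in for the resource-manager mutex that both the iterator cleanup
# and the session close need.
resource_mu = threading.Lock()

# One teardown path is already holding the mutex...
resource_mu.acquire()

def close_session(timeout):
    # ...so a close that needs the same mutex cannot make progress.
    # The timeout keeps this sketch from hanging the way the real script does.
    got_it = resource_mu.acquire(timeout=timeout)
    if got_it:
        resource_mu.release()
    return got_it

print(close_session(timeout=0.2))  # False: blocked, like TF_CloseSession above
```

This also matches the symptom that KeyboardInterrupt has no effect: the main thread is blocked inside native code, not in the Python interpreter loop where the interrupt would be delivered.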