tensorflow/tensorflow
AttributeError: 'list' object has no attribute 'op' when calling SparseCategoricalCrossentropy
Bug
**Describe the current behavior**
When I run the following code from the documentation of `SparseCategoricalCrossentropy`:

```python
cce = tf.keras.losses.SparseCategoricalCrossentropy()
loss = cce(
    [0, 1, 2],
    [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
print('Loss:', loss.numpy())  # Loss: 0.3239
```

I obtain the following error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      3 loss = cce(
      4     [0, 1, 2],
      5     [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
      6 print('Loss:', loss.numpy())  # Loss: 0.3239

3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/backend.py in sparse_categorical_crossentropy(target, output, from_logits, axis)
   4397   if not from_logits:
   4398     if (isinstance(output, (ops.EagerTensor, variables_module.Variable)) or
   4399         output.op.type != 'Softmax'):
   4400       epsilon_ = _constant_to_tensor(epsilon(), output.dtype.base_dtype)
   4401       output = clip_ops.clip_by_value(output, epsilon_, 1 - epsilon_)

AttributeError: 'list' object has no attribute 'op'
```
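A workaround (mine, not from the original report) is to pass tensors rather than plain Python lists, since the Keras backend inspects `output.op` on the prediction argument. A minimal sketch, assuming the TF 2.0 public API:

```python
import tensorflow as tf

cce = tf.keras.losses.SparseCategoricalCrossentropy()
# Wrapping the inputs in tf.constant avoids the code path where the
# backend reaches for `.op` on a plain Python list.
y_true = tf.constant([0, 1, 2])
y_pred = tf.constant([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
loss = cce(y_true, y_pred)
print('Loss:', loss.numpy())  # ~0.3239, as in the documentation example
```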
tensorflow/tensorflow
TensorFlow documentation should be associated with the applicable specific version(s) of TensorFlow
Bug
**URL(s) with the issue**: please provide a link to the documentation entry.

**Description of issue (what needs changing)**: I was reading this documentation page, but it is unclear which TensorFlow version the documentation applies to. Especially now that a stable version of TF 2 has been released, it is important to clarify which TF version the documentation refers to. On the top bar of the linked webpage there is a menu called "API" where you can select the API you are interested in. Right now, the sub-menu corresponding to the specific version of TF I was reading is not even highlighted, so I don't know whether I was reading the documentation for TF 1 or 2. To confuse people even further, even though I think I was reading the documentation for TF 2, another top bar says TF 1. Given that I was confused, other people can also be confused, so the documentation needs to be clarified. I suggest that every documentation page be associated, in a very clear way, with all applicable TF versions.
tensorflow/tensorflow
TypeError: reduce() missing 1 required positional argument: 'per_replica_value'
Bug
```python
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

GLOBAL_BATCH_SIZE = 10 * mirrored_strategy.num_replicas_in_sync

def input_fn():
    dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(GLOBAL_BATCH_SIZE)
    dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
    return dist_dataset

dataset = input_fn()

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        features, labels = inputs
        with tf.GradientTape() as tape:
            logits = model(features)
            cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
                logits=logits, labels=labels)
            loss = tf.reduce_sum(cross_entropy) * (1.0 / GLOBAL_BATCH_SIZE)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
        return cross_entropy

    per_example_losses = mirrored_strategy.experimental_run_v2(
        step_fn, args=(dist_inputs,))
    mean_loss = mirrored_strategy.reduce(
        tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
    return mean_loss

with mirrored_strategy.scope():
    for inputs in dataset:
        train_step(inputs)
```

This is one of the code snippets from the distribution strategy guide for custom training loops. I'm getting this TypeError:

```
TypeError: reduce() missing 1 required positional argument: 'per_replica_value'
```

But when I ran the same code this morning, it was working perfectly.
tensorflow/tensorflow
Desynchronized zipped datasets when using tf.data.experimental.ignore_errors
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38, 2.0.0
- Python version: Python 3.7.4
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory: P100, 16 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
When an error is caught using `tf.data.experimental.ignore_errors` on a zipped dataset, only the faulty dataset drops an element. The datasets are therefore desynchronized.

**Describe the expected behavior**
Datasets should stay synchronized by dropping an element from both datasets.

**Code to reproduce the issue**

```python
good_dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
bad_dataset = good_dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error"))
dataset = tf.data.Dataset.zip((bad_dataset, good_dataset))
dataset = dataset.apply(tf.data.experimental.ignore_errors())
for bad, good in dataset:
    print(float(good), float(bad))
# 1.0 1.0
# 2.0 0.5
# 0.0 0.25
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
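One possible mitigation (my sketch, not from the report, and untested against every TF release) is to zip the raw streams first and compute the failure-prone value inside a single `map` over the pair, so that `ignore_errors` drops the whole element at once:

```python
import tensorflow as tf

good = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
# Zip first, then apply the failing op to the zipped element: when
# check_numerics raises, ignore_errors drops the entire (bad, good)
# pair, keeping the two components aligned.
paired = tf.data.Dataset.zip((good, good)).map(
    lambda b, g: (tf.debugging.check_numerics(1. / b, "error"), g))
paired = paired.apply(tf.data.experimental.ignore_errors())
pairs = [(float(b), float(g)) for b, g in paired]
print(pairs)  # [(1.0, 1.0), (0.5, 2.0), (0.25, 4.0)]
```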
tensorflow/tensorflow
TFLite converter conversion error on unusual Dense layer
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: no
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git version v1.12.1-15611-g025365a736, version 2.0.0
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.26.1
- GCC/compiler version (if compiling from source): gcc 8.3.0
- CUDA/cuDNN version: no
- GPU model and memory: no

**Describe the current behavior**
When running the code below, a conversion error appears (shown in the logs below).

**Code to reproduce the issue**

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def __init__(self, num_units, **kwargs):
        super(MyDense, self).__init__(**kwargs)
        self.num_units = num_units

    def build(self, input_shape):
        kernel_shape = (input_shape[-1], 2 * self.num_units, self.num_units)
        bias_shape = (self.num_units,)
        self.kernel = self.add_weight('kernel', shape=kernel_shape, trainable=True)
        self.bias = self.add_weight('bias', shape=bias_shape, trainable=True)
        super(MyDense, self).build(input_shape)

    def call(self, inputs):
        return tf.einsum('ac,cde->ade', inputs, self.kernel) + self.bias

inputs = tf.keras.Input(shape=(10,), dtype=tf.float32)
outputs = MyDense(15)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
print('success')
```

**Logs**

```
2019-10-15 14:43:59.548878: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2019-10-15 14:43:59.566412: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3100000000 Hz
2019-10-15 14:43:59.567598: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3502840 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2019-10-15 14:43:59.567616: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 10)]              0
_________________________________________________________________
my_dense (MyDense)           (None, 30, 15)            4515
=================================================================
Total params: 4,515
Trainable params: 4,515
Non-trainable params: 0
_________________________________________________________________
2019-10-15 14:43:59.640959: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-10-15 14:43:59.641037: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-10-15 14:43:59.642744: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:829] Optimization results for grappler item: graph_to_optimize
2019-10-15 14:43:59.642759: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:831]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2019-10-15 14:43:59.642763: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:831]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-10-15 14:43:59.657830: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-10-15 14:43:59.657896: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-10-15 14:43:59.660184: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:829] Optimization results for grappler item: graph_to_optimize
2019-10-15 14:43:59.660198: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:831]   constant_folding: Graph size after: 21 nodes (3), 23 edges (4), time = 0.76ms.
2019-10-15 14:43:59.660203: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:831]   constant_folding: Graph size after: 21 nodes (0), 23 edges (0), time = 0.248ms.
Traceback (most recent call last):
  File "convert_error.py", line 25, in <module>
    tflite_model = converter.convert()
  File ".../lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 447, in convert
    **converter_kwargs)
  File ".../lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File ".../lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2019-10-15 14:44:00.554744: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 10 operators, 19 arrays (0 quantized)
2019-10-15 14:44:00.554825: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 10 operators, 19 arrays (0 quantized)
2019-10-15 14:44:00.554930: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 3 operators, 8 arrays (0 quantized)
2019-10-15 14:44:00.554945: F tensorflow/lite/toco/graph_transformations/propagate_fixed_sizes.cc:118] Check failed: dim_x == dim_y (450 vs. 15) Dimensions must match
Fatal Python error: Aborted

Current thread 0x00007f4cae2e4740 (most recent call first):
  File ".../lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52, in execute
  File ".../lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
  File ".../lib/python3.6/site-packages/absl/app.py", line 299, in run
  File ".../lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
  File ".../lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main
  File ".../bin/toco_from_protos", line 10, in <module>
Aborted (core dumped)
```
tensorflow/tensorflow
get_config missing from AdditiveAttention
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

**Describe the current behavior**
A model containing `AdditiveAttention` cannot be saved due to a missing `get_config`.

**Describe the expected behavior**
`AdditiveAttention` has `get_config` and can be saved.

**Code to reproduce the issue**

```python
import tensorflow as tf

max_tokens = 6
dimension = 2

# Variable-length int sequences.
query_input = tf.keras.Input(shape=(None,), dtype='int32')
value_input = tf.keras.Input(shape=(None,), dtype='int32')

# Embedding lookup.
token_embedding = tf.keras.layers.Embedding(max_tokens, dimension)
# Query embeddings of shape [batch_size, Tq, dimension].
query_embeddings = token_embedding(query_input)
# Value embeddings of shape [batch_size, Tv, dimension].
value_embeddings = token_embedding(query_input)

# CNN layer.
cnn_layer = tf.keras.layers.Conv1D(
    filters=100,
    kernel_size=4,
    # Use 'same' padding so outputs have the same shape as inputs.
    padding='same')
# Query encoding of shape [batch_size, Tq, filters].
query_seq_encoding = cnn_layer(query_embeddings)
# Value encoding of shape [batch_size, Tv, filters].
value_seq_encoding = cnn_layer(value_embeddings)

# Query-value attention of shape [batch_size, Tq, filters].
query_value_attention_seq = tf.keras.layers.AdditiveAttention()(
    [query_seq_encoding, value_seq_encoding])

# Reduce over the sequence axis to produce encodings of shape [batch_size, filters].
query_encoding = tf.keras.layers.GlobalAveragePooling1D()(query_seq_encoding)
query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(query_value_attention_seq)

# Concatenate query and document encodings to produce a DNN input layer.
input_layer = tf.keras.layers.Concatenate()([query_encoding, query_value_attention])

model = tf.keras.Model(inputs=[query_input, value_input], outputs=input_layer)
model.save('test.h5')
# NotImplementedError: Layers with arguments in `__init__` must override `get_config`.
```

**Other info / logs**
Code based on the documentation example.
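Until `get_config` is added upstream, a workaround sketch (mine, not from the report) is to subclass the layer and serialize its constructor argument; this assumes the layer stores that argument as `self.use_scale`:

```python
import tensorflow as tf

class SavableAdditiveAttention(tf.keras.layers.AdditiveAttention):
    """AdditiveAttention with the get_config override that model.save needs."""

    def get_config(self):
        config = super().get_config()
        config['use_scale'] = self.use_scale  # assumed attribute name
        return config

layer = SavableAdditiveAttention()
print(layer.get_config()['use_scale'])
```

Using this subclass in place of `tf.keras.layers.AdditiveAttention` should let `model.save(...)` serialize the layer, at the cost of registering it as a custom object when loading.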
tensorflow/tensorflow
Importing TensorFlow inside a function/object causes a memory leak
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): OSX 10.15
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: no
- TensorFlow installed from (source or binary): `pip install tensorflow==1.14`
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

**Describe the current behavior**
When importing TensorFlow from a function or object, the import statement somehow keeps a reference to the function and increases its reference count. The full import stack trace is never freed, making it impossible for the object, and anything referenced from that object or function, to be freed from memory.

**Describe the expected behavior**
It should be possible to free the function calling `import tensorflow`. This is not an issue with any other import, such as `import logging`.

**Code to reproduce the issue**

```python
import gc

class TfImporter:
    def __init__(self, name):
        self.name = name
        print(f"TfImporter init {self.name}")

    def get_tf(self):
        print(f"import tensorflow {self.name}")
        import tensorflow
        print(tensorflow.version.VERSION)

    def get_other_module(self):
        print(f"import logging {self.name}")
        import logging
        logging.info("message")

    def __del__(self):
        print(f"TfImporter deleted {self.name}")


def main():
    importer1 = TfImporter("1")
    importer1.get_other_module()
    del importer1
    print("importer1 deleted")

    importer2 = TfImporter("2")
    importer2.get_tf()
    del importer2
    print("importer2 deleted")

    importer3 = TfImporter("3")
    importer3.get_tf()
    del importer3
    print("importer3 deleted")

    print(f"garbage collection: {gc.collect()}")
    print("waiting for input")
    input()


main()
```

This outputs:

```
/Users/jan/miniconda/envs/foo/bin/python /Users/jan/code/tensorflow_error.py
TfImporter init 1
import logging 1
TfImporter deleted 1
importer1 deleted
TfImporter init 2
import tensorflow 2
1.14.0
importer2 deleted
TfImporter init 3
import tensorflow 3
1.14.0
TfImporter deleted 3
importer3 deleted
garbage collection: 22
waiting for input
TfImporter deleted 2

Process finished with exit code 0
```

So `importer2` is only freed after the Python application finishes; neither `gc.collect()` nor deleting the object causes it to be released in Python. This is not an issue in this toy example, but `importer2` could hold references to a large number of other objects that take considerable space in memory in reality. Also, this only happens for the first import: `importer3` can be freed without issue.

**Other info / logs**
tf_env.txt
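For comparison, the normal Python semantics the report expects can be demonstrated without TensorFlow: a function-local import caches the module process-wide in `sys.modules` but leaves no reference back to the importing object, so the object is freed as usual. A sketch using `json` as a stand-in for `tensorflow`:

```python
import gc
import sys
import weakref

class Importer:
    def load(self):
        import json  # stand-in for "import tensorflow"
        return json.dumps({"ok": True})

obj = Importer()
ref = weakref.ref(obj)
obj.load()
del obj
gc.collect()

# The module stays cached process-wide, but the importer itself is freed.
print("json" in sys.modules, ref() is None)  # True True
```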
tensorflow/tensorflow
module 'tensorflow.python.keras.api._v1.keras.losses' has no attribute 'Reduction'
Bug
I am using the Huber loss implementation in tf.keras in TensorFlow 1.14.0, as follows:

```python
huber_keras_loss = tf.keras.losses.Huber(
    delta=delta,
    reduction=tf.keras.losses.Reduction.SUM,
    name='huber_loss')
```

I am getting the error:

```
AttributeError: module 'tensorflow.python.keras.api._v1.keras.losses' has no attribute 'Reduction'
```

I have tried using `tf.losses.Reduction` and `tf.compat.v2.losses.Reduction`; nothing seems to work. Did TensorFlow remove `Reduction` from `tf.keras.losses`? It would be strange if they did, because their documentation still shows the argument.
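As a stop-gap on versions where `tf.keras.losses.Reduction` is missing, a Huber loss with a sum reduction is small enough to write by hand. This is my sketch of the standard Huber definition, not the library implementation:

```python
import tensorflow as tf

def huber_loss_sum(y_true, y_pred, delta=1.0):
    # Quadratic for |error| <= delta, linear beyond it, summed over elements.
    error = tf.abs(y_true - y_pred)
    quadratic = tf.minimum(error, delta)
    linear = error - quadratic
    return tf.reduce_sum(0.5 * quadratic ** 2 + delta * linear)

loss = huber_loss_sum(tf.constant([0., 0.]), tf.constant([0.5, 2.0]))
print(float(loss))  # 0.5*0.25 + (0.5 + 1.0) = 1.625
```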
tensorflow/tensorflow
Fix documentation for tf.batch_to_space
Bug
The documentation describes the steps merged together and is not clear.

- Is the link to the source code correct? Yes.
- Submit a pull request? Yes, submitting a fix here.
tensorflow/tensorflow
Variable creation fails in tf-nightly
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0-dev20191012
- Python version: 3.6

**Describe the current behavior**
Variable creation currently fails with:

```
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/variables.py", line 261, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/variables.py", line 255, in _variable_v2_call
    shape=shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/variables.py", line 236, in <lambda>
    previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/variable_scope.py", line 2645, in default_variable_creator_v2
    shape=shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 1410, in __init__
    distribute_strategy=distribute_strategy)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 1556, in _init_from_args
    graph_mode=self._in_graph_mode)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 231, in eager_safe_variable_handle
    shape, dtype, shared_name, name, graph_mode, initial_value)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 167, in _variable_handle_from_shape_and_dtype
    handle_data.shape_and_type.append(
AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append'
```

**Code to reproduce the issue**
From the examples: create a variable.

```python
import tensorflow as tf
var = tf.Variable(tf.zeros((1, 2, 3)))
```

**Other info / logs**
I believe this started occurring after aa25ad70c021968fb3a4a93ee814ca2fa699b32b. cc @mrry
tensorflow/tensorflow
Update build instructions for Docker 19.03
Bug
**Description of issue (what needs changing)**: the Docker build-from-source documentation seems to be out of date for Docker 19.03. For example, there is no `--runtime=nvidia` flag available any more.
tensorflow/tensorflow
Keras optimizer apply_gradients args grads and vars have wrong type in documentation
Bug
**URL(s) with the issue**: #L414

**Description of issue (what needs changing)**: `grads` and `vars` are documented as lists, but are actually passed as `zip` objects.

**Clear description**: this can be problematic when writing custom optimizers that iterate over the grads and vars multiple times, as in that case a `zip` object will not give the intended behaviour.
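The pitfall is plain Python and easy to demonstrate without TensorFlow: a `zip` object is a one-shot iterator, so a custom optimizer that loops over `grads_and_vars` twice silently sees nothing on the second pass:

```python
grads_and_vars = zip(["g0", "g1"], ["v0", "v1"])

first_pass = list(grads_and_vars)
second_pass = list(grads_and_vars)  # the iterator is already exhausted

print(first_pass)   # [('g0', 'v0'), ('g1', 'v1')]
print(second_pass)  # []

# Materializing once up front is the safe pattern for multi-pass code:
grads_and_vars = list(zip(["g0", "g1"], ["v0", "v1"]))
```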
tensorflow/tensorflow
Significant prediction slowdown after model.compile()
Bug
**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): `pip install tensorflow`
- TensorFlow version: 2.0.0
- Python version: 3.7
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.4
- GPU model and memory: GTX 1060, 6 GB

**Describe the current behavior**
Prediction speed is slowed down a lot after the `model.compile()` call.

**Describe the expected behavior**
Speed should not be affected. The `predict` function is used by users who assume it will work fast, because we use it all the time in production; it should not cause surprises to users.

**Code to reproduce the issue**
(attached as an image)
tensorflow/tensorflow
Potential error in codelab "Learn TensorFlow 2: Computer Vision"
Bug
So I was following this codelab (#4). On slide 5, the optimizer is set to `tf.train.AdamOptimizer()`, and it returns an error:

```
module 'tensorflow_core._api.v2.train' has no attribute 'AdamOptimizer'
```

I guess it should be `tf.optimizers.Adam()`.
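A hedged sketch of the presumable fix (the model and loss here are stand-ins, not the codelab's exact code): in TF 2.x the v1 symbol `tf.train.AdamOptimizer` is gone, and the optimizer lives under `tf.keras.optimizers`, with `tf.optimizers` as an alias:

```python
import tensorflow as tf

# Placeholder model; the point is the optimizer location in TF 2.x.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse')
```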
tensorflow/tensorflow
TF 2.0 API docs: tf.GradientTape
Bug
**URL(s) with the issue**:

**Description of issue (what needs changing)**:

Clear description: class members and return values in the `gradient()` function are not consistent with the actual values in code.

- Correct links: yes
- Parameters defined: yes
- Returns defined: need to be modified. For `gradient()`:
  1. The `unconnected_gradients` argument: "a value which can either hold 'none' or 'zero'" should be `NONE` or `ZERO`.
  2. Return value: in addition to the mentioned return value, if none of the provided elements in the `sources` argument is being watched, the function will return `None`.
- Raises listed and defined: yes
- Usage example: yes
- Submit a pull request? Yes
tensorflow/tensorflow
ValueError when using AdamOptimizer
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7
- CUDA/cuDNN version: CUDA 10.0 and cuDNN 7.6

**Describe the current behavior**
I instantiate Adam in two ways; one works, but the other reports a ValueError.

**Describe the expected behavior**
There should be no difference between the two ways of instantiating the optimizer.

**Code to reproduce the issue**

```python
from tensorflow import keras
from tensorflow.python.keras import layers, optimizers
import tensorflow as tf

# model
inp = layers.Input((None, None, 3))
x = layers.Conv2D(32, 3, padding='same')(inp)
x = layers.Conv2D(3, 3, padding='same')(inp)
model = keras.Model(inp, x)

# compile
opt = keras.optimizers.Adam()  # this method works
opt = optimizers.Adam()        # this method doesn't work
model.compile(optimizer=opt, loss='mse')

# data
x = tf.ones((16, 48, 48, 3))
y = tf.zeros((16, 48, 48, 3))

# train
model.fit(x, y, batch_size=1)
```

**Other info / logs**

```
Traceback (most recent call last):
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 527, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 265, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py", line 437, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 541, in _apply_op_helper
    values, as_ref=input_arg.is_ref).dtype.name
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 265, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py", line 437, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo.py", line 21, in <module>
    model.fit(x, y, batch_size=1)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 674, in fit
    steps_name='steps_per_epoch')
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 189, in model_iteration
    f = _make_execution_function(model, mode)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 565, in _make_execution_function
    return model._make_execution_function(mode)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2184, in _make_execution_function
    self._make_train_function()
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2116, in _make_train_function
    params=self._collected_trainable_weights, loss=self.total_loss)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizers.py", line 476, in get_updates
    grads = self.get_gradients(loss, params)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizers.py", line 92, in get_gradients
    if None in grads:
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1336, in tensor_equals
    return gen_math_ops.equal(self, other)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 3627, in equal
    name=name)
  File "/home/weitong/anaconda3/envs/tf2.0/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 545, in _apply_op_helper
    (input_name, err))
ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported.
```
tensorflow/tensorflow
No OpKernel was registered to support Op 'TPUReplicateMetadata' used by node TPUReplicateMetadata
Bug
When I run the following ipynb, I get:

```
No OpKernel was registered to support Op 'TPUReplicateMetadata' used by node TPUReplicateMetadata
(defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748)
with these attrs: [num_cores_per_replica=1, use_tpu=true, num_replicas=8, computation_shape=[],
host_compute_core=[], device_assignment=[], _tpu_replicate="cluster", padding_map=[], topology="",
step_marker_location="STEP_MARK_AT_ENTRY", allow_soft_placement=false]
Registered devices: [CPU, XLA_CPU]
```
tensorflow/tensorflow
TFLite converter error: tensorflow/lite/toco/tooling_util.cc:935
Bug
I'm trying to convert a TensorFlow model to a TFLite model from a frozen graph. I get an error saying I should send a bug report to the TFLite team. You can download the graph from here. Any help would be much appreciated.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 7
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7

**Describe the current behavior**

```
tensorflow/lite/toco/tooling_util.cc:935] Check failed: GetOpWithOutput(model, output_array) Specified output array "sample_sequence/while/Exit_3" is not produced by any op in this graph. Is it a typo? This should not happen. If you trigger this error please send a bug report, with code to reproduce this error, to the TensorFlow Lite team. Aborted (core dumped)
```

**Describe the expected behavior**

**Code to reproduce the issue**

```python
import tensorflow as tf
import sys
from tensorflow.python.platform import gfile
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.util import compat

graph_def_file = 'frozen_355.pb'
input_arrays = ['sample_sequence/model_shape']
output_arrays = ['sample_sequence/while/Exit_3']

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open('convert_model_0.tflite', 'wb').write(tflite_model)
```

**Other info / logs**

```
ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>()
      4
      5 converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
----> 6 tflite_model = converter.convert()
      7 open('/content/drive/My Drive/convert_model_0.tflite', 'wb').write(tflite_model)

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198       stdout = _try_convert_to_unicode(stdout)
    199       stderr = _try_convert_to_unicode(stderr)
--> 200       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    201   finally:
    202     # Must manually cleanup files.

ConverterError: See console for info.
2019-10-13 11:18:54.608948: F tensorflow/lite/toco/tooling_util.cc:935] Check failed: GetOpWithOutput(model, output_array) Specified output array "sample_sequence/while/Exit_3" is not produced by any op in this graph. Is it a typo? This should not happen. If you trigger this error please send a bug report, with code to reproduce this error, to the TensorFlow Lite team. Aborted (core dumped)
```
tensorflow/tensorflow
Surprising random seed behavior when using @tf.function
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca, 2.0.0
- Python version: 3.7.4
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
Random seeds work in a surprising way in TF 2.0 when using `tf.function`. In particular, the value of the global random seed is only taken into account when a function is traced, not when it is called. This is surprising and different from the TF 1.x behavior.

**Describe the expected behavior**
I expect the value of the global random seed to be taken into account every time a pseudo-random number is generated.

**Code to reproduce the issue**

```python
@tf.function
def rnd():
    return tf.random.uniform(shape=[])

tf.random.set_seed(42)
print(rnd())  # the rnd() function's seed is generated randomly now,
print(rnd())  # based on the current global random seed, which is 42
print()
tf.random.set_seed(43)  # resets the random sequence, but ignores this seed
print(rnd())
print(rnd())
print()
tf.random.set_seed(42)  # resets the random sequence, but ignores this seed
print(rnd())
print(rnd())
```

The output values are:

```
tf.Tensor(0.63789964, shape=(), dtype=float32)
tf.Tensor(0.8774011, shape=(), dtype=float32)

tf.Tensor(0.63789964, shape=(), dtype=float32)
tf.Tensor(0.8774011, shape=(), dtype=float32)

tf.Tensor(0.63789964, shape=(), dtype=float32)
tf.Tensor(0.8774011, shape=(), dtype=float32)
```

Notice that we get the same sequence of random numbers every time, ignoring the value of the global random seed: the only value that matters is the first one, set when the function gets traced. More code and examples of surprising behavior in this colab.

**Other info / logs**
Specifically, I would expect the output to look the same as when the function is not decorated with `tf.function`:

```
tf.Tensor(0.6645621, shape=(), dtype=float32)
tf.Tensor(0.68789124, shape=(), dtype=float32)

tf.Tensor(0.2733041, shape=(), dtype=float32)
tf.Tensor(0.5168259, shape=(), dtype=float32)

tf.Tensor(0.6645621, shape=(), dtype=float32)
tf.Tensor(0.68789124, shape=(), dtype=float32)
```

Note that the second sequence is different, as expected. In fact, the pseudo-random numbers should be identical whether the function is decorated or not, but that's a nice-to-have.
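The mechanism can be modeled in plain Python (a loose analogy of tracing, not TensorFlow's actual implementation): tracing runs the function body once and freezes whatever it read, so later changes to the global seed are invisible to subsequent calls:

```python
SEED = [42]

def trace_once(fn):
    # Like tf.function tracing: evaluate the Python body a single time
    # and replay the captured result on every later call.
    captured = fn()
    return lambda: captured

@trace_once
def rnd():
    return SEED[0]  # the seed is read only at trace time

SEED[0] = 43  # analogous to tf.random.set_seed(43) after tracing
print(rnd(), rnd())  # 42 42 -- the new seed is ignored
```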
tensorflow/tensorflow
Cannot import tensorflow.train
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: macOS 10.14.6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca 2.0.0
- Python version: 3.7.4
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: `import tensorflow.train` raises a `ModuleNotFoundError: No module named 'tensorflow.train'`.

Describe the expected behavior: no error. Note: this used to work fine in tf-nightly-2.0-preview.

Code to reproduce the issue:

```python
import tensorflow.train
```

Other info / logs:

```
Traceback (most recent call last):
ModuleNotFoundError: No module named 'tensorflow.train'
```

Workaround: instead of

```python
from tensorflow.train import BytesList
```

use

```python
import tensorflow as tf
BytesList = tf.train.BytesList
```
tensorflow/tensorflow
Pickle raises an error with TensorFlow 2.0 while it works with TensorFlow 1.14
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Debian GNU/Linux 9.11 (stretch)
- Mobile device: no
- TensorFlow installed from (source or binary): via pip
- TensorFlow version (use command below): 2.0
- Python version: 3.7.3
- Bazel version / GCC version / CUDA/cuDNN version / GPU model and memory: n/a

Describe the current behavior: currently, if I pickle a model in 2.0, it gives the error `TypeError: can't pickle _thread.local objects`, whereas in TensorFlow version 1.14 it works fine.

Describe the expected behavior: pickling should work in TF 2.0. (Attachment: tp.txt)

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem): in the attached file, run the get_model method for TF 2.0 and get_model_prev for TF 1.14. You will see that the pickled model is generated for 1.14 but not for 2.0.
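The root error here is generic Python behavior rather than anything TF-specific: any object graph that reaches a `threading.local` cannot be pickled. A minimal stand-in (a hypothetical `Holder` class, no TensorFlow required) reproduces the same `TypeError`:

```python
import pickle
import threading

class Holder:
    """Stand-in for an object that, like a TF 2.0 Keras model,
    holds a thread-local somewhere in its attribute graph."""
    def __init__(self):
        self.state = threading.local()

err = None
try:
    pickle.dumps(Holder())
except TypeError as e:
    err = str(e)

print(err)  # mentions _thread._local, e.g. "cannot pickle '_thread._local' object"
```

The practical workaround for models is to use the framework's own serialization (`model.save(...)` / `tf.keras.models.load_model(...)`) instead of pickle.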
tensorflow/tensorflow
Can't save a model with a TimeDistributed layer wrapping another model
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04.2
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.0.0
- Python version: 3.7.3
- CUDA/cuDNN version: CUDA 10.1 / cuDNN 7.5.1
- GPU model and memory: Titan X

Describe the current behavior: I get a ValueError when trying to save a tf.keras model with a `tf.keras.layers.TimeDistributed` layer wrapping another tf.keras model that has convolutional layers. I am using `tf.keras.Model.save` with the default save_format (SavedModel); see below for examples. There is no error when saving with `save_format='h5'`.

Describe the expected behavior: successfully save a SavedModel with the `tf.keras.layers.TimeDistributed` layer.

Code to reproduce the issue:

1. Wrapping a 1-layer convolutional NN with `tf.keras.layers.TimeDistributed`:

```python
import tensorflow as tf

input_shape = (100, 100, 3)
embed_model = tf.keras.Sequential([
    tf.keras.layers.Input(input_shape),
    tf.keras.layers.Conv2D(filters=32, kernel_size=3, strides=1),
])
input_sequence = tf.keras.layers.Input((None,) + input_shape)
sequence_embedding = tf.keras.layers.TimeDistributed(embed_model)
output = sequence_embedding(input_sequence)
model = tf.keras.Model(inputs=input_sequence, outputs=output)
model.save('model1')
```

Error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, None, 100, 100, 3]
```

2. Wrapping a pretrained tf.keras.applications model (closer to my actual use case):

```python
import tensorflow as tf

input_shape = (224, 224, 3)
mobilenet = tf.keras.applications.MobileNet(
    input_shape=input_shape, include_top=False,
    weights='imagenet', pooling='avg')
input_sequence = tf.keras.layers.Input((None,) + input_shape)
sequence_embedding = tf.keras.layers.TimeDistributed(mobilenet)
output = sequence_embedding(input_sequence)
model = tf.keras.Model(inputs=input_sequence, outputs=output)
model.save('model2')
```

Error:

```
ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, None, 224, 224, 3]
```

3. Saving as an HDF5 file instead:

```python
import tensorflow as tf

input_shape = (224, 224, 3)
mobilenet = tf.keras.applications.MobileNet(
    input_shape=input_shape, include_top=False,
    weights='imagenet', pooling='avg')
input_sequence = tf.keras.layers.Input((None,) + input_shape)
sequence_embedding = tf.keras.layers.TimeDistributed(mobilenet)
output = sequence_embedding(input_sequence)
model = tf.keras.Model(inputs=input_sequence, outputs=output)
model.save('model3.h5', save_format='h5')
```

This works without error.

4. Saving to the SavedModel format works with just Dense layers:

```python
import tensorflow as tf

input_shape = (100,)
embed_model = tf.keras.Sequential([
    tf.keras.layers.Input(input_shape),
    tf.keras.layers.Dense(units=10),
])
input_sequence = tf.keras.layers.Input((None,) + input_shape)
sequence_embedding = tf.keras.layers.TimeDistributed(embed_model)
output = sequence_embedding(input_sequence)
model = tf.keras.Model(inputs=input_sequence, outputs=output)
model.save('model4')
```

This works without error.
tensorflow/tensorflow
Keras does not verify supports_masking
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: OS X Mojave
- Mobile device: n/a
- TensorFlow installed from (source or binary): pip install
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca 2.0.0
- Python version: 3.7.4
- Bazel version / GCC version / CUDA/cuDNN version / GPU model and memory: n/a

Describe the current behavior: I add an Embedding layer with `mask_zero=True`, then I add another layer, either Flatten or GlobalAvgPool1D or GlobalMaxPool1D. Of these, only GlobalAvgPool1D supports masking, but when I compile and fit the model, no error is raised. I believe in the past (prior to TF 2.0) an error would be raised.

Describe the expected behavior: an error should be raised when using a layer that doesn't support masks on top of a layer that performs masking.

Code to reproduce the issue: code for repro. When setting `arch` to "max_pool" or "flatten", the `train_and_evaluate_model(hp)` function should raise an error, but instead no error is raised.
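The check being asked for can be sketched in plain Python (hypothetical layer objects, not Keras internals): walk the layer stack and raise as soon as a mask-producing layer is followed by one whose `supports_masking` flag is false.

```python
class Layer:
    """Minimal hypothetical layer: just the two flags relevant to masking."""
    def __init__(self, name, supports_masking, produces_mask=False):
        self.name = name
        self.supports_masking = supports_masking
        self.produces_mask = produces_mask

def check_masking(layers):
    """Raise if a produced mask reaches a layer that cannot consume it."""
    mask_active = False
    for layer in layers:
        if mask_active and not layer.supports_masking:
            raise TypeError(
                f"Layer {layer.name} does not support masking, "
                f"but was passed an input mask")
        if layer.produces_mask:
            mask_active = True

embedding = Layer("embedding", supports_masking=True, produces_mask=True)
avg_pool = Layer("global_avg_pool1d", supports_masking=True)
flatten = Layer("flatten", supports_masking=False)

check_masking([embedding, avg_pool])    # fine: masking is supported downstream
# check_masking([embedding, flatten])   # would raise TypeError
```

This mirrors the behavior the report expects: Embedding + GlobalAvgPool1D passes, while Embedding + Flatten should fail loudly instead of silently dropping the mask.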
tensorflow/tensorflow
keras.layers.LSTM does not work with model.evaluate after training
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Debian 9.11 (GCE deep learning image)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.3 (Anaconda3)
- CUDA/cuDNN version: V10.0.130 / 7.6.4.38
- GPU model and memory: V100, 16 GB

Describe the current behavior: `model.evaluate` raises this error after training:

```
Traceback (most recent call last):
  File "lstm.py", line 76, in <module>
    model.evaluate([left, right], labels)
  File "/home/zhaohaozeng/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 833, in evaluate
    use_multiprocessing=use_multiprocessing)
  File ".../tensorflow_core/python/keras/engine/training_v2.py", line 456, in evaluate
    sample_weights=sample_weights, steps=steps, callbacks=callbacks, **kwargs)
  File ".../tensorflow_core/python/keras/engine/training_v2.py", line 444, in _model_iteration
    total_epochs=1)
  File ".../tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
    batch_outs = execution_function(iterator)
  File ".../tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
    distributed_function(input_fn))
  File ".../tensorflow_core/python/eager/def_function.py", line 457, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow_core/python/eager/def_function.py", line 526, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File ".../tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
    self.captured_inputs)
  File ".../tensorflow_core/python/eager/function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File ".../tensorflow_core/python/eager/function.py", line 511, in call
    ctx=ctx)
  File ".../tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
  (0) Not found: Resource AnonymousIterator/AnonymousIterator1/N10tensorflow4data16IteratorResourceE does not exist.
	 [[node IteratorGetNext (defined at /home/zhaohaozeng/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
	 [[IteratorGetNext/_45]]
  (1) Not found: Resource AnonymousIterator/AnonymousIterator1/N10tensorflow4data16IteratorResourceE does not exist.
	 [[node IteratorGetNext (defined at /home/zhaohaozeng/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_11390]

Function call stack:
distributed_function -> distributed_function
```

Describe the expected behavior: `model.evaluate` should not raise this error after training.

Code to reproduce the issue: code and logs linked. I tested with this machine on GCE: n1-standard-4, 1 x NVIDIA Tesla V100; image c1-deeplearning-common-cu100-20191003 (Google deep learning VM, common DL GPU installed); Anaconda3; cuDNN 7.6.4.38 (got some errors with the original 7.4 cuDNN in the image); tensorflow-gpu 2.0.

Other info / logs: the problem may not happen with CPU.
tensorflow/tensorflow
Saving a GRU with dropout to SavedModel fails
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Colab
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.6.8

Describe the current behavior: when the model contains a GRU layer with `dropout` set and `activation='relu'`, the model is not savable.

Error:

```
Attempting to save a function b'__inference_gru_layer_call_fn_8041' which references a symbolic Tensor Tensor("dropout/mul_1:0", shape=(None, 3), dtype=float32) that is not a simple constant. This is not supported.
```

Describe the expected behavior: the model gets saved.

Code to reproduce the issue:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784, 3), name='digits')
x = layers.GRU(64, activation='relu', name='GRU', dropout=0.1)(inputs)
x = layers.Dense(64, activation='relu', name='dense')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer')
model.summary()
model.save('model', save_format='tf')
```

Based on: …
tensorflow/tensorflow
Cannot use dict-based datasets with Keras model.fit
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.4
- CUDA/cuDNN version: CUDA 10.1 / cuDNN 7.6.2

In order to make both the dataset and the Keras model have a good structure, I create a dataset and a vanilla model like this. The dataset is something like a dict keyed by feature name (e.g. `subscale`), and the model is something like:

```python
class VanillaModel(Model):
    def __init__(self, num_units, **kwargs):
        super(VanillaModel, self).__init__(**kwargs)
        self.num_units = num_units
        # one linear projection layer
        self.dense_proj = tf.keras.layers.Dense(num_units, activation='relu')

    def call(self, features):
        # forward pass
        outputs = self.dense_proj(features['inputs'])
        return {'outputs': outputs}
```

When I use the dict-based dataset from TFDS with Keras `model.fit`, the first call raises an exception:

```python
# compile the model using a dict with the same keys
model.compile('adam', {'outputs': 'mse'})
```

The error:

```
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1248, in cast_if_floating_dtype_and_mismatch
    if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```

I checked the code and found that when dicts are passed, iterating through `zip(targets, outputs)` will just get the keys of the dicts, so the string keys have no dtype (training_utils.py, L1246).

So how can I use a dict-based dataset and model with Keras `model.fit`?
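The root cause noted above is ordinary Python semantics, not a TensorFlow quirk: iterating a dict yields its keys, so `zip` over two dicts pairs up key strings rather than the tensors they map to. A minimal illustration:

```python
targets = {"outputs": [1.0, 2.0]}
outputs = {"outputs": [1.5, 2.5]}

# zip over the dicts themselves pairs the *keys*:
pairs = list(zip(targets, outputs))
print(pairs)  # [('outputs', 'outputs')] -- strings, which have no .dtype

# pairing the values requires matching on keys explicitly:
value_pairs = [(targets[k], outputs[k]) for k in targets]
print(value_pairs)  # [([1.0, 2.0], [1.5, 2.5])]
```

This is exactly why `cast_if_floating_dtype_and_mismatch` ends up asking a `str` for its `.dtype`: the loop received key strings where it expected tensors.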
tensorflow/tensorflow
Model has not yet been built
Bug
System information:
- OS platform and distribution: Ubuntu 18
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.7.3
- GCC/compiler version: GCC 7.3.0 (Anaconda Inc. on Linux)
- GPU model and memory: VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)

I have downloaded the official example of the convolutional variational autoencoder from this link. If one starts to execute this file, everything looks fine, but the problem arises when one wants to save the weights or call the summary method, i.e. `model.summary()`. This produces the following error:

```
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
```

P.S. I faced the same issue in my custom code as well. Since the file is large, I refrain from providing it here, but you can find it in this link.

Any idea what's going on and how to overcome this challenge? Thanks in advance.
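The usual reason for this error is that subclassed models create their weights lazily, so until `build()` is called or a real batch fixes the input shapes there is nothing to summarize. A plain-Python sketch of that lazy-build pattern (hypothetical classes, not Keras internals):

```python
class LazyLayer:
    """Weights are created on first call, once the input size is known."""
    def __init__(self, units):
        self.units = units
        self.weights_shape = None

    def build(self, input_dim):
        self.weights_shape = (input_dim, self.units)

    def __call__(self, input_dim):
        if self.weights_shape is None:
            self.build(input_dim)
        return self.units  # "output size" stands in for a real tensor

class LazyModel:
    def __init__(self):
        self.layers = [LazyLayer(64), LazyLayer(10)]

    def build(self, input_dim):
        for layer in self.layers:
            input_dim = layer(input_dim)

    def summary(self):
        if any(l.weights_shape is None for l in self.layers):
            raise ValueError("This model has not yet been built.")
        return [l.weights_shape for l in self.layers]

m = LazyModel()
# m.summary() here would raise ValueError -- nothing is built yet
m.build(784)              # or run one batch of data through the model
print(m.summary())        # [(784, 64), (64, 10)]
```

The practical fix mirrors the error message: call `model.build(input_shape)` (or run one batch through the model) before `summary()` or `save_weights()`.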
tensorflow/tensorflow
CTC beam search decoder in TensorFlow Android
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry.

Description of issue (what needs changing): how do I decode the output float array into a string using the CTC beam search decoder in Android? I'm not using TFLite as of now.

- Clear description: for example, why should someone use this method? How is it useful?
- Correct links: is the link to the source code correct?
- Parameters defined: are all parameters defined and formatted correctly?
- Returns defined: are return values defined?
- Raises listed and defined: are the errors defined?
- Usage example: is there a usage example? See the API guide on how to write testable usage examples.
- Request visuals, if applicable: are there currently visuals? If not, will it clarify the content?
- Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflow/tensorflow
TF 2.0 can't authenticate with Google Storage in Colab
Bug
I'm trying to use the TPU in Colab, so I have to authenticate to my Google Storage account to feed the data (as I understand from past tutorials on using TPUs in Colab). When I try to authenticate, I get the following error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      1 from google.colab import auth
----> 2 auth.authenticate_user()

/usr/local/lib/python3.6/dist-packages/google/colab/auth.py in authenticate_user(clear_output)
    154     with tf.compat.v1.Session('grpc://{}'.format(colab_tpu_addr)) as sess:
    155       with open(_get_adc_path()) as auth_info:
--> 156         tf.contrib.cloud.configure_gcs(
    157             sess, credentials=json.load(auth_info))
    158   if _check_adc():

AttributeError: module 'tensorflow' has no attribute 'contrib'
```

The code I ran is:

```python
!pip install tensorflow-gpu
from google.colab import auth
auth.authenticate_user()
```

The following link contains the code to reproduce the error.
tensorflow/tensorflow
TF 2.0 function and package autocomplete broken in PyCharm
Bug
```python
from tensorflow.keras import callbacks
from tensorflow.keras.optimizers import Adam
```

The packages in bold are just a few examples that are not recognized in PyCharm. The issue was first spotted and reported in the RC releases; at that time we were told to wait for the release of 2.0, because the issue is a bit more complex. The 2.0 is now here and, as far as I'm concerned, the issue still persists. Is there any configuration to be made in order for this to work, or is this problem still an unresolved issue?

PyCharm gives the following complaint: Cannot find reference 'keras' in '__init__.py'.
tensorflow/tensorflow
Error using TensorBoard callback with histogram_freq != 0
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows 10
- TensorFlow installed from (source or binary): installed with pip
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6
- CUDA/cuDNN version: CUDA 10.1.243 (426.00_win10), cuDNN 10.1-windows10-x64-v7.6.3.30
- GPU model and memory: GeForce GTX 1060, 6 GB dedicated

Describe the current behavior: the code in the section below results in an error on this line (callbacks_v1.py#L385): the test function does not have `fetch_callbacks` defined. The error I get is:

```
  File "...\tensorflow_gpu\lib\site-packages\tensorflow\python\keras\callbacks_v1.py", line 386, in on_epoch_begin
    self.merged] = self._fetch_callback
AttributeError: 'function' object has no attribute 'fetch_callbacks'
```

Describe the expected behavior: there should be no error. The code works only when `histogram_freq == 0`.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import SGD

num_features = 100
train_x = np.random.rand(40, num_features)
train_y = np.random.randint(2, size=40)

# the input layer
input_layer = Input(shape=(num_features,), name='input')
output = Dense(10, activation='sigmoid', name='hidden_1')(input_layer)
output = Dense(1, activation='sigmoid', name='output')(output)

model = Model(inputs=input_layer, outputs=output)
sgd = SGD(lr=0.01, decay=1e-4, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])

tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=os.path.join(out_dir, datetime.now().strftime('%Y%m%d-%H%M%S')),
    histogram_freq=2, write_graph=True, write_images=True)
my_callbacks = [tensorboard_callback]

model.fit(x=train_x, y=train_y, validation_split=0.2, epochs=5,
          callbacks=my_callbacks)
```

Other info / logs: model.summary():

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 100)               0
_________________________________________________________________
hidden_1 (Dense)             (None, 10)                1010
_________________________________________________________________
output (Dense)               (None, 1)                 11
=================================================================
Total params: 1,021
Trainable params: 1,021
Non-trainable params: 0
```
tensorflow/tensorflow
TF 1.x: RecursionError: maximum recursion depth exceeded
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Linux Ubuntu 18.04.2 LTS (Docker)
- TensorFlow installed from (source or binary): binary (conda)
- TensorFlow version (use command below): unknown 1.14.0
- Python version: 3.7.3
- CUDA/cuDNN version: CUDA 10.0 / cuDNN 7.4.1.5-1
- GPU model and memory: Quadro RTX 6000 (major: 7, minor: 5, memoryClockRate(GHz): 1.77)

Describe the problem: the code below produces a RecursionError in TF 1.x, presumably because of the large dataset. The error does not occur for much smaller values of the `n_files` variable. The error does also not occur in TF 2.0.

Describe the expected behavior: no error; working `model.fit`.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

# TF 2.0
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
tf.debugging.set_log_device_placement(True)

# TF 1.x
tf.compat.v1.enable_eager_execution()
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True
sess = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(sess)

assert tf.executing_eagerly()

batch_size = 256
num_tsteps = 144
num_features = 130
num_units = 88
n_files = 3320
n_files //= 10
num_epochs = 1000
seq_len_max_trunc = batch_size * num_tsteps

flen = 3728
x = np.random.rand(flen + 1, num_features + 2)
n_label0 = int((flen + 1) * 0.2)
y = np.concatenate((np.zeros((n_label0, 1)),             # label 0
                    np.ones((flen - n_label0 + 1, 1))),  # label 1
                   axis=0)

ds_out = tf.data.Dataset.from_tensor_slices((x, y))
ds_ser = ds_out.map(
    lambda *x: tf.reshape(tf.py_function(
        lambda *v: tf.train.Example(features=tf.train.Features(feature={
            'feature': tf.train.Feature(
                float_list=tf.train.FloatList(value=v[0].numpy())),
            'label': tf.train.Feature(
                float_list=tf.train.FloatList(value=v[1].numpy())),
        })).SerializeToString(),
        x, [tf.string]), []),
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
writer = tf.data.experimental.TFRecordWriter('temp.tfrecord')
writer.write(ds_ser)
files = ['temp.tfrecord'] * n_files

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(num_tsteps, num_features),
                               batch_size=batch_size),
    tf.keras.layers.Masking(mask_value=0.0,
                            input_shape=(num_tsteps, num_features)),
    tf.keras.layers.LSTM(num_units,
                         batch_input_shape=(batch_size, num_tsteps, num_features),
                         return_sequences=True, stateful=False),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
    tf.keras.layers.Activation('sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

def prep_ds_file(file):
    ds = tf.data.TFRecordDataset(file)
    ds = ds.map(
        lambda x: tf.io.parse_single_example(x, {
            'feature': tf.io.FixedLenFeature([132], tf.float32),
            'label': tf.io.FixedLenFeature([1], tf.float32)}),
        num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds = ds.flat_map(lambda v: tf.data.Dataset.from_tensors(
        (v['feature'][2:], v['label'])))
    trunc = min(seq_len_max_trunc, (flen + 1) // num_tsteps * num_tsteps)
    ds = ds.take(trunc)
    c_pad = batch_size - (flen + 1 - num_tsteps) % num_tsteps
    if c_pad > 0:
        ds_pad = tf.data.Dataset.from_tensors(
            (tf.constant(0.0, shape=(num_features,)),
             tf.constant(0.0, shape=(1,))))
        ds_pad = ds_pad.repeat(c_pad)
        ds = ds.concatenate(ds_pad)  # pad to correct size
    ds = ds.window(size=num_tsteps, shift=None, stride=1, drop_remainder=True)
    ds = ds.flat_map(lambda x, y: tf.data.Dataset.zip(
        (x.batch(num_tsteps), y.batch(num_tsteps))))
    ds = ds.batch(batch_size, drop_remainder=True)
    return ds

fs = tf.data.Dataset.list_files(files, shuffle=True, seed=1)
fs_train_ds = fs.take(int(n_files * 0.7))
fs_val_ds = fs.skip(int(n_files * 0.7)).take(int(n_files * 0.1))

ds_train = prep_ds_file([f for f in fs_train_ds.take(1)][0])
for f in fs_train_ds.skip(1):
    ds_train = ds_train.concatenate(prep_ds_file(f))
ds_train = ds_train.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

ds_val = prep_ds_file([f for f in fs_val_ds.take(1)][0])
for f in fs_val_ds.skip(1):
    ds_val = ds_val.concatenate(prep_ds_file(f))
ds_val = ds_val.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

cbs = [tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                        restore_best_weights=True)]

model.fit(ds_train, epochs=num_epochs, verbose=1, shuffle=False,
          validation_data=ds_val, validation_steps=None, callbacks=cbs)
```

Other info / logs (warning logs before flags parsing go to stderr):

```
W1010 10:35:45.397222 140253093480256 deprecation.py:323] From /ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where

RecursionError                            Traceback (most recent call last)
<ipython-input> in <module>
    110 model.fit(ds_train, epochs=num_epochs, verbose=1, shuffle=False,
    111           validation_data=ds_val, validation_steps=None, callbacks=cbs)

/ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(...)
   1431         shuffle=shuffle,
   1432         initial_epoch=initial_epoch,
   1433         steps_name='steps_per_epoch')

/ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_generator.py in convert_to_generator_like(...)
    477     return dataset_ops.make_one_shot_iterator(data), steps_per_epoch

/ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py in __init__(self, dataset)
    564       dataset = dataset._apply_options()

/ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py in options(self)
    222     for input_dataset in self._inputs():
    223       input_options = input_dataset.options()

... last 2 frames repeated, from the frame below ...

/ws/miniconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py in options(self)
   1890     return self._dataset.options()

RecursionError: maximum recursion depth exceeded
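The traceback above bottoms out in a recursive `options()` walk over the chain of input datasets, so hundreds of `concatenate` calls build a linked structure deeper than Python's recursion limit. A plain-Python sketch of the failure mode and the iterative alternative (hypothetical node class, not tf.data internals):

```python
import sys

class Node:
    """Stand-in for a dataset wrapping one input dataset (e.g. a concatenate)."""
    def __init__(self, inner=None):
        self.inner = inner

def options_recursive(node):
    """Walk the chain recursively: one Python stack frame per link."""
    if node.inner is None:
        return 0
    return 1 + options_recursive(node.inner)

def options_iterative(node):
    """Walk the same chain with a loop: constant stack depth."""
    depth = 0
    while node.inner is not None:
        node = node.inner
        depth += 1
    return depth

# Build a chain longer than the recursion limit, as many concatenates would.
chain = Node()
for _ in range(sys.getrecursionlimit() + 100):
    chain = Node(chain)

try:
    options_recursive(chain)
except RecursionError:
    print("recursive walk blew the stack")

print(options_iterative(chain))  # the iterative walk handles any depth
```

This also suggests why the error disappears for small `n_files`: fewer concatenations means a shallower chain, and the recursive walk stays under the limit.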
tensorflow/tensorflow
TensorFlow 2.0 feature columns and input_layer
Bug
Documentation for tf.feature_column uses, for example, `input_layer`, which is not available in v2:

```python
columns = [embedded_column, state_3]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
```
tensorflow/tensorflow
AutoGraph returns an empty array when I use a for loop and TensorArray
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: macOS 10.14.6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.x

Hello, here is a snippet of code:

```python
a = np.array([1, 2, 3], np.int32)

@tf.function  # works without the decorator
def foo(a):
    b = tf.TensorArray(tf.string, 4)
    b.write(0, "test")
    for i in tf.range(3):
        if a[i] == 2:
            b.write(i, "fuzz")
        elif a[i] == 3:
            b.write(i, "buzz")
    return b.stack()

print(foo(a))
```

With the decorator, AutoGraph returns an empty array:

```
tf.Tensor([b'' b'' b'' b''], shape=(4,), dtype=string)
```

However, it should return:

```
tf.Tensor([b'test' b'fuzz' b'buzz' b''], shape=(4,), dtype=string)
```
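A likely explanation for this symptom (hedged: the snippet above drops the return value of `write`) is that `TensorArray.write` returns a new TensorArray that must be rebound, i.e. `b = b.write(i, ...)`; in graph mode, discarding the return value discards the write. The same functional-update pattern can be shown in plain Python with tuples:

```python
def write(arr, index, value):
    """Functional update: returns a new tuple and never mutates in place,
    the way TensorArray.write returns a new TensorArray object."""
    return arr[:index] + (value,) + arr[index + 1:]

b = ("", "", "", "")

# Dropping the return value loses the update -- the graph-mode symptom:
write(b, 0, "test")
assert b == ("", "", "", "")

# Rebinding keeps it; the TensorArray fix is the same: b = b.write(i, ...)
b = write(b, 0, "test")
b = write(b, 1, "fuzz")
b = write(b, 2, "buzz")
assert b == ("test", "fuzz", "buzz", "")
```

In eager mode the un-rebound writes happen to take effect, which is why the snippet "works without the decorator" but returns all-empty strings under `tf.function`.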
tensorflow/tensorflow
Incorrect results when subtracting 1 from exponential of variable
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code
- OS platform and distribution: macOS 10.14.6
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
- Python version: 3.7.3
- Bazel version / GCC version / CUDA/cuDNN version: n/a
- GPU model and memory: Intel Iris Plus Graphics 640, 1536 MB

Describe the current behavior: with the following setup:

```python
session = tf.Session()
variable = tf.Variable(4j)
exp = tf.exp(variable)
session.run(tf.global_variables_initializer())
```

the following code:

```python
print(session.run(exp - 1))
print(session.run(exp) - 1)
```

produces two different outputs, specifically:

```
(2.897081749192498-1.3258713481172573j)
(-1.6536436208636118-0.7568024953079282j)
```

Describe the expected behavior: that code should print identical values. The latter is correct; the former is not.

Code to reproduce the issue:

```python
session = tf.Session()
variable = tf.Variable(4j)
exp = tf.exp(variable)
session.run(tf.global_variables_initializer())
print(session.run(exp - 1))
print(session.run(exp) - 1)
```

Also see the Colab notebook here.

Other info / logs: I have no idea what's happening, but some observations:
- There needs to be a variable involved: if I use a constant instead of a variable, all works as expected.
- The exponent value is important: if I change the 4s to 1s, then all works as expected.
- The tf exponential is important: if instead I calculate the exponential with numpy and then set the variable to that exponential, subtracting 1 works correctly.
- Specifically subtracting 1 seems important: if instead I subtract 2, or 0.1, or 1j, all works as expected.
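For reference, the correct value of exp(4j) - 1 can be checked with Python's standard `cmath`, using Euler's formula exp(iy) = cos y + i sin y, so exp(4j) - 1 = (cos 4 - 1) + i sin 4:

```python
import cmath
import math

z = 4j
expected = cmath.exp(z) - 1                       # exp(4j) - 1
via_euler = complex(math.cos(4) - 1, math.sin(4)) # (cos 4 - 1) + i sin 4

assert abs(expected - via_euler) < 1e-12
print(expected)  # the correct value reported in the issue, about -1.6536 - 0.7568j
```

Both real and imaginary parts are negative here, which matches the second (correct) output in the report and makes the first output clearly wrong in sign and magnitude.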
tensorflow/tensorflow
TF-TRT: "batchSize > 0 && batchSize <= MAX_BATCH_SIZE" parameter check fails
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14.0
- Python version: 2.7
- Bazel version (if compiling from source): n/a
- GCC/Compiler version (if compiling from source): n/a
- CUDA/cuDNN version: 10.1
- GPU model and memory: T4

Describe the current behavior: I see the following error:

```
2019-10-09 22:38:11.629630: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for xxx, input shapes: [[0,28,28,128]]
2019-10-09 22:38:11.629841: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger Parameter check failed at: builder/Builder.cpp::setMaxBatchSize::113, condition: batchSize > 0 && batchSize <= MAX_BATCH_SIZE
```

Describe the expected behavior: Ideally I would not like to see the error at all. As currently seen, more information along the following lines would help a lot in determining the root cause:
1. More detailed logs on what the offending layers and tensors are.
2. What exactly the offending value is (in this case, batchSize).
3. Where in the code this occurs. I do not see any file called "Builder" in the TF 1.14 sources; if the error happens elsewhere in third-party code (in this case, maybe in NVIDIA's closed-source TensorRT), make that explicit.
4. In this case, where is it getting 0 for the batch size?

Code to reproduce the issue: it is not possible to provide code for every issue; good logs are a much more efficient way to root-cause and resolve issues.
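The richer diagnostic asked for in points 1 to 4 can be sketched in a few lines. This is an illustration only: the function, parameter, and node names below are hypothetical, not TensorRT's API; it merely shows the shape of an error message that surfaces the offending value, the limit, and the node involved.

```python
def check_batch_size(batch_size, max_batch_size, input_shape, node=""):
    # Hypothetical check mirroring the logged TensorRT condition
    # "batchSize > 0 && batchSize <= MAX_BATCH_SIZE", extended with the
    # context the report asks for (offending value, limit, node, shape).
    if not (0 < batch_size <= max_batch_size):
        raise ValueError(
            f"setMaxBatchSize check failed for node '{node}': "
            f"batch_size={batch_size} must be in (0, {max_batch_size}] "
            f"(input shape {input_shape})")

try:
    check_batch_size(0, 32, (28, 28, 128), node="xxx")
except ValueError as err:
    message = str(err)
```

With such a message, question 4 above (where does the 0 come from?) answers itself from the log alone.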
tensorflow/tensorflow
tf.compat.v1.keras.backend.get_session gives erroneous deprecation warning
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.0.0
- Python version: 3.6.8
- Bazel version (if compiling from source): n/a
- GCC/Compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: accessing `tf.compat.v1.keras.backend.get_session` gives a deprecation warning telling you to use `tf.compat.v1.keras.backend.get_session`.

Describe the expected behavior: accessing `tf.compat.v1.keras.backend.get_session` should not give a deprecation warning, since we're explicitly accessing the `compat.v1` namespace. Or it should direct the user to the correct function, if `tf.compat.v1.keras.backend.get_session` is not the right access point, rather than telling the user to use the namespace they're already using.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.compat.v1.keras.backend.get_session
```

This results in the warning:

```
The name tf.keras.backend.get_session is deprecated. Please use tf.compat.v1.keras.backend.get_session instead.
```

Note that we're accessing `tf.compat.v1.keras.backend.get_session`, not `tf.keras.backend.get_session`.
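A plausible mechanism for a warning like this can be shown with a small pure-Python stand-in. The classes below are invented for illustration (TensorFlow's deprecation wrapping differs in detail): if the compat alias points at the warning wrapper rather than the raw module, the warning fires through both access paths.

```python
import warnings

class _Backend:
    # Stands in for keras.backend (illustration only).
    def get_session(self):
        return "session"

class DeprecatedAttrs:
    # Invented module wrapper that warns on attribute access. The bug
    # shape: if the compat.v1 alias re-exports this wrapper instead of
    # the raw module, the warning fires no matter which namespace the
    # user goes through.
    def __init__(self, target, deprecated):
        self._target = target
        self._deprecated = deprecated

    def __getattr__(self, name):
        if name in self._deprecated:
            warnings.warn("The name keras.backend.%s is deprecated." % name,
                          DeprecationWarning)
        return getattr(self._target, name)

wrapped = DeprecatedAttrs(_Backend(), {"get_session"})
compat_v1_backend = wrapped   # mis-wired alias: points at the wrapper

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compat_v1_backend.get_session()   # warns even via the "compat" path
```

The fix in this toy model is for the compat alias to bind the raw `_Backend()` object, so only the top-level namespace carries the warning.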
tensorflow/tensorflow
_list_all_concrete_functions_for_serialization doubles the trace count when called
Bug
I'm seeing the new warning "Triggered tf.function retracing. Tracing is expensive and the excessive number of tracings ..." and was digging into it. When I print everything that goes through the offending function, I only see two tensors by type/shape. This led me to notice the following weird event that doubles the trace count. Is there some reason for this? If this is normal, I will try a minimal reproduction of the warning message, which seems to not make sense.

```python
import numpy as np
import tensorflow as tf

@tf.function
def f(a):
    tf.print(a.shape)
    return a

f(tf.convert_to_tensor(np.random.randn(4, 3).astype(np.float32)))
print(f._get_tracing_count())
f(tf.convert_to_tensor(np.random.randn(4, 3).astype(np.float32)))
print(f._get_tracing_count())
f(tf.convert_to_tensor(np.random.randn(4, 3).astype(np.float32)))
print(f._get_tracing_count())
f(tf.convert_to_tensor(np.random.randn(4, 2).astype(np.float32)))
print(f._get_tracing_count())
f(tf.convert_to_tensor(np.random.randn(4, 1).astype(np.float32)))
print(f._get_tracing_count())
print(f._list_all_concrete_functions_for_serialization())
print(f._get_tracing_count())
```

results in:

```
TensorShape([4, 3])
1
TensorShape([4, 3])
1
TensorShape([4, 3])
1
TensorShape([4, 2])
2
TensorShape([4, 1])
3
6
```

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.0.0
- Python version: 3.7
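A toy model of a signature-keyed trace cache makes the doubled count above reproducible. This is an assumption about the mechanism (listing concrete functions for serialization re-traces each cached signature with a relaxed, batch-agnostic shape), not TensorFlow's actual code:

```python
class ToyFunction:
    # Toy stand-in for tf.function's signature-keyed trace cache
    # (illustration only).
    def __init__(self, fn):
        self._fn = fn
        self._cache = {}
        self.trace_count = 0

    def __call__(self, shape):
        if shape not in self._cache:       # cache miss -> "retrace"
            self.trace_count += 1
            self._cache[shape] = self._fn
        return self._cache[shape]

    def list_all_concrete_functions_for_serialization(self):
        for shape in list(self._cache):
            relaxed = (None,) + shape[1:]  # relax the leading dimension
            if relaxed not in self._cache:
                self.trace_count += 1
                self._cache[relaxed] = self._fn
        return list(self._cache)

f = ToyFunction(lambda x: x)
f((4, 3)); f((4, 3)); f((4, 3))   # repeated signature: one trace
f((4, 2)); f((4, 1))
assert f.trace_count == 3          # mirrors the report before listing
f.list_all_concrete_functions_for_serialization()
assert f.trace_count == 6          # doubled, as observed
```

In this model the doubling is benign bookkeeping: each concrete signature gains a shape-relaxed twin for serialization, inflating the count without extra per-call work.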
tensorflow/tensorflow
Efficiency of model.fit_generator is greatly reduced in 2.0.0
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.0.0
- Python version: 3.5
- CUDA/cuDNN version: 10.0 / 7.6
- GPU model and memory: Titan Xp, 12 GB

Describe the current behavior: My code takes 21 s/epoch in tf-2.0.0alpha while it takes 76 s/epoch in tf-2.0.0. I found that the problem is model.fit_generator: when I replace it with model.fit, the efficiency of the code is exactly the same in both versions. But I need to use ImageDataGenerator.

Describe the expected behavior: the efficiency of the code should be the same.

Code to reproduce the issue:

```python
(train_images, train_labels), (test_images, test_labels) = load_cifar(
    '/home/user/Documents/dataset/cifar-10')
datagen = ImageDataGenerator(horizontal_flip=True,
                             width_shift_range=0.125,
                             height_shift_range=0.125,
                             fill_mode='constant', cval=0.)
datagen.fit(train_images)
datagenflow = datagen.flow(train_images, train_labels, batch_size=batch_size)
model.fit_generator(datagenflow,
                    steps_per_epoch=iterations,
                    epochs=epoch_num,
                    callbacks=[change_lr],
                    validation_data=(test_images, test_labels))
```
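fit_generator feeds batches from a Python generator through a background worker queue, so a regression there is plausibly in that plumbing rather than in the model. A minimal pure-Python sketch of such producer/consumer prefetching (an illustration of the mechanism, not Keras' actual OrderedEnqueuer):

```python
import queue
import threading

def prefetch(gen, buffer_size=4):
    # A worker thread pulls items from the Python generator into a
    # bounded queue so the consumer does not serialize on generator
    # overhead. A sentinel marks exhaustion.
    q = queue.Queue(maxsize=buffer_size)
    _sentinel = object()

    def worker():
        for item in gen:
            q.put(item)
        q.put(_sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _sentinel:
            return
        yield item

batches = list(prefetch(iter(range(10))))
```

If the queue depth, worker count, or hand-off cost changes between versions, epoch time changes even though the model code is identical, which matches the symptom reported above.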
tensorflow/tensorflow
TF 1.12: Keras Input shape does not match the correct data feed
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device: n/a
- TensorFlow installed from (source or binary):
- TensorFlow version: 1.12
- Python version: 3.6.9
- CUDA/cuDNN version: 9.0.176
- GPU model and memory:

Describe the current behavior:

```python
import tensorflow as tf
from tensorflow.python import keras
import numpy as np

r_input = keras.layers.Input(shape=(), dtype=tf.float32, name='input_tensor')
r_output = keras.layers.Lambda(lambda x: x + 1)(r_input)
final_model = keras.models.Model(r_input, r_output)
c_input = np.asarray([[1]])
ret = final_model.predict(c_input)
print(ret)
```

raises:

```
ValueError: Error when checking input: expected input_tensor to have 1 dimensions, but got array with shape (1, 1)
```

In TF 2.0 the code listed above runs correctly, so I think it is a bug in TF 1.12.

How to fix in TF 1.12: the line

```python
r_input = keras.layers.Input(shape=(), dtype=tf.float32, name='input_tensor')
```

can be replaced with

```python
r_input = keras.layers.Input(shape=(1,), dtype=tf.float32, name='input_tensor')
```

to fix this problem.
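The error comes from Keras' input rank check. A simplified pure-Python sketch of that check (an approximation of the `_standardize_input_data` logic, not the actual implementation) reproduces the message:

```python
def check_input_rank(name, expected_shape, data_shape):
    # The model's input spec adds an implicit batch dimension, so
    # Input(shape=()) expects rank-1 data of shape (N,); a (1, 1) array
    # (rank 2) is rejected with the message from the report.
    expected_rank = len(expected_shape) + 1  # + implicit batch dimension
    if len(data_shape) != expected_rank:
        raise ValueError(
            "Error when checking input: expected {} to have {} dimensions, "
            "but got array with shape {}".format(name, expected_rank, data_shape))

check_input_rank("input_tensor", (), (1,))        # rank 1 matches: no error
raised = False
try:
    check_input_rank("input_tensor", (), (1, 1))  # reproduces the report
except ValueError as err:
    raised = "to have 1 dimensions" in str(err)
```

Under this model, both fixes work: either reshape the data to `(1,)` or declare `shape=(1,)` so the expected rank becomes 2.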
tensorflow/tensorflow
TF 2.0: tf.signal.fft2d causes an error when used with tf.function
Bug
tf version: 2.0.0-beta1

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.layers import *
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.models import Model

def build_fftnet(img_shape=(224, 512, 1)):
    input_f = Input(shape=img_shape)
    in_ = Lambda(lambda x: (K.mean(x, axis=3) + 1) * 127.5)(input_f)
    # float to complex
    in_ = Lambda(lambda x: tf.complex(tf.math.real(x), tf.zeros_like(x)))(in_)
    fft = Lambda(lambda x: tf.signal.fft2d(x))(in_)
    fftnet = Model(input_f, fft, name='fft')
    return fftnet

img_shape = (224, 512, 1)
fftnet = build_fftnet(img_shape)

@tf.function
def fft(blur):
    fft_true = fftnet(blur, training=False)
    return fft_true

blur = np.zeros((1,) + img_shape, np.float32)
fft(blur)
```

tf.function gives the following error when using tf.signal.fft2d:

```
Traceback (most recent call last):
  File "...", line 21, in fft
  File "C:\Users\user\Anaconda3\envs\tf20\lib\site-packages\tensorflow\python\eager\def_function.py", line 434, in __call__
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File "C:\Users\user\Anaconda3\envs\tf20\lib\site-packages\tensorflow\python\eager\function.py", line 589, in _filtered_call
    (t for t in nest.flatten((args, kwargs), expand_composites=True)
  File "C:\Users\user\Anaconda3\envs\tf20\lib\site-packages\tensorflow\python\eager\function.py", line 671, in __call__
    flat_outputs = self._inference_function.call(ctx, args)
  File "C:\Users\user\Anaconda3\envs\tf20\lib\site-packages\tensorflow\python\eager\function.py", line 445, in call
    ctx=ctx)
  File "C:\Users\user\Anaconda3\envs\tf20\lib\site-packages\tensorflow\python\eager\execute.py", line 70, in quick_execute
    raise core._SymbolicException(
_SymbolicException
```
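The Lambda chain in the model above exists because the FFT is defined over complex values. A tiny pure-Python 2-D DFT (a stand-in for `tf.signal.fft2d`, for illustration only and far slower than an FFT) shows the same real-to-complex lift:

```python
import cmath

def dft2d(grid):
    # Minimal 2-D discrete Fourier transform. Like the model's Lambda
    # layers, real input is first lifted to complex via complex(v, 0.0),
    # because the transform is defined over complex values.
    cgrid = [[complex(v, 0.0) for v in row] for row in grid]
    m, n = len(cgrid), len(cgrid[0])
    return [[sum(cgrid[r][c] * cmath.exp(-2j * cmath.pi * (u * r / m + v * c / n))
                 for r in range(m) for c in range(n))
             for v in range(n)]
            for u in range(m)]

# The DFT of a constant 2x2 grid concentrates all energy in the DC bin.
out = dft2d([[1.0, 1.0], [1.0, 1.0]])
```

This only illustrates the numerics; the reported crash is about how the complex-valued op interacts with `tf.function` tracing, not about the transform itself.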
tensorflow/tensorflow
tflite_runtime standalone always results in ModuleNotFoundError: No module named '_interpreter_wrapper'
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): openSUSE Leap 15.1
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version: tflite_runtime 1.14.0
- Python version: 3.6.5
- Installed using virtualenv? pip? conda?: pip3, following the linked instructions
- Bazel version (if compiling from source): n/a
- GCC/Compiler version (if compiling from source): n/a
- CUDA/cuDNN version: 10.0
- GPU model and memory: NVIDIA 2080 Ti, 11 GB

Describe the problem: When trying to install just the TensorFlow Lite interpreter, I am able to install everything with no errors and I can even `import tflite_runtime`. But when I put in `from tflite_runtime.interpreter import Interpreter`, I always run into a ModuleNotFoundError.

I also installed Miniconda, installed Python 3.7, pip3-installed the correct tflite_runtime for Python 3.7, and ran into the exact same ModuleNotFoundError.
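One way to narrow down this class of failure ("the package imports but its submodule does not") is to inspect module specs directly. The helper below is invented for illustration and is not part of tflite_runtime; it distinguishes a missing package from an installed-but-incomplete one (e.g. a wheel whose compiled interpreter wrapper was built for a different Python version):

```python
import importlib.util

def diagnose(pkg, submodule):
    # find_spec resolves without executing module code, so it can tell
    # "package missing" apart from "package present but lacking a
    # submodule" even when importing would crash.
    pkg_spec = importlib.util.find_spec(pkg)
    if pkg_spec is None:
        return "package not installed"
    if importlib.util.find_spec(submodule) is None:
        return "package at %s lacks %s" % (pkg_spec.origin, submodule)
    return "ok"
```

Running `diagnose("tflite_runtime", "tflite_runtime.interpreter")` on an affected machine would report which of the two cases applies and where the broken package lives on disk.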
tensorflow/tensorflow
TPU has XLA compilation issue on TF 1.15rc3
Bug
System information:
- Python: 3.7.3
- TensorFlow version: 1.15.0rc3
- Full environment: tf_env.txt (attached)

Describe the current behavior: The error message I get is the following:

```
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/initializers.py:119: calling RandomUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating: If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From issue.py:19: The name tf.train.MomentumOptimizer is deprecated. Please use tf.compat.v1.train.MomentumOptimizer instead.
WARNING:tensorflow:From issue.py:30: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2019-10-08 11:14:04.056451: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2019-10-08 11:14:04.056480: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
2019-10-08 11:14:04.056497: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (cs-6000-devshell-vm-8cf51c20-59fd-4dc8-ad36-465b49377d09): /proc/driver/nvidia/version does not exist
2019-10-08 11:14:04.056770: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-08 11:14:04.069196: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
2019-10-08 11:14:04.070092: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560554da1f40 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2019-10-08 11:14:04.070105: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From issue.py:31: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
2019-10-08 11:14:04.154903: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at xla_ops.cc:361 : Invalid argument: Detected unsupported operations when trying to compile graph cluster_3267298482081017018[] on XLA_CPU_JIT: Unique (No registered 'Unique' OpKernel for XLA_CPU_JIT devices compatible with node {{node SGD_momentum/update_embedding/embeddings/Unique}}. Registered:
  device='CPU'; T in [DT_BOOL]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_BOOL]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_STRING]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_STRING]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_DOUBLE]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_DOUBLE]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_FLOAT]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_FLOAT]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_BFLOAT16]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_BFLOAT16]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_HALF]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_HALF]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_INT8]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_INT8]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_UINT8]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_UINT8]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_INT16]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_INT16]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_UINT16]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_UINT16]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_INT32]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_INT32]; out_idx in [DT_INT32]
  device='CPU'; T in [DT_INT64]; out_idx in [DT_INT64]
  device='CPU'; T in [DT_INT64]; out_idx in [DT_INT32]
  device='GPU'; T in [DT_INT64]; out_idx in [DT_INT64]
  device='GPU'; T in [DT_INT64]; out_idx in [DT_INT32]
  device='GPU'; T in [DT_INT32]; out_idx in [DT_INT64]
  device='GPU'; T in [DT_INT32]; out_idx in [DT_INT32]
){{node SGD_momentum/update_embedding/embeddings/Unique}}
This error might be occurring with the use of xla.compile. If it is not necessary that every Op be compiled with XLA, an alternative is to use auto_jit with OptimizerOptions.global_jit_level = ON_2 or the environment variable TF_XLA_FLAGS=tf_xla_auto_jit=2, which will attempt to use XLA to compile as much of the graph as the compiler is able to.
```

The Python traceback then re-raises the same "Detected unsupported operations" message (with the same registered-kernel list as above) as `tensorflow.python.framework.errors_impl.InvalidArgumentError`, first from `session.py` (`_do_call` / `_run_fn` / `_call_tf_sessionrun`) and again from the `sess.run(loss)` call at `issue.py` line 32.

Describe the expected behavior: If I use `tf.train.GradientDescentOptimizer` instead, it works. I expect using the momentum optimizer to work as well.

Code to reproduce the issue:

```python
import os
import tensorflow as tf
from tensorflow.python.compiler.xla import xla
from tensorflow.keras.layers import Embedding

learning_rate = 0.1
vocab_size = 100
hidden_size = 200
max_seq_len = 150

def my_model(features, labels):
    emb = Embedding(vocab_size, hidden_size)(features)
    logits = tf.keras.layers.Dense(units=vocab_size)(emb)
    crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    loss = tf.reduce_mean(crossent)
    optimizer = tf.train.MomentumOptimizer(
        learning_rate=learning_rate, momentum=0.9, name='SGD_momentum')
    train_op = optimizer.minimize(loss)
    return loss

src = tf.constant(0, dtype=tf.int32, shape=[max_seq_len])
tgt = tf.constant(1, dtype=tf.int32, shape=[max_seq_len])
loss = xla.compile(my_model, (src, tgt))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(loss)
sess.close()
```

Other info / logs: The issue occurs when using the Keras Embedding layer alongside an optimizer that is not SGD. It can be fixed by modifying the Keras implementation, changing `tf.gather(weights, inputs)` to `tf.gather(tf.multiply(weights, 1), inputs)`. The issue is that the gradient of the weights is an IndexedSlices, which causes any non-SGD optimizer to use the Unique operation, which is unsupported by XLA. However, if you do `tf.multiply(weights, 1)`, it makes that expression the one that receives the IndexedSlices, and the weights' own gradient doesn't become an IndexedSlices but rather a dense tensor, so the optimizer implementation in TensorFlow doesn't call Unique.
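The workaround described above can be illustrated without TensorFlow. The sketch below is pure Python and only models the idea (it is not TF's gradient machinery): an embedding gradient arrives as sparse (indices, row-updates) pairs, and scattering it into a dense matrix removes the need to merge duplicate indices with a Unique-style op.

```python
def densify(indices, updates, num_rows):
    # Scatter-add a sparse IndexedSlices-style gradient into a dense
    # matrix. Duplicate indices simply accumulate, so no deduplication
    # (Unique) pass is needed before a momentum-style optimizer applies it.
    cols = len(updates[0])
    dense = [[0.0] * cols for _ in range(num_rows)]
    for idx, row in zip(indices, updates):
        for c, v in enumerate(row):
            dense[idx][c] += v
    return dense

# Row 2 is looked up twice, so its gradient rows accumulate.
grad = densify([2, 0, 2], [[1.0] * 4] * 3, num_rows=5)
```

This mirrors the effect of the `tf.multiply(weights, 1)` trick: the sparse update attaches to the multiply's output, and the variable itself sees a dense gradient.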
tensorflow/tensorflow
Masked LSTM: OP_REQUIRES failed at cudnn_rnn_ops.cc:1498 : Unknown: CUDNN_STATUS_BAD_PARAM
Bug
System information:
- Have I written custom code: yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.2 LTS
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.3
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.2.24-1
- GPU model and memory: Quadro RTX 6000 (major: 7, minor: 5, memoryClockRate(GHz): 1.77)

Describe the problem: There seems to be an issue with the cuDNN LSTM implementation when using a tf.keras.layers.Masking layer.

```python
batch_size = 256
num_tsteps = 144
num_features = 130
num_units = 88

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(num_tsteps, num_features),
                               batch_size=batch_size),
    tf.keras.layers.Masking(mask_value=0.0,
                            input_shape=(num_tsteps, num_features)),
    tf.keras.layers.LSTM(num_units,
                         batch_input_shape=(batch_size, num_tsteps, num_features),
                         return_sequences=True, stateful=False),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
    tf.keras.layers.Activation('sigmoid'),
])
```

Similar to #33069, I receive this error during training, and I have strictly right-padded data (I do the trimming and right-padding manually). However, in contrast to that issue, I confirmed that I do not have any inputs containing only zeros, via the following snippet:

```python
res = []
for i, e in enumerate(ds_train):
    f, l = [x.numpy() for x in e]
    for j in range(f.shape[0]):
        if not (f[j] == 0.0).all():
            res.append(1)
        else:
            res.append(0)
    fin = [res[0]]
    for r in res[1:]:
        if r != fin[-1]:
            fin.append(r)
    print('{} {}'.format(i, fin))
```

Results:

```
0 [1, 0]
1 [1, 0]
2 [1, 0]
3 [1, 0]
4 [1]
5 [1, 0]
```

If I remove the Masking layer, the error does not occur; I confirmed this by running a complete epoch (2324 batches). However, the training is probably pretty pointless when including the padded data. Are there any other pitfalls that I am missing that could cause this issue?

Source code / logs (Python output):

```
Epoch 1/1000
WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available.
---------------------------------------------------------------------------
CancelledError                            Traceback (most recent call last)
<ipython-input> in <module>
----> self.model.fit(ds_train, epochs=num_epochs, verbose=verbose, shuffle=False,
                     validation_data=ds_val, validation_steps=None, callbacks=cbs)
  [... standard Keras frames: training.py fit -> training_v2.py fit/run_one_epoch
   -> def_function.py __call__ -> function.py _filtered_call/_call_flat
   -> execute.py quick_execute ...]
CancelledError: [_Derived_] RecvAsync is cancelled.
	 [[{{node metrics/accuracy/broadcast_weights/assert_broadcastable/AssertGuard/else/_36/Assert/data_2/_62}}]]
	 [[loss/activation_loss/weighted_loss/broadcast_weights/assert_broadcastable/is_valid_shape/else/_1/has_valid_nonscalar_shape/then/_106/has_invalid_dims/concat/_28]]
[Op:__inference_distributed_function_172102]

Function call stack:
distributed_function
```

Command line log:

```
2019-10-08 14:38:27.367875: W tensorflow/core/grappler/optimizers/implementation_selector.cc:310] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_cudnn_lstm_with_fallback_169668_171093' and '__inference___backward_cudnn_lstm_with_fallback_169668_171093_specialized_for_StatefulPartitionedCall_at___inference_distributed_function_172102' both implement 'lstm_dce676f4-acdd-4bb5-88d9-e8dd57573aba' but their signatures do not match.
2019-10-08 14:38:27.536666: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-10-08 14:38:39.982582: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-10-08 14:38:41.215567: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cudnn_rnn_ops.cc:1498 : Unknown: CUDNN_STATUS_BAD_PARAM
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1424): 'cudnnSetRNNDataDescriptor( data_desc.get(), data_type, layout, max_seq_length, batch_size, data_size, seq_lengths_array, (void*)&padding_fill)'
2019-10-08 14:38:41.215616: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Unknown: CUDNN_STATUS_BAD_PARAM
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1424): 'cudnnSetRNNDataDescriptor( ... )'
	 [[{{node cond_64/then/_0/CudnnRNNV3}}]]
2019-10-08 14:38:41.215638: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Cancelled: [_Derived_] RecvAsync is cancelled.
	 [[{{node metrics/accuracy/broadcast_weights/assert_broadcastable/AssertGuard/else/_36/Assert/data_2/_62}}]]
	 [[loss/activation_loss/weighted_loss/broadcast_weights/assert_broadcastable/is_valid_shape/else/_1/has_valid_nonscalar_shape/then/_106/has_invalid_dims/concat/_28]]
2019-10-08 14:38:41.215693: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Cancelled: [_Derived_] RecvAsync is cancelled.
	 [[{{node metrics/accuracy/broadcast_weights/assert_broadcastable/AssertGuard/else/_36/Assert/data_2/_62}}]]
```
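The cuDNN variable-length layout assumes every sequence is strictly right-padded and contains at least one real timestep (an assumption based on this report and #33069). A pure-Python check in the spirit of the validation snippet above, written for a sequence of per-timestep scalars:

```python
def is_right_padded(seq, pad=0.0):
    # A sequence is acceptable if all real timesteps precede all padding
    # and at least one timestep is real; padding in the middle, or an
    # all-padding sequence, violates the layout cuDNN expects.
    n = len(seq)
    first_pad = next((i for i, v in enumerate(seq) if v == pad), n)
    return first_pad > 0 and all(v == pad for v in seq[first_pad:])
```

Running such a check per sample (and per feature row, for vector-valued timesteps) before training rules out both failure modes the two reports discuss.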
tensorflow/tensorflow
Regularisation losses in nested layers
Bug
System information:
- Have I written custom code: yes
- OS Platform and Distribution: Manjaro Testing, x86_64
- TensorFlow installed from (source or binary): PyPI binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.4

Describe the current behavior: TensorFlow returns duplicated regularisation losses for layers which hold references to regularised variables built in other layers.

Describe the expected behavior: return a single regularisation loss per variable.

Code to reproduce the issue:

```python
import tensorflow as tf

class A(tf.keras.layers.Layer):
    def __init__(self, layer):
        super(A, self).__init__()
        self.layer = layer

    def call(self, inputs):
        return self.layer(inputs)

class B(tf.keras.layers.Layer):
    def __init__(self):
        super(B, self).__init__()
        self.obj = tf.keras.layers.Dense(
            13, kernel_regularizer=tf.keras.regularizers.l1(5))
        self.layerb = A(self.obj)

    def call(self, inputs):
        return self.layerb(inputs)

model = B()
output = model(tf.ones((5, 10)))
print(len(model.losses))  # 2, but only one regularised variable exists
```

Once we rename self.obj to obj (i.e. not save it as a class member), we end up with a single regularisation loss.

Other info / logs: more info and logs in the issue thread, plus guillaumekln's response (issuecomment-539530375).
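The double-counting can be reproduced with a toy tracker. The classes below are invented for illustration (not Keras code): each layer collects losses from every tracked sub-layer, so a sub-layer reachable through two attribute paths is counted twice unless collection deduplicates by object identity.

```python
class ToyLayer:
    # Minimal attribute-tracking layer: losses are gathered recursively
    # from every tracked sub-layer, with no deduplication.
    def __init__(self, own=(), tracked=()):
        self._own = list(own)
        self._tracked = list(tracked)

    @property
    def losses(self):
        out = list(self._own)
        for sub in self._tracked:
            out.extend(sub.losses)
        return out

dense = ToyLayer(own=["l1_penalty"])
wrapper = ToyLayer(tracked=[dense])
outer = ToyLayer(tracked=[dense, wrapper])  # dense reachable via two paths

def dedup_losses(layer):
    # Fix sketch: walk the layer graph once per object identity.
    seen, out = set(), []

    def visit(l):
        if id(l) in seen:
            return
        seen.add(id(l))
        out.extend(l._own)
        for sub in l._tracked:
            visit(sub)

    visit(layer)
    return out
```

In this model, `outer.losses` reports two copies of the single penalty, matching the report, while identity-based collection reports one.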
tensorflow/tensorflow
TF 2.0 API docs: tf.nn.softmax
Bug
url s with the issue please provide a link to the documentation entry for example description of issue what need change clear description yes correct link yes parameter define no per the code the first argument logit can be of any type that can be pass to convert to tensor not just a tensor therefore the documentation can be modify to include tensor object numpy array python list and python scalar return define yes raise list and define yes submit a pull request yes
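For illustration, here is a softmax that accepts the input types the docs could enumerate. This is a plain-Python sketch standing in for `convert_to_tensor`'s coercion behavior, not the TF implementation:

```python
import math

def softmax(logits):
    # Accept a Python list/tuple or a bare scalar, mirroring the way
    # convert_to_tensor coerces its input (illustrative only).
    if not isinstance(logits, (list, tuple)):
        logits = [logits]
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])   # a Python list is fine
print(sum(probs))                  # 1.0 (up to rounding)
print(softmax(2.0))                # [1.0]: a scalar is fine too
```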
tensorflow/tensorflow
tf.data.Dataset.interleave does not seem to respect the num_parallel_calls argument
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux 5.3.1-arch1-1-ARCH GNU/Linux
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.4
- CUDA/cuDNN version: N/A
- GPU model and memory: none

Describe the current behavior: while using tf.data.Dataset.interleave on a dataset of 9 elements with the following arguments: cycle_length=2, block_length=1, num_parallel_calls=2, six threads seem to be launched concurrently.

Describe the expected behavior: I expect the functions to be called concurrently by pairs: (call 1, call 2), then (call 3, call 4), then (call 5, call 6), then (call 7, call 8), then (call 9).

Code to reproduce the issue:

```python
import time
import timeit
import tensorflow as tf
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.ops.readers import _create_or_validate_filenames_dataset, _create_dataset_reader
from tensorflow.python.framework import tensor_spec

# Clone of TFRecordDataset to add some customization, see creator_fn
class MyTFRecordDataset(dataset_ops.DatasetV2):
    def _inputs(self):
        return self._impl._inputs()  # pylint: disable=protected-access

    @property
    def element_spec(self):
        return tensor_spec.TensorSpec([], tf.dtypes.string)

    def __init__(self, filenames, compression_type=None, buffer_size=None,
                 num_parallel_reads=None):
        filenames = _create_or_validate_filenames_dataset(filenames)
        self._filenames = filenames
        self._compression_type = compression_type
        self._buffer_size = buffer_size
        self._num_parallel_reads = num_parallel_reads

        def creator_fn(filename):
            tf.print("creator_fn", filename.numpy())
            start = time.perf_counter()
            time.sleep(0.5)
            result = tf.constant([str(x) for x in range(7)])
            tf.print("reading time", time.perf_counter() - start)
            return result

        self._impl = _create_dataset_reader(
            lambda x: tf.data.Dataset.from_tensor_slices(
                tf.py_function(creator_fn, [x], tf.string)),
            filenames, num_parallel_reads)
        variant_tensor = self._impl._variant_tensor  # pylint: disable=protected-access
        super(MyTFRecordDataset, self).__init__(variant_tensor)


def dataset_interleave():
    ds = tf.data.Dataset.from_tensor_slices([str(x) for x in range(9)])
    ds = ds.interleave(MyTFRecordDataset, cycle_length=2, block_length=1,
                       num_parallel_calls=2)
    return ds


def main():
    ds = dataset_interleave()

    def iterate():
        tf.print("iterate")
        for i, s in enumerate(ds):
            tf.print("iteration", i, s.shape)
            time.sleep(0.0)

    tf.print(timeit.timeit(iterate, number=1))


if __name__ == "__main__":
    main()
```

Other info / logs: with num_parallel_calls set to None, I obtain the expected behavior:

```
iterate
creator_fn b'0'
reading time 0.5008154709994415
iteration 0 TensorShape([])
creator_fn b'1'
reading time 0.5007261740011018
iteration 1 TensorShape([])
<- two files loaded, iterating over them
iteration 2 TensorShape([])
iteration 3 TensorShape([])
iteration 4 TensorShape([])
iteration 5 TensorShape([])
iteration 6 TensorShape([])
iteration 7 TensorShape([])
iteration 8 TensorShape([])
iteration 9 TensorShape([])
iteration 10 TensorShape([])
iteration 11 TensorShape([])
iteration 12 TensorShape([])
iteration 13 TensorShape([])
<- end of sequence, opening new files
creator_fn b'2'
reading time 0.500802348000434
iteration 14 TensorShape([])
creator_fn b'3'
reading time 0.5007076660003804
iteration 15 TensorShape([])
iteration 16 TensorShape([])
iteration 17 TensorShape([])
iteration 18 TensorShape([])
iteration 19 TensorShape([])
iteration 20 TensorShape([])
iteration 21 TensorShape([])
iteration 22 TensorShape([])
iteration 23 TensorShape([])
iteration 24 TensorShape([])
iteration 25 TensorShape([])
iteration 26 TensorShape([])
iteration 27 TensorShape([])
<- and so on
creator_fn b'4'
reading time 0.5007231710005726
iteration 28 TensorShape([])
creator_fn b'5'
reading time 0.5007872259993746
4.591206809000141
```

The total time is ~4.5 seconds, consistent with the 0.5 s sleep executed 9 times. With num_parallel_calls set to 2, I expect to divide my execution time by 2 (each pair of files being read in parallel):

```
iterate
creator_fn b'0'
creator_fn b'1'
creator_fn b'3'
creator_fn b'5'
creator_fn b'4'
creator_fn b'2'
reading time 0.5008060520012805
reading time 0.5010762649999378
reading time 0.501729752000756
reading time 0.5024584279999544
reading time 0.503098459001194
reading time 0.5040528110002924
<- 6 files opened in parallel
iteration 0 TensorShape([])
...
iteration 14 TensorShape([])
creator_fn b'7'
creator_fn b'6'
iteration 15 TensorShape([])
...
iteration 28 TensorShape([])
creator_fn b'8'
iteration 29 TensorShape([])
...
iteration 41 TensorShape([])
reading time 0.5002393860013399
reading time 0.5006544690004375
iteration 42 TensorShape([])
iteration 43 TensorShape([])
iteration 44 TensorShape([])
iteration 45 TensorShape([])
iteration 46 TensorShape([])
reading time 0.5014183660005074
iteration 47 TensorShape([])
...
iteration 62 TensorShape([])
1.047351510000226
```

Instead the execution time is divided by ~4. By the way, what is the meaning of num_parallel_calls set to 1? Does 1 parallel call mean 2 threads, or should it be sequential?
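Setting parallelism aside, interleave's deterministic output order with cycle_length=2 and block_length=1 can be modeled sequentially, which makes the expected pairing above concrete. This is a simplified model of the documented ordering semantics and ignores num_parallel_calls entirely (which, per the docs, affects how many inputs are processed concurrently, not the output order):

```python
from itertools import islice

def interleave(datasets, cycle_length, block_length=1):
    # Sequential model of tf.data interleave ordering: keep cycle_length
    # iterators active, take block_length elements from each in turn,
    # and replace exhausted iterators with pending ones.
    iterators = [iter(d) for d in datasets]
    active, pending = iterators[:cycle_length], iterators[cycle_length:]
    out = []
    while active:
        for it in list(active):
            block = list(islice(it, block_length))
            if block:
                out.extend(block)
            else:
                active.remove(it)
                if pending:
                    active.append(pending.pop(0))
    return out

print(interleave([[0, 0], [1, 1], [2, 2]], cycle_length=2))  # [0, 1, 0, 1, 2, 2]
```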
tensorflow/tensorflow
Predictions on batches vary with batch size
Bug

When I predict on batches (e.g. with `predict_on_batch`) I get different predictions depending on the batch size I am using; the differences sometimes appear as early as the 4th digit. At first I thought it was due to batch normalization, but then I tried with pretrained VGG16, which I believe does not include batch normalization. This holds on all system settings I have tried: Ubuntu and Windows, TF 1.1-1.4 and 2.0, pretrained on ImageNet and trained from scratch (Inception V3 pretrained on ImageNet, VGG16 with the code below), `predict_on_batch`, `flow_from_directory` and `flow_from_dataframe`. Since it is consistent on all systems, I guess it is not a bug but a known artefact of TF. I have googled and searched the repository but have not found out why this happens.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
                                    include_top=True, weights='imagenet')
path = 'path/to/some/image'  # use any image you've got
img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
batch_size = 32  # changing this to, say, 3 gives different predictions
batch_holder = np.zeros((batch_size, 224, 224, 3))
for i in range(batch_size):
    batch_holder[i, :] = img
predictions = model.predict_on_batch(batch_holder)
print(predictions[0])
```
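One TF-independent effect that can produce exactly this kind of low-digit drift: floating-point addition is not associative, and different batch sizes can change the accumulation order inside the underlying matrix-multiply kernels. A minimal illustration of the mechanism (this names a plausible cause, not a confirmed diagnosis of the report):

```python
# Floating-point addition is not associative: the same three numbers summed
# in a different order give (slightly) different results.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False
print(a, b)     # 0.6000000000000001 0.6
```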
tensorflow/tensorflow
TPU model.fit: 2 GB of RAM limit
Bug

Describe the current behavior: there is an error ("Session is not found") when trying to use a dataset with a size of more than 2 GB of RAM. The error is reproducible for different step sizes (256, 1024, 8192).

Describe the expected behavior: no error when feeding a dataset which is more than 2 GB of RAM.

Code to reproduce the issue (to reproduce, just copy the following code to Colab with TPU enabled):

```python
import tensorflow as tf
import numpy as np
import distutils

if distutils.version.LooseVersion(tf.__version__) < '1.14':
    raise Exception('This notebook is compatible with TensorFlow 1.14 or higher; '
                    'for TensorFlow 1.13 or lower please use the previous version')

import os
resolver = tf.contrib.cluster_resolver.TPUClusterResolver('grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.contrib.distribute.initialize_tpu_system(resolver)
strategy = tf.contrib.distribute.TPUStrategy(resolver)

optimizer = tf.contrib.tpu.CrossShardOptimizer(tf.train.GradientDescentOptimizer(0.01))

with strategy.scope():
    # create model
    model = tf.keras.applications.VGG16(input_shape=(32, 32, 3), classes=10, weights=None)
    model.compile(optimizer=optimizer, loss='mse', metrics=['mse'])

# 1024 * 192 * 32 * 32 * 3 * 4 bytes = 2,415,919,104 bytes
x = np.zeros((1024 * 192, 32, 32, 3), dtype=np.float32)
y = np.ones((1024 * 192, 10), dtype=np.float32)
model.fit(x, y, epochs=10, steps_per_epoch=1024)  # batch size is 192 per TPU
```

Other info / logs:

```
AbortedError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1355     try:
-> 1356       return fn(*args)
   1357     except errors.OpError as e:

13 frames
AbortedError: Session 14acf315e768e91c is not found.

During handling of the above exception, another exception occurred:

AbortedError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1368         pass
   1369       message = error_interpolation.interpolate(message, self._graph)
-> 1370       raise type(e)(node_def, op, message)
   1371
   1372   def _extend_graph(self):

AbortedError: Session 14acf315e768e91c is not found.
```

```python
print(tf.GIT_VERSION)  # v1.14.0-0-g87989f6959
print(tf.VERSION)      # 1.14.0
```
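The comment in the repro already works out the array size; spelled out, the fed arrays exceed the 2 GiB protocol-buffer message limit that feeding large constants into a graph is subject to, which is a plausible trigger for the failure (linking it to the "Session is not found" error is an assumption here, not something the logs confirm):

```python
# Size arithmetic from the report: 1024 steps * 192 per-TPU batch,
# 32x32x3 float32 images at 4 bytes per element.
x_bytes = 1024 * 192 * 32 * 32 * 3 * 4
print(x_bytes)              # 2415919104
print(x_bytes > 2**31 - 1)  # True: over the 2 GiB limit
```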
tensorflow/tensorflow
Possible file name typo in Raspberry Pi example
Bug

URL(s) with the issue:

Description of issue (what needs changing): near the bottom, in the instructions on speeding up inference, it reads `python3 classify_picamera.py`. Based on the files in that example's folder, should that instead read `python3 detect_picamera.py`?

Clear description: if my assumption above is incorrect, then it's unclear where the file `classify_picamera.py` is coming from, and it should maybe be explicitly mentioned.

Submit a pull request? I do not plan to, since it's possibly a simple typo, but I can if you'd like.
tensorflow/tensorflow
TensorFlow 2.0.0: tf.keras.layers.TimeDistributed layer can't be saved to SavedModel
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution: Colaboratory, GPU runtime
- TensorFlow version: 2.0.0
- Python version: 3.6.8

Describe the current behavior: the model defined below, which has a tf.keras.layers.TimeDistributed layer, cannot be saved by the model.save function. It shows the error below:

```
ValueError                                Traceback (most recent call last)
/usr/lib/python3.6/inspect.py in getfullargspec(func)
   1125                                        skip_bound_arg=False,
-> 1126                                        sigcls=Signature)
   1127     except Exception as ex:

41 frames
ValueError: no signature found for builtin

The above exception was the direct cause of the following exception:

TypeError                                 Traceback (most recent call last)
/usr/lib/python3.6/inspect.py in getfullargspec(func)
   1130     else:  # so. To be fully backwards compatible, we catch all
   1131         # possible exceptions here, and reraise a TypeError.
-> 1132         raise TypeError('unsupported callable') from ex
   1133
   1134     args = ...

TypeError: unsupported callable
```

Describe the expected behavior: in TensorFlow 2.0.0, model.save is supposed to save the model in the SavedModel format by default.

Code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_datasets as tfds  # import implied by the snippet

mirrored_strategy = tf.distribute.MirroredStrategy()

def get_data():
    datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
    mnist_train, mnist_test = datasets['train'], datasets['test']

    BUFFER_SIZE = 10000
    BATCH_SIZE_PER_REPLICA = 64
    BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync

    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
    eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
    return train_dataset, eval_dataset

def get_model():
    with mirrored_strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64, activation='softmax')),
            tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64, activation='softmax')),
            tf.keras.layers.Dense(10, activation='softmax'),
        ])
        model.compile(loss='sparse_categorical_crossentropy',
                      optimizer=tf.keras.optimizers.Adam(),
                      metrics=['accuracy'])
    return model

model = get_model()
model.save('test_save')
```

Other info / logs: this can be reproduced not only in Colaboratory but also on a normal Ubuntu machine with tensorflow-gpu 2.0.0 installed.
tensorflow/tensorflow
TF silently rounds to integer when converting to PIL format
Bug

System information:
- Have I written custom code: not really, no
- OS Platform and Distribution: Linux Ubuntu 18.04.2 LTS (Bionic Beaver)
- TensorFlow installed from (source or binary): pip install (binary)
- TensorFlow version (use command below): 1.14.0
- Python version: 3.5.2
- Bazel version (if compiling from source): n/a
- GCC/Compiler version (if compiling from source): n/a
- CUDA/cuDNN version: running on CPU only
- GPU model and memory: running on CPU only

Describe the current behavior: when converting to PIL format from an array and back, TF silently rounds values to integers (see below).

Describe the expected behavior: throw a warning when silently rounding; maybe convert to float instead of integer if the input is a float dtype.

Code to reproduce the issue: please see the short example test case here. In particular, see cell 4 (random noise image, 4800 unique values) vs cell 6 (image after conversion to PIL and back, 256 unique values).

Other info / logs: happy to provide if necessary.
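The reported 4800-to-256 collapse can be reproduced without TF or PIL: converting float image data to an 8-bit buffer (which a round-trip through an 8-bit PIL image effectively does) quantizes the value set to at most 256 levels. A minimal sketch of the mechanism:

```python
# 4800 distinct floats in [0, 1), mimicking the random-noise image.
values = [i / 4800.0 for i in range(4800)]

# Silent integer rounding into an 8-bit range, as the PIL round-trip does.
quantized = [int(round(v * 255)) for v in values]

print(len(set(values)), len(set(quantized)))  # 4800 256
```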
tensorflow/tensorflow
tf.keras accepts incorrect CNN input shape from tf.data.Dataset when eager execution is disabled
Bug

System information:
- Have I written custom code: yes
- OS Platform and Distribution: Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14.0
- Python version: 3.7
- Device: CPU

Describe the current behavior: I am using a tf.data.Dataset to train a tf.keras model. My CNN's input layer size is (72, 96, 1). I found that if I input a tensor with size (96, 72, 1) (note the reversed dimensions), training completes successfully. However, if I enable eager execution, a ValueError is produced, as expected.

Describe the expected behavior: a ValueError should be produced in response to the incorrect input shape, whether or not eager execution is enabled.

Code to reproduce the issue (note: with the code below there is no error message; however, uncomment the tf.enable_eager_execution() line and you will get a ValueError):

```python
#!/usr/bin/env python3
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Uncomment this line and you will get an error
# tf.enable_eager_execution()

def tf_dataset():
    def data_gen():
        while True:
            yield np.random.rand(72, 96, 1), np.random.rand(18)

    types = (tf.float64, tf.float64)
    shapes = (tf.TensorShape([None, None, None]), tf.TensorShape([None]))
    dataset = tf.data.Dataset.from_generator(data_gen, types, output_shapes=shapes)
    dataset = dataset.batch(16)
    return dataset

dataset = tf_dataset()

model = tf.keras.Sequential()
# Notice how the input dimensions are mismatched
model.add(layers.InputLayer((96, 72, 1)))
model.add(layers.Conv2D(filters=32, kernel_size=4, strides=4, activation='relu'))
model.add(layers.Conv2D(filters=32, kernel_size=4, strides=4, activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(18, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
print(model.summary())
model.fit(dataset, steps_per_epoch=1, epochs=1)
```
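A plausible reading of why graph mode lets the mismatch through: the generator declares output_shapes of (None, None, None), so there is no static dimension to check against the InputLayer's (96, 72, 1), while eager mode sees the concrete (72, 96, 1) arrays. The shape-compatibility check can be sketched in plain Python (hypothetical helper, not TF internals):

```python
def compatible(declared, actual):
    # None means "unknown dimension": nothing to check against.
    return len(declared) == len(actual) and all(
        d is None or d == a for d, a in zip(declared, actual))

# Graph mode only sees the declared generator shapes: nothing to catch.
print(compatible((None, None, None), (72, 96, 1)))  # True
# Eager mode sees the concrete arrays: the mismatch is detectable.
print(compatible((96, 72, 1), (72, 96, 1)))         # False
```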
tensorflow/tensorflow
Size and CombinedNMS op support request for TFLite
Bug

System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (or github SHA if from source): 2.0.0

Provide the text output from tflite_convert:

```
2019-10-04 16:17:28.794407: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-10-04 16:17:28.798220: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2019-10-04 16:17:28.798242: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: wd-ai-lab
2019-10-04 16:17:28.798246: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: wd-ai-lab
2019-10-04 16:17:28.798282: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 430.26.0
2019-10-04 16:17:28.798299: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 396.51.0
2019-10-04 16:17:28.798303: E tensorflow/stream_executor/cuda/cuda_diagnostics.cc:313] kernel version 396.51.0 does not match DSO version 430.26.0 -- cannot find working devices in this configuration
2019-10-04 16:17:28.798415: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-04 16:17:28.821484: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 4008000000 Hz
2019-10-04 16:17:28.822257: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5563ee4031d0 executing computations on platform Host. Devices:
2019-10-04 16:17:28.822268: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2019-10-04 16:17:32.842974: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2019-10-04 16:17:32.843029: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-10-04 16:17:32.858437: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-10-04 16:17:32.858454: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2019-10-04 16:17:32.858458: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-10-04 16:17:35.240170: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2019-10-04 16:17:35.240264: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-10-04 16:17:35.995629: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-10-04 16:17:35.995652: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant_folding: Graph size after: 1001 nodes (-358), 2553 edges (-358), time = 346.923ms.
2019-10-04 16:17:35.995656: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant_folding: Graph size after: 1001 nodes (0), 2553 edges (0), time = 114.777ms.
Traceback (most recent call last):
  File "/home/wd-ai/git-pkgs/yolov3-tf2/convert_tflite.py", line 29, in <module>
    app.run(main)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/wd-ai/git-pkgs/yolov3-tf2/convert_tflite.py", line 24, in main
    tflite_model = converter.convert()
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 446, in convert
    **converter_kwargs)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2019-10-04 16:17:37.744297: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744348: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744519: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744542: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744676: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744683: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Size
2019-10-04 16:17:37.744759: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: CombinedNonMaxSuppression
2019-10-04 16:17:37.757406: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 743 operators, 1376 arrays (0 quantized)
2019-10-04 16:17:37.767372: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 743 operators, 1376 arrays (0 quantized)
2019-10-04 16:17:38.148616: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 284 operators, 533 arrays (0 quantized)
2019-10-04 16:17:38.152237: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 284 operators, 533 arrays (0 quantized)
2019-10-04 16:17:38.155833: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 284 operators, 533 arrays (0 quantized)
2019-10-04 16:17:38.158553: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 284 operators, 533 arrays (0 quantized)
2019-10-04 16:17:38.164271: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 66560128 bytes, theoretical optimal value: 44408960 bytes.
2019-10-04 16:17:38.165392: E tensorflow/lite/toco/toco_tooling.cc:466] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DIV, EXP, EXPAND_DIMS, FILL, LEAKY_RELU, LOGISTIC, MUL, PACK, PAD, RESHAPE, RESIZE_NEAREST_NEIGHBOR, SHAPE, SPLIT_V, STRIDED_SLICE, SUB. Here is a list of operators for which you will need custom implementations: CombinedNonMaxSuppression, Size.

Traceback (most recent call last):
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/wd-ai/miniconda3/envs/yolov3-tf2-gpu/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52, in execute
    enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DIV, EXP, EXPAND_DIMS, FILL, LEAKY_RELU, LOGISTIC, MUL, PACK, PAD, RESHAPE, RESIZE_NEAREST_NEIGHBOR, SHAPE, SPLIT_V, STRIDED_SLICE, SUB. Here is a list of operators for which you will need custom implementations: CombinedNonMaxSuppression, Size.
```

Also, please include a link to a GraphDef or the model if possible.

Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Input pipeline guide refactoring
Bug

URL(s) with the issue: "Better performance with the tf.data API"

Description of issue (what needs changing): currently this guide seems to be the main documentation source for tf.data usage. However, the different steps shown do not seem to be optimal. For example:
- TFRecordDataset usage does not match the actual API (see #33048)
- usage of interleave with TFRecordDataset is redundant (see the linked SO post)

Submit a pull request: no PR intended for the moment, as I think this requires some discussion, unless I missed something.
tensorflow/tensorflow
TFRecordDataset constructor does not support glob patterns
Bug

URL(s) with the issue: "Better performance with the tf.data API", section "Structure of an input pipeline"

Description of issue (what needs changing): the code example provided instantiates a tf.data.TFRecordDataset passing a globbing pattern ("path/to/dataset/train-*.tfrecord") while it does not support it. The example should be updated to rely on tf.data.Dataset.list_files:

```diff
- dataset = tf.data.TFRecordDataset("path/to/dataset/train-*.tfrecord")
+ dataset = tf.data.TFRecordDataset(tf.data.Dataset.list_files("path/to/dataset/train-*.tfrecord"))
```

Submit a pull request: I will submit a pull request soon.
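For context, tf.data.Dataset.list_files performs glob expansion over the pattern, much like the stdlib glob module. A TF-free sketch of why the literal pattern string is not itself a usable filename and has to be expanded by something:

```python
import glob
import os
import tempfile

# Create a few shard files matching a pattern (hypothetical names).
d = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(d, "train-%d.tfrecord" % i), "w").close()

pattern = os.path.join(d, "train-*.tfrecord")
print(os.path.exists(pattern))  # False: no file is literally named that
print(len(glob.glob(pattern)))  # 3: glob expansion finds the shards
```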
tensorflow/tensorflow
TPU support in TensorFlow 2.0 release
Bug

From the following link I understood that TPU training is supported in TensorFlow 2.0. I followed the snippet of code provided on the same page:

```python
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
```

and I get the following error:

```
InvalidArgumentError: Unable to find a context_id matching the specified one (7989870214237460624). Perhaps the worker was restarted, or the context was GC'd?
Additional GRPC error information:
{"created":"@1570180842.964900283","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Unable to find a context_id matching the specified one (7989870214237460624). Perhaps the worker was restarted, or the context was GC'd?","grpc_status":3}
```

The same error is thrown when I don't provide the TPU address, as stated in the above link ("The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it."). The test was done in Google Colab with the TPU accelerator selected. I installed TensorFlow with `pip install tensorflow-gpu`. If TPUs are not yet supported in TF 2.0, when are they planned to be?
tensorflow/tensorflow
Dumping XLA compile results in a specific directory not working
Bug

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution: CentOS 7.6 x86_64
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): r1.12.3
- Python version: 3.6.7
- Bazel version (if compiling from source): 0.19.1
- GCC/Compiler version (if compiling from source): gcc 7.4.0
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.5
- GPU model and memory: Tesla V100-PCIE 32 GB, TITAN Xp 12 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: XLA-compiled module files are not saved in the desired directory, even though I declared the XLA dump directory in an explicit manner. According to this link, a compiled module should be saved as module_XXXX.ptx or something similar, which is not visible. Meanwhile, I can only see the graph descriptions of the model under /tmp, even though I already declared a different directory far from /tmp:

```
$ ls /tmp
before_mark_for_compilation_1.pbtxt  before_mark_for_compilation_2.pbtxt
before_mark_for_compilation_3.pbtxt  before_mark_for_compilation_4.pbtxt
before_mark_for_compilation_5.pbtxt  before_mark_for_compilation_6.pbtxt
before_mark_for_compilation.pbtxt    mark_for_compilation_1.pbtxt
mark_for_compilation_2.pbtxt         mark_for_compilation_3.pbtxt
mark_for_compilation_4.pbtxt         mark_for_compilation_5.pbtxt
mark_for_compilation_6.pbtxt         mark_for_compilation.pbtxt
```

Describe the expected behavior: XLA-compiled module files should be saved somewhere under the dump directory:

```
$ ls /somewhere/xladump
```

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):

```
TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit --tf_xla_clustering_debug" \
TF_DUMP_GRAPH_PREFIX=/somewhere/xladump \
XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/somewhere/xladump" \
PYTHONDONTWRITEBYTECODE=1 time python blahblah.py
```

Other info / logs:

Build flags: build with Ignite: Y; build with XLA JIT: Y.

Runtime logs:

```
2019-10-04 15:10:36.868003: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-10-04 15:10:36.868683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.911 pciBusID: 0000:09:00.0 totalMemory: 11.91GiB freeMemory: 11.75GiB
2019-10-04 15:10:36.868705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-10-04 15:10:37.195028: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-04 15:10:37.195073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-10-04 15:10:37.195082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-10-04 15:10:37.195174: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11367 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-10-04 15:10:37.784171: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/before_mark_for_compilation.pbtxt
2019-10-04 15:10:37.794203: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/mark_for_compilation.pbtxt
2019-10-04 15:10:37.847682: I tensorflow/compiler/xla/service/service.cc:149] XLA service 0x7f237c001c70 executing computations on platform CUDA. Devices:
2019-10-04 15:10:37.847749: I tensorflow/compiler/xla/service/service.cc:157]   StreamExecutor device (0): TITAN Xp, Compute Capability 6.1
2019-10-04 15:10:37.903711: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:402] WARNING: you are using ptxas 10.0.145, which is older than 9.2.88. ptxas 9.x before 9.2.88 is known to miscompile XLA code, leading to incorrect results or invalid-address errors. You do not need to update to CUDA 9.2.88; cherry-picking the ptxas binary is sufficient.
2019-10-04 15:10:40.008334: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/before_mark_for_compilation_1.pbtxt
2019-10-04 15:10:40.009382: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/mark_for_compilation_1.pbtxt
2019-10-04 15:10:40.548377: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/before_mark_for_compilation_2.pbtxt
2019-10-04 15:10:40.549325: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/mark_for_compilation_2.pbtxt
2019-10-04 15:10:40.783753: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/before_mark_for_compilation_3.pbtxt
2019-10-04 15:10:40.784706: I tensorflow/compiler/tf2xla/dump_graph.cc:79] Dumped GraphDef to: /tmp/mark_for_compilation_3.pbtxt
```

- TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit --tf_xla_clustering_debug"
- XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/sponge"
- TF_DUMP_GRAPH_PREFIX=/sponge
tensorflow/tensorflow
Memory leak on TF 2.0 with model.predict and/or model.fit with Keras
Bug

System information:
- OS Platform: macOS 10.14.6 (18G103), kernel version Darwin 18.7.0
- TensorFlow installed from: binary, using `pip install tensorflow`
- Python version: Python 3.7.3
- GPU model and memory: no GPU
- TensorFlow version: `python -c "import tensorflow as tf; print(tf.__version__)"` gives 2.0.0

Describe the current behavior: when run using TensorFlow 1.14 or the Theano backend, this code works fine. After upgrading to TensorFlow 2.0.0 it stopped working, and memory usage increases without the program finishing.

Describe the expected behavior: using Theano I get 28 seconds per iteration; using TensorFlow 2.0.0 I expect the same behavior or better.

Code to reproduce the issue:

```python
import gym
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
from tensorflow.keras import layers

env = gym.make('NChain-v0')

def q_learning_keras(env, num_episodes=1000):
    # create the keras model
    model = tf.keras.Sequential()
    model.add(layers.InputLayer(batch_input_shape=(1, 5)))
    model.add(layers.Dense(10, activation='sigmoid'))
    model.add(layers.Dense(2, activation='linear'))
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    # now execute the q learning
    y = 0.95
    eps = 0.5
    decay_factor = 0.999
    r_avg_list = []
    for i in range(num_episodes):
        s = env.reset()
        eps *= decay_factor
        if i % 100 == 0:
            print("Episode {} of {}".format(i + 1, num_episodes))
        done = False
        r_sum = 0
        while not done:
            if np.random.random() < eps:
                a = np.random.randint(0, 2)
            else:
                a = np.argmax(model.predict(np.identity(5)[s:s + 1]))
            new_s, r, done, _ = env.step(a)
            target = r + y * np.max(model.predict(np.identity(5)[new_s:new_s + 1]))
            target_vec = model.predict(np.identity(5)[s:s + 1])[0]
            target_vec[a] = target
            model.fit(np.identity(5)[s:s + 1], target_vec.reshape(-1, 2),
                      epochs=1, verbose=0)
            s = new_s
            r_sum += r
        r_avg_list.append(r_sum / 1000)
    plt.plot(r_avg_list)
    plt.ylabel('Average reward per game')
    plt.xlabel('Number of games')
    plt.show()
    for i in range(5):
        print("State {} - action {}".format(i, model.predict(np.identity(5)[i:i + 1])))

if __name__ == "__main__":
    q_learning_keras(env)
```
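A stdlib way to confirm per-iteration memory growth like the reported one, without TF: snapshot allocations before and after the loop with tracemalloc and compare. The `leaky_step` here is a stand-in for the predict/fit call, not the actual Keras code:

```python
import tracemalloc

def leaky_step(store):
    # Stand-in for a call that retains memory on each iteration.
    store.append([0.0] * 1000)

tracemalloc.start()
store = []
before = tracemalloc.take_snapshot()
for _ in range(500):
    leaky_step(store)
after = tracemalloc.take_snapshot()

# Net allocation growth between the two snapshots, grouped by line.
growth = sum(s.size_diff for s in after.compare_to(before, "lineno"))
print(growth > 0)  # True: memory grew across iterations
```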
tensorflow/tensorflow
ERROR: Config value android_arm64 is not defined in any .rc file
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: build template)

System information:
- OS Platform and Distribution: Linux Ubuntu 16.04 / 18.04
- Mobile device: n/a
- TensorFlow installed from: source
- TensorFlow version: 1.15
- Python version: 3.6
- Installed using: virtualenv / pip
- Bazel version (if compiling from source): 0.26
- GCC/Compiler version, CUDA/cuDNN version, GPU model and memory: n/a

When running:
```shell
bazel build -c opt --config=android_arm64 --cxxopt='--std=c++11' \
  //tensorflow/lite/tools/evaluation/tasks/coco_object_detection:run_eval
```
I get the error:
```
ERROR: Config value android_arm64 is not defined in any .rc file
```
How can I fix it?
tensorflow/tensorflow
tf.function fails with tf.ragged.boolean_mask
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Windows 10
- TensorFlow version: v2.0.0-rc2-26-g64c3d382ca (2.0.0)
- Python version: 3.7.4
- CUDA/cuDNN version: none
- GPU model and memory: no GPU

Describe the current behavior: A function containing `tf.ragged.boolean_mask` and decorated with `tf.function` works on first execution but fails when executed with a different input set. `experimental_relax_shapes=True` does not help.

Code to reproduce the issue:
```python
import tensorflow as tf

# Define some RaggedTensors
print('a2 and b2 are ragged tensors with batch length 2:')
a2 = tf.ragged.constant([[1, 2, 3], [4, 5]], dtype=tf.float32,
                        ragged_rank=1, inner_shape=())
print('a2:', a2)
b2 = tf.ragged.constant([[1], [2, 3]], dtype=tf.float32,
                        ragged_rank=1, inner_shape=())
print('b2:', b2)

print('a3 and b3 are ragged tensors with batch length 3:')
a3 = tf.ragged.constant([[1, 2, 3], [4, 5], [3]], dtype=tf.float32,
                        ragged_rank=1, inner_shape=())
print('a3:', a3)
b3 = tf.ragged.constant([[1], [2, 3], [5]], dtype=tf.float32,
                        ragged_rank=1, inner_shape=())
print('b3:', b3)

# Define a function with tf.ragged.boolean_mask
print('We define a function fun with tf.ragged.boolean_mask')
def fun(x):
    maximum = tf.reduce_max(x, axis=1)
    mask = maximum > 4
    selection = tf.ragged.boolean_mask(x, mask)
    return tf.reduce_sum(selection)

# Run the function in eager mode
print('Run in eager mode:')
print('fun(a2):', fun(a2))
print('fun(a3):', fun(a3))
print('fun(b2):', fun(b2))
print('fun(b3):', fun(b3))

# Now run the same in graph mode
fun = tf.function(fun, experimental_relax_shapes=True)
print('Run in graph mode:')
print('fun(a2):', fun(a2))
print('fun(a3):', fun(a3))
print('fun(b2):', fun(b2))
print('fun(b3):', fun(b3))
```

Output of the code:
```
Run in eager mode:
fun(a2): tf.Tensor(9.0, shape=(), dtype=float32)
fun(a3): tf.Tensor(9.0, shape=(), dtype=float32)
fun(b2): tf.Tensor(0.0, shape=(), dtype=float32)
fun(b3): tf.Tensor(5.0, shape=(), dtype=float32)
Run in graph mode:
fun(a2): tf.Tensor(9.0, shape=(), dtype=float32)
fun(a3): tf.Tensor(9.0, shape=(), dtype=float32)
2019-10-04 10:19:33.187674: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Input to reshape is a tensor with 3 values, but the requested shape has 5
  [[{{node RaggedMask/RaggedMask/boolean_mask/Reshape}}]]
Traceback (most recent call last):
  File "bug_in_tensorflow_tf_function_on_ragged_boolean_mask.py", line 43, in <module>
    print('fun(b2):', fun(b2))
  File "...\tensorflow_core\python\eager\def_function.py", line 457, in __call__
    result = self._call(*args, **kwds)
  ...
  File "...\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 3 values, but the requested shape has 5
  [[node RaggedMask/RaggedMask/boolean_mask/Reshape (defined at ...\tensorflow_core\python\framework\ops.py:1751)]] [Op:__inference_fun_1126]

Function call stack: fun
```
tensorflow/tensorflow
Memory leak
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see below
- OS Platform and Distribution: Ubuntu 18.04 (also demonstrated on Windows 7)
- TensorFlow installed from: binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 (2.0.0); the problem disappears on v1.14.0-rc1-22-gaf24dc91b5 (1.14.0)
- Python version: 3.6.8
- CUDA/cuDNN version: CUDA v10.0, cuDNN v7.6.2.24
- GPU model and memory: NVIDIA GeForce 840M, but the problem persists in the non-GPU version

Describe the current behavior: When creating a trivially simple model and then entering a loop that calls predict with dummy input, memory consumption increases indefinitely over time. On my system, a model with a single hidden layer of only 32 nodes will consume all available system RAM (~10 GB) after only 10 minutes. The problem happens on v2.0 (GPU or CPU) of TensorFlow but disappears when running identical code on v1.14.

Describe the expected behavior: Expected memory consumption to quickly stabilize, but it never does.

Code to reproduce the issue:
```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense
import numpy as np

# build model
inp = Input(shape=(10,))
x = Dense(32)(inp)
out = Dense(2)(x)

# compile model
model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='mse')

# create dummy input data
fake_data = np.random.uniform(low=0, high=1.0, size=(1, 10))

while True:
    # repeatedly predict; no memory leak if this line is replaced with `pass`
    model.predict(fake_data)
```
tensorflow/tensorflow
Stop inheriting tf.keras.Model to build custom layers in tutorials and guides
Bug
URL(s) with the issue: Encoder, BahdanauAttention and Decoder inherit tf.keras.Model ("Variables and optimizers" section). These models do not need to be subclasses of tf.keras.Model, because we don't use Model's utility methods; Model composes layers. The text "The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model." is not accurate since TF 1.13 and standalone Keras 2.3.0. There are other tutorials which use tf.keras.Model when tf.keras.layers.Layer would be sufficient.

Description of issue (what needs changing): We should stop inheriting tf.keras.Model in tutorials when we don't use the utility functions of Model.

Clear description: Since TF 1.13 and standalone Keras 2.3.0, layers set as attributes of a layer are now tracked. Also, "Writing custom layers and models with Keras" says a Model is just like a Layer, but with added training and serialization utilities; we don't need to inherit Model unless we use its utility methods. I think the idea "always extend Model because Model has more features" is not correct, because the utilities of Model work only with a special subset of layers: layers whose `call` receives only one input (training.py#L1461). Thus I think we should stop inheriting tf.keras.Model in tutorials when tf.keras.layers.Layer is enough. (My original question on Stack Overflow gives context for this bug.)

Submit a pull request? I'm sending a pull request to fix them.
tensorflow/tensorflow
Error converting NMT sequence-to-sequence example to TFLite model
Bug
I am following this example. It works as it should and saves checkpoints. I now want to convert this to a TF Lite model, following the "Convert a SavedModel" / "Convert a concrete function" examples. Here is what I am running to save and then convert:

```python
tflite_input_tensor = tf.constant(1., shape=[64, 39])
tflite_target_tensor = tf.constant(1., shape=[64, 7])
tflite_enc_hidden_tensor = tf.constant(1., shape=[64, 1024])

export_dir = 'saved_models/checkpoint_f_train_step'
to_save = checkpoint_f.get_concrete_function(
    tflite_input_tensor, tflite_target_tensor, tflite_enc_hidden_tensor)
tf.saved_model.save(checkpoint, export_dir, to_save)

converter = tf.lite.TFLiteConverter.from_concrete_functions([to_save])
tflite_model = converter.convert()
```

But I am getting this error:

```
.../tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    845         outputs = base_layer_utils.mark_as_return(outputs, acd)
    846       else:
--> 847         outputs = call_fn(cast_inputs, *args, **kwargs)
    ...
.../tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    290   def wrapper(*args, **kwargs):
    291     with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292       return func(*args, **kwargs)

TypeError: call() missing 2 required positional arguments: 'hidden' and 'enc_output'
```

Training with:

```python
@tf.function
def train_step(inp, targ, enc_hidden):
    loss = 0
    with tf.GradientTape() as tape:
        enc_output, enc_hidden = encoder(inp, enc_hidden)
        dec_hidden = enc_hidden
        dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
        # Teacher forcing - feeding the target as the next input
        for t in range(1, targ.shape[1]):
            # passing enc_output to the decoder
            predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
            loss += loss_function(targ[:, t], predictions)
            # using teacher forcing
            dec_input = tf.expand_dims(targ[:, t], 1)
    batch_loss = (loss / int(targ.shape[1]))
    variables = encoder.trainable_variables + decoder.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))
    return batch_loss

EPOCHS = 3
for epoch in range(EPOCHS):
    start = time.time()
    enc_hidden = encoder.initialize_hidden_state()
    total_loss = 0
    for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
        batch_loss = train_step(inp, targ, enc_hidden)
        total_loss += batch_loss
        if batch % 100 == 0:
            print('Epoch {} Batch {} Loss {:.4f}'.format(
                epoch + 1, batch, batch_loss.numpy()))
    # saving (checkpointing) the model every 2 epochs
    if (epoch + 1) % 2 == 0:
        checkpoint.save(file_prefix=checkpoint_prefix)
    print('Epoch {} Loss {:.4f}'.format(epoch + 1, total_loss / steps_per_epoch))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```

Somehow the parameters for the Decoder call are not being passed in:

```python
class Decoder(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
        super(Decoder, self).__init__()
        self.batch_sz = batch_sz
        self.dec_units = dec_units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.rnn = tf.keras.layers.GRU(self.dec_units,
                                       return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform',
                                       unroll=True)
        self.fc = tf.keras.layers.Dense(vocab_size)
        # used for attention
        self.attention = BahdanauAttention(self.dec_units)

    def call(self, x, hidden, enc_output):
        # enc_output shape == (batch_size, max_length, hidden_size)
        context_vector, attention_weights = self.attention(hidden, enc_output)
        # x shape after passing through embedding == (batch_size, 1, embedding_dim)
        x = self.embedding(x)
        # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
        x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
        # passing the concatenated vector to the GRU
        output, state = self.rnn(x)
        # output shape == (batch_size * 1, hidden_size)
        output = tf.reshape(output, (-1, output.shape[2]))
        # output shape == (batch_size, vocab)
        x = self.fc(output)
        return x, state, attention_weights
```

I understand there may be some trouble converting the GRU layer, but I will tackle that next. This seems to blow up before it can even check whether the GRU is able to be converted.
tensorflow/tensorflow
Basic tutorial from TensorFlow 2.0 no longer runs on small machines
Bug
System information:
- TensorFlow basic tutorial code from tensorflow.org
- Python version: 3.7.4 (standard yum repository), compiled by GCC 7.3.1 20180712 (Red Hat 7.3.1-6), on a Linux Amazon free-tier EC2 node running Amazon Linux 2
- pip-installed version of TensorFlow: 2.0.0

Describe the current behavior: Dies due to memory constraints on TensorFlow 2.0.0, but not on 1.14.0.

Describe the expected behavior: With TensorFlow 1.14.0 the tutorial works just fine, but with 2.0.0 it runs out of memory. This is a small machine that Amazon has on its free tier (t2.micro); it comes with 1 GB of RAM, and I'm not expecting it to run anything really large, but it's an ideal machine from a cost perspective to try out TensorFlow basics, and it works just fine with TensorFlow 1.14 and earlier.

Code to reproduce the issue (just the TensorFlow tutorial code directly from the TensorFlow website):

```python
#!/usr/bin/env python3
# [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cat test_tf.py
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

Other info / log output from test_tf.py:
```
2019-10-02 13:30:43.682116: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-02 13:30:43.775079: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2400075000 Hz
2019-10-02 13:30:43.778603: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3798a90 executing computations on platform Host. Devices:
2019-10-02 13:30:43.778635: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2019-10-02 13:30:43.960286: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 376320000 exceeds 10% of system memory.
Segmentation fault
```
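The 376,320,000-byte allocation in the warning matches the MNIST training set after the `/255.0` division promotes the uint8 pixels to float64. A quick check of that arithmetic (my calculation, not part of the original report):

```python
# 60,000 MNIST training images of 28x28 pixels; dividing the uint8
# array by 255.0 yields float64, i.e. 8 bytes per pixel.
n_images, height, width = 60_000, 28, 28
float64_bytes = n_images * height * width * 8
print(float64_bytes)  # 376320000, exactly the allocation in the log

# An explicit float32 cast would halve the footprint:
float32_bytes = n_images * height * width * 4
print(float32_bytes)  # 188160000
```

So on a 1 GB t2.micro, the training array alone takes over a third of RAM before the model, gradients, or the test set are accounted for; casting to float32 is one plausible mitigation.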
tensorflow/tensorflow
TF2 stubs for IntelliSense: the current way TF2 imports modules does not support IntelliSense
Bug
System information:
- TensorFlow version you are using: 2.0.0
- Are you willing to contribute it (Yes/No): Yes, happy to help out.

Describe the feature and the current behavior/state: IntelliSense does not work because of the way TF2 handles its import statements. (image)
Will this change the current API? How? Adds stubs.
Who will benefit from this feature? VS Code users.
Any other info: discussed in issuecomment-537143000.
tensorflow/tensorflow
tf.io.gfile.mkdir restricts directory mode (permissions)
Bug
System information:
- Have I written custom code: yes
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: v1.14.0-rc1-22-gaf24dc9 (1.14.0)
- Python version: 3.6

Describe the current behavior: Directories created with tf.io.gfile.mkdir on Linux do not have the `w` mode bit set for group and other, even if allowed by the umask and ACL.

Describe the expected behavior: Directories created with tf.io.gfile.mkdir should have the maximum permissions allowed by the umask and ACL, which is the way os.mkdir in Python works. I think this behavior is caused by the fact that TF always calls mkdir with mode 0755 (#L281), while Python calls mkdir with mode 511 (0o777) if no mode is given (#L1094).

Code to reproduce the issue:
```python
import tensorflow as tf
import stat
import os

os.umask(0o000)

tf_dir = 'test1'
os_dir = 'test2'

tf.io.gfile.mkdir(tf_dir)
tf_mode = os.stat(tf_dir).st_mode

os.mkdir(os_dir)
os_mode = os.stat(os_dir).st_mode

if tf_mode != os_mode:
    print('File modes differ!')
    print('tf: {} os: {}'.format(stat.filemode(tf_mode), stat.filemode(os_mode)))
```
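The os.mkdir behavior the report compares against can be checked in isolation. This sketch (plain Python, not TF code) shows the process umask deciding the final permission bits when the requested mode is 0o777, which is what a hard-coded 0o755 cannot do:

```python
import os
import stat
import tempfile

def make_dir_respecting_umask(path):
    # Requesting 0o777 lets the process umask determine the final
    # permission bits, matching Python's os.mkdir default; a
    # hard-coded 0o755 would ignore a more permissive umask.
    os.mkdir(path, mode=0o777)
    return stat.S_IMODE(os.stat(path).st_mode)

old_umask = os.umask(0o022)
try:
    with tempfile.TemporaryDirectory() as tmp:
        mode = make_dir_respecting_umask(os.path.join(tmp, "d"))
        print(oct(mode))  # 0o755 under umask 022 on POSIX systems
finally:
    os.umask(old_umask)
```

Under `umask 0o000`, the same call would produce 0o777, which is exactly the difference the reproduction script above detects.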
tensorflow/tensorflow
TensorFlow 2.0 Adadelta optimizer
Bug
Please go to Stack Overflow for help and support. If you open a GitHub issue, here is our policy:
1. It must be a bug, a feature request, or a significant problem with documentation (for small doc fixes please send a PR instead).
2. The form below must be filled out.
3. It shouldn't be a TensorBoard issue (those go here).

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
- Exact command to reproduce:

You can collect some of this information using our environment capture script. You can obtain the TensorFlow version with:
```bash
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
```

Describe the problem: Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

Source code / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
tensorflow/tensorflow
Image loading error
Bug
I'm using TF 2.0.0-rc1. In the image-loading tutorial, where it says:

```python
for image, label in labeled_ds.take(1):
    print('Image shape:', image.numpy().shape)
    print('Label:', label.numpy())
```
I get:
```
InvalidArgumentError: {{function_node __inference_Dataset_map_process_path_378}} slice index 1 of dimension 0 out of bounds.
  [[{{node strided_slice}}]] [Op:IteratorGetNextSync]
```

And then:

```python
train_ds = prepare_for_training(labeled_ds)
image_batch, label_batch = next(iter(train_ds))
```
```
InvalidArgumentError: {{function_node __inference_Dataset_map_process_path_378}} slice index 1 of dimension 0 out of bounds.
  [[{{node strided_slice}}]] [Op:IteratorGetNextSync]
```

Some kind of problem?
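For context, the tutorial's `process_path` derives the label by slicing the path components, and a "slice index 1 of dimension 0 out of bounds" error is consistent with a file that has no class directory above it. A plain-Python sketch of that assumption (a hypothetical helper, not the tutorial's TF code):

```python
def label_from_path(path):
    # The image-loading tutorial expects <root>/<class>/<file>; a
    # file with no class directory leaves nothing at index -2,
    # analogous to the strided_slice out-of-bounds error above.
    parts = path.split("/")
    if len(parts) < 2:
        raise ValueError("path has no class directory: %r" % path)
    return parts[-2]

print(label_from_path("flower_photos/roses/1.jpg"))  # roses
```

If this is the cause, checking for stray files sitting directly in the dataset root would be one way to confirm it.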
tensorflow/tensorflow
Documentation for streaming training data from disk
Bug
I think we need a more detailed description and examples for streaming training data from disk. The basic mechanics are mentioned in other documents, but there are no how-tos. It would be helpful if we added how to implement streaming data from disk, and improvements in TF 2.0, if any.

The docs currently say: "When iterating over training data that fits in memory, feel free to use regular Python iteration. Otherwise, tf.data.Dataset is the best way to stream training data from disk." And: "For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If your tensors contain one or more large NumPy arrays, consider the alternative described in this guide [from_tensor_slices]." The link to this guide is broken.

Also: "The tf.data API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The tf.data.TFRecordDataset class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline." (Consuming TFRecord data)

It would also be helpful to make clear:
- whether tf.data.TFRecordDataset is the only class that supports streaming,
- whether TFRecord is the only file format that supports streaming,
- whether users need to convert their datasets to TFRecord format,
- whether the trained model can be used with TFLite, etc.
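The streaming idea such a how-to should explain reduces to a generator that never materializes the whole file. Here is a plain-Python sketch of the record-oriented pattern (fixed-size records for simplicity; real TFRecord files carry a length header per record, so this is an illustration, not the TFRecord format):

```python
def stream_records(path, record_size):
    # Yield one record at a time so memory use stays constant no
    # matter how large the file is -- the property that makes
    # record-oriented formats suitable for > 1 GB datasets.
    with open(path, "rb") as f:
        while True:
            record = f.read(record_size)
            if not record:
                return
            yield record
```

Usage: `for record in stream_records("data.bin", 4): ...` keeps at most one record in memory, whereas `open(path, "rb").read()` loads the entire file, which is exactly the distinction the requested documentation should spell out.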
tensorflow/tensorflow
"Upgrade your code to TensorFlow 2.0" is behind a Medium paywall
Bug
URL(s) with the issue:
Description of issue (what needs changing): This upgrade guide is linked from the TensorFlow 2.0 release notes. If viewers have exceeded their quota on Medium, they are blocked by a paywall and cannot read the upgrade guide. Is this intentional? (image)
tensorflow/tensorflow
Malformed documentation for tf.strings.split
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:
Description of issue (what needs changing): The last section is raw Markdown instead of formatted HTML.
- Clear description: for example, why should someone use this method? How is it useful?
- Correct links: is the link to the source code correct?
- Parameters defined: are all parameters defined and formatted correctly?
- Returns defined: are return values defined?
- Raises listed and defined: are the errors defined?
- Usage example: is there a usage example?
- Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
- Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow
No documentation for tf.strings.reduce_join
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:
Description of issue (what needs changing): Add documentation for this method.
- Clear description
- Correct links: is the link to the source code correct?
- Parameters defined: are all parameters defined and formatted correctly?
- Returns defined: are return values defined?
- Raises listed and defined: are the errors defined?
- Usage example: is there a usage example?
- Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
- Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow
Wrong links in API documents and search results
Bug
Items in "TensorFlow Core r1.14" are linked to r2.0. In search results, all information is correct except the links: all links point to the RC version, not r1.14. I checked tf.nn.dynamic_rnn and tf.nn.fused_batch_norm; dynamic_rnn is linked to the r2.0 page now, but the r1.14 page would be right.
tensorflow/tensorflow
Setting the TFLite interpreter GPU delegate on Android, its output differs from the output without the GPU delegate
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Android
- Mobile device: Mi Pad 4 Plus
- TensorFlow installed from: source
- TensorFlow version: 1.14
- Python version: 3.7

Describe the current behavior: Setting the TFLite interpreter GPU delegate on Android, its output is different from the output without the GPU delegate, and it seems that whatever data I feed, the model output stays the same. Is something wrong with my code? I have checked my code, and its output is correct without setting the GPU delegate.

Describe the expected behavior: Use the GPU to accelerate inference.

Code to reproduce the issue — here is the code where I set the TFLite interpreter options:

```java
tfliteModel = loadModelFile(activity);
switch (device) {
    case NNAPI:
        tfliteOptions.setUseNNAPI(true);
        break;
    case GPU:
        gpuDelegate = new GpuDelegate();
        tfliteOptions.addDelegate(gpuDelegate);
        break;
    case CPU:
        break;
}
tfliteOptions.setNumThreads(numThreads);
tflite = new Interpreter(tfliteModel, tfliteOptions);
```

And here is the code I use for inference, where the input is a four-dimensional array and the output is a two-dimensional array, corresponding to the model's input and output dimensions:

```java
tflite.run(audioData, outputs);
```

Other info / logs:
```
2019-09-30 14:11:28.433 30654-30654 I tflite: Created TensorFlow Lite delegate for GPU.
2019-09-30 14:11:28.435 30654-30654 I tflite: Initialized TensorFlow Lite runtime.
2019-09-30 14:11:28.464 30654-30654 I Adreno: QUALCOMM build: dcd4b96, I568c71768a; Build Date: 04/30/18; OpenGL ES Shader Compiler Version: EV031.22.00.01.06
2019-09-30 14:11:28.467 30654-30654 I Adreno: PFP: 0x005ff087, ME: 0x005ff063
2019-09-30 14:11:28.472 30654-30654 E libEGL: call to OpenGL ES API with no current context (logged once per thread)
... (activity lifecycle and OpenGLRenderer log lines omitted) ...
2019-09-30 14:11:31.156 768-815 E ANDR-PERF-OPTSHANDLER: perf_lock_rel: updated /sys/class/mmc_host/mmc0/clk_scaling/enable with 1, return value 2
```
tensorflow/tensorflow
Model training does not improve accuracy
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Linux Ubuntu 19.04
- TensorFlow installed from: pip
- TensorFlow version: 2.0.0-rc2
- Python version: 3.7.3
- CUDA/cuDNN version: release 10.0, V10.0.130
- GPU model and memory: NVIDIA GTX 1080 Ti

Describe the current behavior: When attempting to train a sequential model on the MNIST dataset, the model remains at 11% accuracy. This is only resolved when the inputs are scaled by 255.

Describe the expected behavior: The model should improve in accuracy.

Code to reproduce the issue / other info: Please reference the linked issue for more information. This behavior was not observed in 2.0.0-beta0.
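The fix the reporter mentions, scaling the inputs by 255, is plain input normalization; a minimal sketch of the idea (my illustration, not the reporter's code):

```python
def normalize(pixels):
    # Map raw uint8 pixel values (0..255) into [0, 1]. Without
    # this, the optimizer sees badly scaled inputs and accuracy
    # can stall near chance level -- about 1/10, i.e. the ~11%
    # the report describes for 10 MNIST classes.
    return [p / 255.0 for p in pixels]

print(normalize([0, 51, 255]))  # [0.0, 0.2, 1.0]
```

That the model learns with this scaling but stalls without it suggests a preprocessing/optimizer-sensitivity interaction rather than a broken model definition, which fits the observation that 2.0.0-beta0 behaved differently.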
tensorflow/tensorflow
convolution2d_transpose got an unexpected keyword argument 'kernel_constraint'
Bug
When I use tf.contrib.layers.convolution2d_transpose (TensorFlow 1.13, GPU, Ubuntu) I get this error: `convolution2d_transpose() got an unexpected keyword argument 'kernel_constraint'`. But when I check the documentation, the `kernel_constraint` parameter exists in tf.contrib.layers.convolution2d_transpose. How can I fix it?
tensorflow/tensorflow
Broken link for API development recommendations
Bug
This page links to a page which doesn't exist: "We encourage the community to develop and maintain support for other languages with the approach recommended by the TensorFlow maintainers."
tensorflow/tensorflow
TF 2.0 API docs: tf.nn.batch_normalization
Bug
URL(s) with the issue:
Description of issue (what needs changing): documentation.
- Clear description: yes
- Correct links (is the link to the source code correct?): yes
- Parameters defined (are all parameters defined and formatted correctly?): no. The docs use `tf.nn.moments(..., keep_dims=True)`, but in the TF2 version of tf.nn.moments the `keep_dims` keyword should be `keepdims` instead. Also, the op does not implement the equation as given, but equation 11 in Algorithm 2 of the paper. (image)
- Returns defined: yes
- Raises listed and defined: no
- Usage example: no
- Request visuals, if applicable: no
- Submit a pull request (are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide): yes
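The formula in question, y = gamma * (x - mean) / sqrt(var + eps) + beta (algebraically equal to the inference-time form in Algorithm 2 of the batch-norm paper), can be written out directly. A pure-Python sketch for a single 1-D feature (my illustration, not the docs' code):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-3):
    # Batch normalization for one feature: normalize the batch to
    # zero mean / unit variance, then scale by gamma and shift by
    # beta. Variance is the biased estimate, as tf.nn.moments
    # computes it.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print(abs(sum(out)) < 1e-9)  # True: the normalized batch has ~zero mean
```

A worked usage example along these lines is exactly the kind of addition the checklist above flags as missing from the docs page.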
tensorflow/tensorflow
TypeError: An op outside of the function building code is being passed
Bug
I have written code that takes the previous LSTM output as the current input, as a sequence generator. But when I want to pass the state of the LSTM cells to the next batch by keeping the states as a member of the generator class, it raises an exception: `TypeError: An op outside of the function building code is being passed`. When adding `@tf.function` to the main function in my code, the exception becomes `AttributeError: 'Tensor' object has no attribute 'numpy'`. I suspect this is about the computation graph, but I can't understand the reason. My TensorFlow version is 2.0.0-rc1:

```
In [4]: tf.__version__
Out[4]: '2.0.0-rc1'
```

I don't know whether it is a bug or not. Thanks for your attention. The code is:

```python
import tensorflow as tf
from tensorflow import keras
import numpy as np


class RnnGenerator(keras.layers.Layer):
    def __init__(self, rnn_dim, rnn_layer_num, seq_len, seq_width, batch_size, **kwargs):
        self.rnn_dim = rnn_dim                # LSTM hidden dim
        self.rnn_layer_num = rnn_layer_num    # number of LSTM cell layers
        self.seq_len = seq_len                # length of sequence
        self.seq_width = seq_width            # width of sequence
        self.batch_size = batch_size
        self.cells = {}
        super().__init__(**kwargs)

    def build(self, input_shape):
        # input dense
        self.dense_in = keras.layers.Dense(self.rnn_dim)
        # many LSTM cell layers
        for i in range(self.rnn_layer_num):
            cell = keras.layers.LSTMCell(units=self.rnn_dim)
            self.cells[i] = cell
        # output dense
        self.dense_out = keras.layers.Dense(self.seq_width, activation='sigmoid')
        super().build(input_shape)

    def call(self, inputs):
        inputs = tf.squeeze(inputs, [1])
        # init cell states
        states = getattr(self, 'states', None)
        if states is None:
            states = {}
            for i, cell in self.cells.items():
                # init cell states
                states[i] = (tf.random.uniform((self.batch_size, self.rnn_dim)),
                             tf.random.uniform((self.batch_size, self.rnn_dim)))
        # previous output as current input
        rand_prev = tf.random.uniform((self.batch_size, self.seq_width), dtype=tf.float32)
        prev_input = tf.concat([rand_prev, inputs], 1)
        outputs = []
        for _ in range(self.seq_len):
            cell_input = self.dense_in(prev_input)
            for i, cell in self.cells.items():
                cell_output, states[i] = cell(cell_input, states[i])
                cell_input = cell_output
            step_output = self.dense_out(cell_output)
            prev_input = tf.concat([step_output, inputs], 1)
            outputs.append(tf.expand_dims(step_output, 1))
        # preserve cell states from the current batch
        self.states = states
        return tf.concat(outputs, 1)


@tf.function
def main():
    batch_size = 17
    seq_width = 4
    rand_input_dim = 3
    inputs = keras.layers.Input(shape=(1, rand_input_dim))
    outputs = RnnGenerator(rnn_dim=64, rnn_layer_num=2, seq_len=10,
                           seq_width=seq_width, batch_size=batch_size)(inputs)
    outputs = keras.layers.Flatten()(outputs)
    outputs = keras.layers.Dense(1)(outputs)
    outputs = tf.nn.sigmoid(outputs)
    model = keras.Model(inputs=inputs, outputs=outputs)
    model.summary()
    x = np.random.rand(batch_size, 1, rand_input_dim).astype(np.float32)
    y = np.zeros((batch_size, 1))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd',
                  metrics=['accuracy'])
    model.fit(x, y, batch_size=32, epochs=10)


if __name__ == '__main__':
    main()
```

Running info (without `@tf.function` before main):

```bash
2019-09-28 21:43:30.966344: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-28 21:43:30.980391: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f88282a2f60 executing computations on platform Host. Devices:
2019-09-28 21:43:30.980415: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
Train on 17 samples
Epoch 1/10
WARNING:tensorflow: .../tensorflow_core/python/ops/math_grad.py:1394: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
17/17 [==============================] - 2s 98ms/sample
Traceback (most recent call last):
  File "/Users/xxxxxx/Desktop/gan_seq_test.py", line 85, in <module>
    main()
  File "/Users/xxxxxx/Desktop/gan_seq_test.py", line 81, in main
    model.fit(x, y, batch_size=32, epochs=10)
  File ".../tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File ".../tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit
    total_epochs=epochs)
  File ".../tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
    batch_outs = execution_function(iterator)
  File ".../tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
    distributed_function(input_fn))
  File ".../tensorflow_core/python/eager/def_function.py", line 457, in __call__
    result = self._call(*args, **kwds)
  ...
  File ".../tensorflow_core/python/eager/execute.py", line 61, in quick_execute
    num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.
```
init scope add my constant 2 the graph tensor have name model rnn generator lstm cell 10 mul 2 0 run info with tf function before main bash 2019 09 28 21 48 39 227708 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 09 28 21 48 39 242540 I tensorflow compiler xla service service cc 168 xla service 0x7fdda3d85620 execute computation on platform host device 2019 09 28 21 48 39 242575 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version warning tensorflow from user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python ops math grad py 1424 where from tensorflow python op array op be deprecate and will be remove in a future version instruction for update use tf where in 2 0 which have the same broadcast rule as np where train on 17 sample epoch 1 10 traceback most recent call last file user xxxxxx desktop gan seq test py line 85 in main file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager def function py line 457 in call result self call args kwd file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager def function py line 503 in call self initialize args kwd add initializer to initializer map file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager def function py line 408 in initialize args kwd file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py line 1848 in get concrete function internal garbage collect graph function self maybe define function args kwargs file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py line 2150 in maybe define function graph function self create graph function args kwargs file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager function py line 
2041 in create graph function capture by value self capture by value file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python framework func graph py line 915 in func graph from py func func output python func func args func kwargs file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python eager def function py line 358 in wrap fn return weak wrap fn wrap args kwd file user xxxxxx anaconda3 envs tf2 lib python3 6 site package tensorflow core python framework func graph py line 905 in wrapper raise e ag error metadata to exception e attributeerror in convert code relative to user xxxxxx desktop gan seq test py 81 main model fit x y batch size 32 epoch 10 anaconda3 envs tf2 lib python3 6 site package tensorflow core python keras engine training py 728 fit use multiprocesse use multiprocesse anaconda3 envs tf2 lib python3 6 site package tensorflow core python keras engine training array py 674 fit step name step per epoch anaconda3 envs tf2 lib python3 6 site package tensorflow core python keras engine training array py 393 model iteration batch out f in batch anaconda3 envs tf2 lib python3 6 site package tensorflow core python keras backend py 3635 call x numpy for x in output pylint disable protect access anaconda3 envs tf2 lib python3 6 site package tensorflow core python keras backend py 3635 x numpy for x in output pylint disable protect access attributeerror tensor object have no attribute numpy
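The pattern this report is after — carrying recurrent state from one batch to the next via an object attribute — can be sketched framework-free. This is an illustrative pure-numpy model of the idea, not the tf.keras API; the class and helper names are our own:

```python
import numpy as np

class StatefulGenerator:
    """Toy recurrent generator that keeps its hidden state between calls,
    mimicking the 'state as a member of the generator class' idea."""

    def __init__(self, state_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.state = None  # populated on first call, reused afterwards

    def step(self, x):
        if self.state is None:
            self.state = np.zeros(x.shape[0:1] + (self.W.shape[0],))
        # simple recurrent update: the new state depends on the old state and the input
        self.state = np.tanh(self.state @ self.W + x)
        return self.state

gen = StatefulGenerator(state_dim=4)
batch = np.ones((2, 4))
first = gen.step(batch)
second = gen.step(batch)  # uses the state produced by the first call
```

In eager TensorFlow this works because `self.state` holds concrete tensors, but under `@tf.function` the stored value is a graph tensor produced during tracing, which is what triggers the "op outside of the function building code" error; keeping persistent state in a `tf.Variable` is the usual fix.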
tensorflowtensorflow
image classification code sample error
Bug
thank you for submitting a tensorflow documentation issue. per our github policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on github. the tensorflow docs are open source! to get involved, read the documentation contributor guide. url(s) with the issue: description of issue (what needs changing): `acc = history.history['accuracy']` and `val_acc = history.history['val_accuracy']` should be `acc = history.history['acc']` and `val_acc = history.history['val_acc']`. clear description: when I copy and paste the code and run it directly, the model trains for a few minutes (a while), but then I get `KeyError: 'accuracy'`, and when I change that I get `KeyError: 'val_accuracy'`. someone would use this to perform image recognition with code taken directly from the tensorflow docs on this page. correct links: it is not correct yet; I am thinking of making a PR to the docs repo. parameters defined: n/a. returns defined: n/a. raises listed: n/a. usage example: to train images. request visuals, if applicable. submit a pull request? are you planning to also submit a pull request to fix the issue (see the docs contributor guide and the docs style guide)? I am.
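The key names in `history.history` follow the metric names the version of Keras logs, so whether the curve is stored under `'accuracy'` or `'acc'` depends on the installed release. A minimal defensive lookup (plain Python; `get_metric` and the sample dict are our own illustration, not part of Keras) sidesteps the KeyError whichever name is logged:

```python
def get_metric(history_dict, *candidates):
    """Return the first metric curve found under any of the candidate names."""
    for name in candidates:
        if name in history_dict:
            return history_dict[name]
    raise KeyError(f"none of {candidates} found in {sorted(history_dict)}")

# simulated history.history from a Keras version that logs 'acc' / 'val_acc'
hist = {"acc": [0.5, 0.7], "val_acc": [0.4, 0.6], "loss": [1.2, 0.9]}

acc = get_metric(hist, "accuracy", "acc")
val_acc = get_metric(hist, "val_accuracy", "val_acc")
```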
tensorflowtensorflow
tf.image.extract_patches bug: incorrect values
Bug
system information
- have I written custom code (as opposed to using a stock example script provided in tensorflow): yes
- os platform and distribution (e.g., linux ubuntu 16.04): docker, built from tensorflow/tensorflow:2.0.0rc0-gpu-py3
- mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: -
- tensorflow installed from (source or binary): docker, built from tensorflow/tensorflow:2.0.0rc0-gpu-py3
- tensorflow version (use command below): 2.0.0rc0
- python version: 3.6.8
- bazel version (if compiling from source): -
- gcc/compiler version (if compiling from source): -
- cuda/cudnn version: 10.0 / 6.0.21, also tested on 10.1 / 7.1.4 (gtx 1070, 8 gb)
- gpu model and memory: quadro, 2 gb

describe the current behavior: I am using tf.image.extract_patches to get patches of an image read by cv2. I pad the image with zeros, and so I noticed that the last extracted patches (after reshaping the output) have unexpected values at the very end (the last pixels of the last rows); they look like they come from another part of the original image. All the other patches look quite all right. I also tried retyping the image to float32 before it goes to extract_patches; that didn't help.

code to reproduce the issue:

```python
def image_to_patches(image, patch_size, stride):
    target_width = ((image.shape[1] - patch_size) // stride + 1) * stride + patch_size - 1
    target_height = ((image.shape[0] - patch_size) // stride + 1) * stride + patch_size - 1
    image = np.pad(image, ((0, target_height - image.shape[0]),
                           (0, target_width - image.shape[1]),
                           (0, 0)))
    # here the last rows of image are all zeros
    batch_images = np.expand_dims(image, 0)
    patches = tf.image.extract_patches(images=batch_images,
                                       sizes=[1, patch_size, patch_size, 1],
                                       strides=[1, stride, stride, 1],
                                       rates=[1, 1, 1, 1],
                                       padding='VALID',
                                       name=None)
    patches = np.array(patches)
    patches = np.resize(patches, (patches.shape[0] * patches.shape[1] * patches.shape[2], -1))
    patches = np.resize(patches, (patches.shape[0], patch_size, patch_size, image.shape[-1]))
    return patches
```
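As a ground truth to compare the TF output against, VALID-padding patch extraction can be written directly in numpy. This reference (our own helper, not part of TensorFlow) simply slices the padded image, so patches that overlap the zero-padded rows provably contain only zeros there:

```python
import numpy as np

def extract_patches_ref(image, patch_size, stride):
    """Reference VALID-padding patch extraction by explicit slicing."""
    h, w, c = image.shape
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    out = np.empty((rows, cols, patch_size, patch_size, c), dtype=image.dtype)
    for i in range(rows):
        for j in range(cols):
            out[i, j] = image[i * stride:i * stride + patch_size,
                              j * stride:j * stride + patch_size]
    return out

img = np.arange(6 * 6 * 1, dtype=np.float32).reshape(6, 6, 1)
img[4:] = 0.0  # zero-padded bottom rows, as in the report
patches = extract_patches_ref(img, patch_size=3, stride=2)
# the bottom-right patch overlaps the zero rows only in its last row
```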
tensorflowtensorflow
deprecation warn when train a simple embed with custom function
Bug
- have I written custom code (as opposed to using a stock example script provided in tensorflow): yes
- os platform and distribution (e.g., linux ubuntu 16.04): macos 10.13.6
- tensorflow installed from (source or binary): pip install tensorflow==2.0.0-beta1
- tensorflow version (use command below): v2.0.0-beta1-5101-gc75bb66a99 2.0.0-rc0
- python version: v3.6.7 (6ec5cf24b7, oct 20 2018, 03:02:14)

describe the current behavior: the following code trains a simple embedding on temporal indices z to match a given time series x:

```python
import tensorflow as tf
from tensorflow.keras import Model, losses, optimizers
from tensorflow.keras.layers import Embedding
import numpy as np

def extract_indices(x, indices):
    """Extract the lines of x given by the indices.

    If indices.shape == (4,) and x.shape == (20, 5),
    then the output has shape (4, 5).
    """
    indices = tf.reshape(indices, (-1, 1))
    return tf.gather_nd(x, indices)

class Test(Model):
    def __init__(self, x, **kwargs):
        super(Test, self).__init__(**kwargs)
        self.x = x
        self.optimizer = optimizers.Adam(learning_rate=1e-3)
        self.loss = losses.CategoricalCrossentropy()
        self.z = Embedding(x.shape[0], x.shape[1])

    def call(self, inputs, training=None):
        # inputs are temporal indices
        z_t = self.z(inputs)
        return z_t

    def custom_train(self):
        @tf.function
        def train_step(indices):
            with tf.GradientTape() as tape:
                z_t = self(indices, training=True)
                x_t = extract_indices(self.x, indices)
                loss = self.loss(x_t, z_t)
            grads = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.trainable_variables))

        # this is a toy example: we simply make one training step on the index 0
        train_step(tf.constant([0]))

T, N = 20, 5
x = np.random.rand(T, N)
model = Test(x=x)
model.custom_train()
```

and I get the following warning about the use of a deprecated method:

```
WARNING: Logging before flag parsing goes to stderr.
W0927 19:04:48.957805 140735485318016 deprecation.py:323] From /path/to/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1394: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
```

this reminds me of another issue I opened, with a lot of deprecation warnings arising when I used the API. That issue was closed, but the warnings are still here, and in fact the release of rc0 has brought a new one; see my last comment there.
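Whether these particular messages can be silenced depends on how TF emits them (many are routed through absl logging rather than Python's `warnings` machinery), but for warnings that do go through the standard module, the stdlib filter below illustrates the mechanism. The deprecated function here is a stand-in of our own, not a TF API:

```python
import warnings

def old_api():
    # stand-in for a deprecated library function
    warnings.warn("use new_api in 2.0", DeprecationWarning)
    return 42

# without a filter, the call emits a DeprecationWarning...
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_api()
noisy = len(caught)

# ...with an ignore filter installed, it stays silent
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", DeprecationWarning)
    old_api()
quiet = len(caught)
```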
tensorflowtensorflow
matmul matvec einsum bug when dim exceed 2 16
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 tensorflow instal from source or binary binary tensorflow version use command below git v1 12 1 9365 gff401a6 tf 1 15 0 dev20190821 python version 3 6 8 cuda cudnn version 10 0 6 0 21 also test on 10 1 7 1 4 gtx 10708 gb gpu model and memory quadro 2 gb describe the current behavior gpu float32 implementation of tf matmul tf matvec and tf einsum all fail when apply to tensor with a dimension 2 16 65536 though not necessarily the summation dimension could be relate to this of particular note the example below show the bug as a result of sum over a dimension of size 2 and input of typical size even one so be not a case of overflow of the individual entry the issue do not appear when use tf float64 or cpu implementation result be not deterministic at least not when the large dimension be 2 16 1 or more and use tf linalg matvec implementation thoguh they do appear stable on at 2 16 the tf linalg matvec implementation seem to converge to the the hacky tf matmul version thoguh that converge value be different to the einsum value the issue do not appear in this example when run in graph mode though I believe it still do occur in other circumstance the more complicated example I extract this from be not resolve by run in graph mode but I can not produce a simple example that exhibit failure in graph mode this may be because of the non determinism discuss above describe the expect behavior consistency of implementation between cpu gpu correct value code to reproduce the issue python import numpy as np import tensorflow as tf np random seed 123 def matvec prim np a b numpy implementation of tf linalg matvec a b transpose a true return np sum a np expand dim b axis 1 axis 2 def matvec einsum np a b return np einsum ijk ij ik a b def matvec linalg a b return tf linalg matvec a b transpose a true def 
matvec matmul a b return tf squeeze tf linalg matmul a tf expand dim b axis 1 transpose a true axis 1 def matvec prim a b return tf reduce sum a tf expand dim b axis 1 axis 2 def matvec einsum a b a tf convert to tensor a b tf convert to tensor b return tf einsum ijk ij ik a b def max err x y if hasattr x numpy x x numpy if hasattr y numpy y y numpy return np max np abs x y def report n value one dtype np float64 device gpu 0 shape n 2 2 n 2 if value one a b np one s dtype dtype for s in shape elif value uniform a b np random uniform size s astype dtype dtype for s in shape elif value normal a b np random normal size s astype dtype dtype for s in shape else raise valueerror value must be one uniform or normal get format value a extent np min a np max a b extent np min b np max b base matvec prim np a b name matvec einsum np val matvec einsum np a b fns matvec prim matvec linalg matvec matmul matvec einsum a tf constant a dtype dtype b tf constant b dtype dtype call each function twice to demonstrate non determinism of linalg matvec with tf device device tf val tuple fn a b for fn in fns tuple fn a b for fn in fns name extend fn name for fn in fns name extend fn name for fn in fns if tf execute eagerly tf val v numpy for v in tf val else with tf session as sess tf val sess run tf val val extend tf val print format n dtype name value device print a extent format a extent print b extent format b extent for name value in zip name val print 20 format name max err base value work 65535 just too big 65536 even big 65537 dtype np float64 tf compat v1 enable eager execution for device in cpu 0 gpu 0 for dtype in np float64 np float32 for n in work just too big even big for value in one uniform normal report n dtype dtype value value device device other info log all result in the above be good except for n 65536 dtype tf float32 and device gpu 0 those log be below 65535 float32 normal gpu 0 a extent 4 471484 4 361648 b extent 5 123302 4 7598076 matvec einsum np 0 0 matvec 
prim 0 0 matvec linalg 0 0 matvec matmul 0 0 matvec einsum 0 0 matvec prim 0 0 matvec linalg 0 0 matvec matmul 0 0 matvec einsum 0 0 65536 float32 one gpu 0 a extent 1 0 1 0 b extent 1 0 1 0 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 1 0 matvec matmul 2 0 matvec einsum 1 0 matvec prim 0 0 matvec linalg 1 0 matvec matmul 2 0 matvec einsum 1 0 65536 float32 uniform gpu 0 a extent 1 0896997e 06 0 9999953 b extent 1 2711744e 06 0 99999875 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 0 24248471856117249 matvec matmul 0 3159205913543701 matvec einsum 0 4631575047969818 matvec prim 0 0 matvec linalg 0 24248471856117249 matvec matmul 0 3159205913543701 matvec einsum 0 4631575047969818 65536 float32 normal gpu 0 a extent 4 715384 4 746153 b extent 4 523676 4 59162 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 0 6585559248924255 matvec matmul 0 709877073764801 matvec einsum 0 4629881978034973 matvec prim 0 0 matvec linalg 0 6585559248924255 matvec matmul 0 709877073764801 matvec einsum 0 4629881978034973 65537 float32 one gpu 0 a extent 1 0 1 0 b extent 1 0 1 0 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 4 812917232513428 matvec matmul 2 0 matvec einsum 1 0 matvec prim 0 0 matvec linalg 2 0 matvec matmul 2 0 matvec einsum 1 0 65537 float32 uniform gpu 0 a extent 6 188006e 07 0 9999941 b extent 5 5208016e 06 0 9999933 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 2 9982364177703857 matvec matmul 0 4082830250263214 matvec einsum 0 40019217133522034 matvec prim 0 0 matvec linalg 0 4082830250263214 matvec matmul 0 4082830250263214 matvec einsum 0 40019217133522034 65537 float32 normal gpu 0 a extent 5 1086445 4 5197787 b extent 4 8438015 3 928705 matvec einsum np 0 0 matvec prim 0 0 matvec linalg 4 819068908691406 same function same input matvec matmul 2 1483089923858643 matvec einsum 2 5970025062561035 matvec prim 0 0 matvec linalg 2 1483089923858643 same function same input matvec matmul 2 1483089923858643 matvec einsum 2 5970025062561035
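The report's methodology — cross-checking several mathematically equivalent matvec formulations against each other — can be reproduced in plain numpy at small sizes. These mirror the `matvec_prim_np` / `matvec_einsum_np` references in the script above (batched `result[n, k] = sum_j a[n, j, k] * b[n, j]`, i.e. `matvec(a, b, transpose_a=True)`); the exact variable names are our own:

```python
import numpy as np

def matvec_prim_np(a, b):
    # broadcast b over the last axis of a, then reduce the summation axis
    return np.sum(a * np.expand_dims(b, axis=2), axis=1)

def matvec_einsum_np(a, b):
    return np.einsum("ijk,ij->ik", a, b)

rng = np.random.default_rng(123)
a = rng.normal(size=(4, 8, 5))
b = rng.normal(size=(4, 8))

x = matvec_prim_np(a, b)
y = matvec_einsum_np(a, b)
# batched matmul formulation: transpose a, multiply, squeeze
z = (np.transpose(a, (0, 2, 1)) @ b[:, :, None])[:, :, 0]
```

At these sizes all three agree to floating-point tolerance, which is exactly the invariant the GPU float32 kernels break once a dimension reaches 2^16.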
tensorflowtensorflow
combo tpu tfrecord for model evaluate be not work
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 google colab tensorflow instal from source or binary tensorflow version use command below 1 14 python version 3 6 8 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory tpu colab describe the current behavior on a tpu colab running model evaluate on a tf datum dataset build with tfrecord throw compilation failure dynamic spatial reduce window be not support reduce window 21 f32 1 127 127 3 3 2 1 0 reduce window f32 1 256 256 3 3 2 1 0 reshape 12 f32 constant 16 window size 1x3x3x1 stride 1x2x2x1 to apply max f32 17 metadata op type maxpool op name max pooling2d 4 maxpool tpu compilation fail the fit method work perfectly with the same dataset evaluation work perfectly if I rebuild the model and load the weight on a cpu gpu instance I don t have this issue on tpu if the tf datum dataset be not build from tfrecord describe the expect behavior model evaluate should work and provide a result close from the last fit iteration code to reproduce the issue put it on colab and replace google bucket to define by a real bucket 2 occurrence import tensorflow as tf import numpy as np get a image as input datum for model tensorflow logo curl output tensor logo png build 8 tfrecord base on the logo download def build tf record def serialize example pyfunction image label feature image tf train feature byte list tf train byteslist value image numpy label tf train feature int64 list tf train int64list value label numpy example proto tf train example feature tf train feature feature feature return example proto serializetostring def tf serialize example image label tf string tf py function serialize example pyfunction image label pass these args to the above function tf string the return type be tf string return tf reshape tf string the result be a scalar 
dataset tf datum dataset from tensor slice content tensor logo png for x in range 8 dataset dataset map lambda x tf read file x 0 dataset dataset map tf serialize example return dataset write tf record to disc and move on a google bucket with tf session as sess filename tfrecord test writer tf datum experimental tfrecordwriter filename writting writer write build tf record sess run writting ls from google colab import auth auth authenticate user gsutil cp tfrecord test gs google bucket to define the aim be to quickly build 8 item x y 0 with x of shape 256 256 3 def train input fn dummy datum dataset tf datum dataset from tensor slice 0 for x in range 8 dataset dataset map lambda x tf random normal 256 256 3 0 dataset dataset batch 8 return dataset def train input fn from tf record create a description of the feature feature description image tf fixedlenfeature tf string label tf fixedlenfeature tf int64 default value 0 def parse function example proto parse the input tf example proto use the dictionary above return tf parse single example example proto feature description def process string image dic image string dic image image decode tf image decode png image string channel 3 image decode tf cast image decode tf float32 255 return image decode tf cast dic label tf int32 list file tf datum dataset list file gs google bucket to define tfrecord test raw tfrecord tf datum tfrecorddataset list file file as dict raw tfrecord map parse function file file as dict map process string image file file batch 8 drop remainder true return file basic check to compare train input fn dummy datum train input fn from tf record can t be run after tpu initialisation with tf session as sess batch train input fn dummy datum make one shot iterator get next while true try record sess run batch print shape of dummy item record 0 shape record 1 shape except tf error outofrangeerror break batch train input fn from tf record make one shot iterator get next while true try record sess run batch 
print shape of tfrecord item record 0 shape record 1 shape except tf error outofrangeerror break shape of dummy item 8 256 256 3 8 1 shape of tfrecord item 8 256 256 3 8 1 initialize tpu only once if not strategy in global resolver tf contrib cluster resolver tpuclusterresolver tf contrib distribute initialize tpu system resolver strategy tf contrib distribute tpustrategy resolver build model and compile with strategy scope input tf keras layers input shape 256 256 3 x tf keras layer maxpooling2d 3 3 stride 2 2 input output tf keras layer globalaveragepooling2d x output tf keras layer dense 1 activation sigmoid output model tf keras model inputs input output output model compile adam loss binary crossentropy metric binary accuracy model summary fit the model no issue print train dummy model fit train input fn dummy datum epoch 1 step per epoch 1 print train tfrecord model fit train input fn from tf record epoch 1 step per epoch 1 print evaluate dummy model evaluate train input fn dummy datum step 1 print evaluate tfrecord model evaluate train input fn from tf record step 1 train dummy warning tensorflow expect a shuffle dataset but input dataset x be not shuffle please invoke shuffle on input dataset warn tensorflow from usr local lib python3 6 dist package tensorflow python keras engine training distribute py 411 variable load from tensorflow python op variable be deprecate and will be remove in a future version instruction for update prefer variable assign which have equivalent behavior in 2 x 1 1 0s 397m step loss 0 3180 binary accuracy 1 0000 train tfrecord 1 1 1s 786ms step loss 0 4993 binary accuracy 1 0000 evaluate dummy 1 1 1s 1s step 1 1 1s 1s step evaluate tfrecord unimplementederror traceback most recent call last usr local lib python3 6 dist package tensorflow python client session py in do call self fn args 1355 try 1356 return fn args 1357 except error operror as e 10 frame unimplementederror from job worker replica 0 task 0 compilation failure 
dynamic spatial reduce window be not support reduce window 21 f32 1 127 127 3 3 2 1 0 reduce window f32 1 256 256 3 3 2 1 0 reshape 12 f32 constant 16 window size 1x3x3x1 stride 1x2x2x1 to apply max f32 17 metadata op type maxpool op name max pooling2d 4 maxpool tpu compilation fail node tpureplicatemetadata 3 during handling of the above exception another exception occur other info log the fit method work perfectly with the same dataset evaluation work perfectly if I rebuild the model and load the weight on a cpu gpu instance I don t have this issue on tpu if the tf datum dataset be not build from tfrecord full stacktrace unimplementederror traceback most recent call last usr local lib python3 6 dist package tensorflow python client session py in do call self fn args 1355 try 1356 return fn args 1357 except error operror as e 10 frame usr local lib python3 6 dist package tensorflow python client session py in run fn feed dict fetch list target list option run metadata 1340 return self call tf sessionrun 1341 option feed dict fetch list target list run metadata 1342 usr local lib python3 6 dist package tensorflow python client session py in call tf sessionrun self option feed dict fetch list target list run metadata 1428 self session option feed dict fetch list target list 1429 run metadata 1430 unimplementederror from job worker replica 0 task 0 compilation failure dynamic spatial reduce window be not support reduce window 21 f32 1 127 127 3 3 2 1 0 reduce window f32 1 256 256 3 3 2 1 0 reshape 12 f32 constant 16 window size 1x3x3x1 stride 1x2x2x1 to apply max f32 17 metadata op type maxpool op name max pooling2d 4 maxpool tpu compilation fail node tpureplicatemetadata 3 during handling of the above exception another exception occur unimplementederror traceback most recent call last in 6 model evaluate train input fn dummy datum step 1 7 print evaluate tfrecord 8 model evaluate train input fn from tf record step 1 usr local lib python3 6 dist package tensorflow 
python keras engine train py in evaluate self x y batch size verbose sample weight step callback max queue size worker use multiprocesse 902 sample weight sample weight 903 step step 904 callback callback 905 906 batch size self validate or infer batch size batch size step x usr local lib python3 6 dist package tensorflow python keras engine training distribute py in evaluate distribute model x y batch size verbose sample weight step callback 168 if distribute training util be tpu strategy model distribution strategy 169 return experimental tpu test loop 170 model dataset verbose verbose step step callback callback 171 else 172 return training array test loop usr local lib python3 6 dist package tensorflow python keras engine training distribute py in experimental tpu test loop model dataset verbose step callback 562 callback call batch hook mode begin current step batch log 563 try 564 batch out k batch get value test op output tensor 565 except error outofrangeerror 566 warning msg make sure that your dataset can generate at least usr local lib python3 6 dist package tensorflow python keras backend py in batch get value tensor 3008 raise runtimeerror can not get value inside tensorflow graph function 3009 if tensor 3010 return get session tensor run tensor 3011 else 3012 return usr local lib python3 6 dist package tensorflow python client session py in run self fetch feed dict option run metadata 948 try 949 result self run none fetch feed dict option ptr 950 run metadata ptr 951 if run metadata 952 proto datum tf session tf getbuffer run metadata ptr usr local lib python3 6 dist package tensorflow python client session py in run self handle fetch feed dict option run metadata 1171 if final fetch or final target or handle and feed dict tensor 1172 result self do run handle final target final fetch 1173 feed dict tensor option run metadata 1174 else 1175 result usr local lib python3 6 dist package tensorflow python client session py in do run self handle target 
list fetch list feed dict option run metadata 1348 if handle be none 1349 return self do call run fn feed fetch target option 1350 run metadata 1351 else 1352 return self do call prun fn handle feed fetch usr local lib python3 6 dist package tensorflow python client session py in do call self fn args 1368 pass 1369 message error interpolation interpolate message self graph 1370 raise type e node def op message 1371 1372 def extend graph self unimplementederror from job worker replica 0 task 0 compilation failure dynamic spatial reduce window be not support reduce window 21 f32 1 127 127 3 3 2 1 0 reduce window f32 1 256 256 3 3 2 1 0 reshape 12 f32 constant 16 window size 1x3x3x1 stride 1x2x2x1 to apply max f32 17 metadata op type maxpool op name max pooling2d 4 maxpool tpu compilation fail node tpureplicatemetadata 3 define at 8 original stack trace for tpureplicatemetadata 3 file usr lib python3 6 runpy py line 193 in run module as main main mod spec file usr lib python3 6 runpy py line 85 in run code exec code run global file usr local lib python3 6 dist package ipykernel launcher py line 16 in app launch new instance file usr local lib python3 6 dist package traitlet config application py line 658 in launch instance app start file usr local lib python3 6 dist package ipykernel kernelapp py line 477 in start ioloop ioloop instance start file usr local lib python3 6 dist package tornado ioloop py line 832 in start self run callback self callback popleft file usr local lib python3 6 dist package tornado ioloop py line 605 in run callback ret callback file usr local lib python3 6 dist package tornado stack context py line 277 in null wrapper return fn args kwargs file usr local lib python3 6 dist package zmq eventloop zmqstream py line 536 in self io loop add callback lambda self handle event self socket 0 file usr local lib python3 6 dist package zmq eventloop zmqstream py line 450 in handle event self handle recv file usr local lib python3 6 dist package zmq 
eventloop zmqstream py line 480 in handle recv self run callback callback msg file usr local lib python3 6 dist package zmq eventloop zmqstream py line 432 in run callback callback args kwargs file usr local lib python3 6 dist package tornado stack context py line 277 in null wrapper return fn args kwargs file usr local lib python3 6 dist package ipykernel kernelbase py line 283 in dispatcher return self dispatch shell stream msg file usr local lib python3 6 dist package ipykernel kernelbase py line 235 in dispatch shell handler stream ident msg file usr local lib python3 6 dist package ipykernel kernelbase py line 399 in execute request user expression allow stdin file usr local lib python3 6 dist package ipykernel ipkernel py line 196 in do execute res shell run cell code store history store history silent silent file usr local lib python3 6 dist package ipykernel zmqshell py line 533 in run cell return super zmqinteractiveshell self run cell args kwargs file usr local lib python3 6 dist package ipython core interactiveshell py line 2718 in run cell interactivity interactivity compiler compiler result result file usr local lib python3 6 dist package ipython core interactiveshell py line 2828 in run ast node if self run code code result file usr local lib python3 6 dist package ipython core interactiveshell py line 2882 in run code exec code obj self user global ns self user n file line 8 in model evaluate train input fn from tf record step 1 file usr local lib python3 6 dist package tensorflow python keras engine training py line 904 in evaluate callback callback file usr local lib python3 6 dist package tensorflow python keras engine training distribute py line 170 in evaluate distribute model dataset verbose verbose step step callback callback file usr local lib python3 6 dist package tensorflow python keras engine training distribute py line 520 in experimental tpu test loop test step fn args test input datum file usr local lib python3 6 dist package 
tensorflow python distribute tpu strategy py line 249 in experimental run v2 return tpu run self fn args kwargs file usr local lib python3 6 dist package tensorflow python distribute tpu strategy py line 196 in tpu run maximum shape maximum shape file usr local lib python3 6 dist package tensorflow python tpu tpu py line 592 in replicate maximum shape maximum shape 1 file usr local lib python3 6 dist package tensorflow python tpu tpu py line 854 in split compile and replicate num replicas num replicas use tpu use tpu metadata kwargs file usr local lib python3 6 dist package tensorflow python ops gen tpu op py line 6039 in tpu replicate metadata name name file usr local lib python3 6 dist package tensorflow python framework op def library py line 788 in apply op helper op def op def file usr local lib python3 6 dist package tensorflow python util deprecation py line 507 in new func return func args kwargs file usr local lib python3 6 dist package tensorflow python framework op py line 3616 in create op op def op def file usr local lib python3 6 dist package tensorflow python framework op py line 2005 in init self traceback tf stack extract stack
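The shapes in the TPU failure message are internally consistent: a 3×3 max pool with stride 2 and VALID padding maps a 256-wide axis to 127, matching the `reduce_window f32[1,127,127,3]` from `f32[1,256,256,3]` in the log. The standard VALID output-size formula, sketched below (our own helper, not a TF API):

```python
def valid_pool_out_size(in_size, window, stride):
    """Output length of one spatial axis under VALID-padded pooling/conv."""
    return (in_size - window) // stride + 1

# the MaxPooling2D((3, 3), strides=(2, 2)) layer in the model above:
h = valid_pool_out_size(256, 3, 2)
w = valid_pool_out_size(256, 3, 2)
```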
tensorflowtensorflow
TFLite conversion of a tf.keras model fails
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device (if the issue happens on a mobile device): no
- TensorFlow installed from (source or binary): pip install tensorflow
- TensorFlow version (use command below): v2.0.0-beta1-0-g8e423e3d56 (2.0.0-beta1)
- Python version: 3.6.8
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): 7.4.0
- CUDA/cuDNN version: no
- GPU model and memory: NVIDIA Titan Xp

Describe the current behavior:

>>> import tensorflow as tf
>>> model = tf.keras.models.load_model('keras_model.h5')
>>> model.summary()
Model: "model"
Layer (type)                    Output Shape         Param #     Connected to
input_1 (InputLayer)            (None, 1048576)      0
embed (Embedding)               (None, 1048576, 8)   2056        input_1[0][0]
conv1d (Conv1D)                 (None, 2097, 128)    512128      embed[0][0]
conv1d_1 (Conv1D)               (None, 2097, 128)    512128      embed[0][0]
multiply (Multiply)             (None, 2097, 128)    0           conv1d[0][0]
                                                                 conv1d_1[0][0]
global_max_pooling1d (GlobalMax (None, 128)          0           multiply[0][0]
dense (Dense)                   (None, 128)          16512       global_max_pooling1d[0][0]
dense_1 (Dense)                 (None, 1)            129         dense[0][0]
Total params: 1,042,953
Trainable params: 1,042,953
Non-trainable params: 0

>>> converter = tf.lite.TFLiteConverter.from_keras_model(model)

The converter fails to convert the model:

>>> converter.convert()
2019-09-26 14:39:27.048354: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-09-26 14:39:27.048553: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-09-26 14:39:27.065544: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-09-26 14:39:27.066324: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718] function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-09-26 14:39:27.066655: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718] function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node model/embed/embedding_lookup was passed float from model/embed/embedding_lookup/Read/ReadVariableOp/resource:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 348, in convert
    self._funcs[0])
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py", line 252, in convert_variables_to_constants_v2
    new_output_names)
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 607, in function_from_graph_def
    wrapped_import = wrap_function(_imports_graph_def, [])
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 585, in wrap_function
    collections={}),
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 716, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 80, in __call__
    return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 86, in wrapped
    return fn(*args, **kwargs)
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 605, in _imports_graph_def
    importer.import_graph_def(graph_def, name="")
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/sridhar/pe_csv/malenv3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 431, in import_graph_def
    raise ValueError(str(e))
ValueError: Input 0 of node model/embed/embedding_lookup was passed float from model/embed/embedding_lookup/Read/ReadVariableOp/resource:0 incompatible with expected resource.

I'm able to use the model in a Python-based inference engine; I'm only trying to compress the model to deploy it on a small setup and consume it via a C/C++ wrapper.
tensorflowtensorflow
Problems running an MNIST estimator in distributed mode
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Windows 10
- Mobile device (if the issue happens on a mobile device): n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12
- Python version: 3.6.5
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: running the code below, which I found on many pages on the net, I face some problems.

import json
import os
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

data_dir = 'mnist_data'
log_dir = 'log_dist'
batch_size = 512
tf.logging.set_verbosity(tf.logging.INFO)


def keras_model(lr, decay):
    """Return a CNN Keras model."""
    input_tensor = tf.keras.layers.Input(shape=(784,), name='input')
    temp = tf.keras.layers.Reshape([28, 28, 1], name='input_image')(input_tensor)
    for i, n_units in enumerate([32, 64]):
        temp = tf.keras.layers.Conv2D(n_units, kernel_size=3, strides=(2, 2),
                                      activation='relu', name='cnn' + str(i))(temp)
        temp = tf.keras.layers.Dropout(0.5, name='dropout' + str(i))(temp)
    temp = tf.keras.layers.GlobalAvgPool2D(name='average')(temp)
    output = tf.keras.layers.Dense(10, activation='softmax', name='output')(temp)
    model = tf.keras.models.Model(inputs=input_tensor, outputs=output)
    optimizer = tf.keras.optimizers.Adam(lr=lr, decay=decay)
    model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    print(model.summary())
    return model


def main():
    """Main function."""
    data = read_data_sets(data_dir, one_hot=False, fake_data=False)
    model = keras_model(lr=0.001, decay=0.001)
    config = tf.estimator.RunConfig(model_dir=log_dir, save_summary_steps=1,
                                    save_checkpoints_steps=100)
    estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                      model_dir=log_dir,
                                                      config=config)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'input': data.train.images}, y=data.train.labels,
        num_epochs=None,  # run forever
        batch_size=batch_size, shuffle=True)
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'input': data.test.images}, y=data.test.labels,
        num_epochs=1, shuffle=False)
    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=2000)
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, throttle_secs=1,
                                      steps=None)  # until the end of evaluation data
    evaluate_result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
    print('Evaluation results:')
    for key in evaluate_result[0]:
        print('   {}: {}'.format(key, evaluate_result[0][key]))

The rest of the code just includes the TF_CONFIG definition for chief, worker and ps. I face the issues below:

1. I was able to run this code on TensorFlow 1.12 but not on TensorFlow 1.13, where I get the error "ValueError: Can not squeeze dim[1], expected a dimension of 1, got 10 for 'metrics/acc/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [512,10]". What is the reason?

2. I could get the evaluation results printed at the end of training when running the program in non-distributed mode, but I get the error below when it tries to print the final evaluation results in distributed mode:

Traceback (most recent call last):
  File "mnist_estimator.py", line 81, in <module>
    main()
  File "mnist_estimator.py", line 62, in main
    for key in evaluate_result[0]:
TypeError: 'NoneType' object is not subscriptable

3. The final loss for distributed learning is higher than for non-distributed learning with the same number of training steps. What can be the reason? Is it the nature of distribution?

4. When running in distributed mode, the chief or worker does not wait for the other party to start and immediately begins training; when the other party joins, they do the task together. I think they should wait for each other to be ready, as in my previous experience with TensorFlow distributed training. Shouldn't they?

5. What I have read in TensorFlow-related pages about data parallelism is that the same copy of the code runs on the different servers, except for the assignment in TF_CONFIG: the chief synchronizes the parameter updates and the parameter servers keep the parameters. But I don't clearly understand who splits the data between the different workers. Is there just one copy at the chief server, which splits the data and sends batches to the workers, or does each worker have a local copy of the data and do the splitting (skipping some data) itself?
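The TF_CONFIG assignment mentioned above (chief/worker/ps) is not shown in the report; as a hedged sketch of what such a definition typically looks like, the snippet below builds the JSON that tf.estimator reads from the environment. The host addresses are placeholders, not taken from the issue:

```python
import json
import os

# Hypothetical two-node cluster: one chief, one worker, one parameter server.
CLUSTER = {
    "chief": ["10.0.0.1:2222"],
    "worker": ["10.0.0.2:2222"],
    "ps": ["10.0.0.3:2222"],
}


def make_tf_config(task_type, task_index):
    """Build the TF_CONFIG JSON: the shared cluster plus this process's role."""
    return json.dumps({"cluster": CLUSTER,
                       "task": {"type": task_type, "index": task_index}})


# Each process sets its own role before creating the estimator, e.g.:
os.environ["TF_CONFIG"] = make_tf_config("chief", 0)
```

Every process shares the same cluster dict; only the task type/index differs, which is what lets train_and_evaluate route work between chief, workers and parameter servers.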
tensorflowtensorflow
Inputs to an eager execution function cannot be Keras symbolic tensors
Bug
Version: 2.0.0-rc; Python version: Python 3.7.

This is the shape check function:

@tf.function
def shape_check(input_channels, filters, bottom, second):
    shortcut = tf.cond(tf.equal(input_channels, filters),
                       lambda: bottom,
                       lambda: second)
    return shortcut

The error pops up here (this is a class method; I also tried writing tf.cond inline, but it gives a different error):

def basic_block(self, bottom, filters):
    input_channels = tf.shape(bottom)[-1]
    conv = self._conv_bn_activation(bottom, filters, 3, 1)
    conv = self._conv_bn_activation(conv, filters, 3, 1)
    input_channels = tf.shape(bottom)[-1]
    shortcut = shape_check(input_channels, filters, bottom,
                           self._conv_bn_activation(bottom, filters, 1, 1))
    return conv + shortcut
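A common way around this class of error in ResNet-style blocks is to decide the shortcut with a plain Python comparison, since the channel count is usually known statically at graph-construction time (e.g. bottom.shape[-1] as an int), rather than feeding Keras symbolic tensors into tf.cond. A minimal framework-free sketch of that selection logic (the names are illustrative, not from the report):

```python
def pick_shortcut(input_channels, filters, bottom, projected):
    """Choose the identity vs. projected shortcut with a Python branch.

    When input_channels is a static int, this branch is resolved while the
    graph is being built, so no symbolic tensor ever reaches tf.cond.
    """
    if input_channels == filters:
        return bottom      # identity shortcut: shapes already match
    return projected       # 1x1-projected shortcut: adjusts channel count
```

This only works when the channel dimension is statically known; for truly dynamic shapes the conditional has to stay inside the traced function.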
tensorflowtensorflow
tf.io.gfile.copy and tf.gfile.Copy with input and output the same path and overwrite=True remove all content
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Linux Ubuntu 16.04
- Mobile device (if the issue happens on a mobile device): n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14.0
- Python version: 2.7.15
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:

$ cat label.pbtxt
some txt in the file
>>> tf.io.gfile.copy('label.pbtxt', 'label.pbtxt', overwrite=True)
$ cat label.pbtxt

Describe the expected behavior:

$ cat label.pbtxt
some txt in the file
>>> tf.io.gfile.copy('label.pbtxt', 'label.pbtxt', overwrite=True)
$ cat label.pbtxt
some txt in the file

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):

import tensorflow as tf
tf.io.gfile.copy('label.pbtxt', 'label.pbtxt', overwrite=True)

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
Windows chief cannot establish a session with a Unix worker
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows 10 / CentOS 7
- Mobile device (if the issue happens on a mobile device): n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12.2
- Python version: 3.6.5 on Windows, 3.6.3 on CentOS
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: 9.0
- GPU model and memory: GTX 1080 Ti / RTX 2080 Ti

Describe the current behavior: I run a simple piece of code to report on the system devices in a 2-worker cluster. When the Unix system is the chief (task_idx = 0), it can communicate and establish a session with the Windows worker and display the cluster devices. However, when the Windows system becomes the chief, it cannot establish a session with the Unix worker, hanging on "CreateSession still waiting for response from worker: /job:worker/replica:0/task:1". Both systems can reach each other in both cases via ping.

Code to reproduce the issue:

Code for the chief:

import tensorflow as tf
init = tf.global_variables_initializer()
cluster_spec = tf.train.ClusterSpec({'worker': ['ip_address1:port1', 'ip_address2:port2']})
task_idx = 0
server = tf.train.Server(cluster_spec, job_name='worker', task_index=task_idx)
with tf.Session(server.target) as sess:
    sess.run(init)
    print(sess.list_devices())

Code for the worker:

import tensorflow as tf
init = tf.global_variables_initializer()
cluster_spec = tf.train.ClusterSpec({'worker': ['ip_address1:port1', 'ip_address2:port2']})
task_idx = 1
server = tf.train.Server(cluster_spec, job_name='worker', task_index=task_idx)
server.join()

ip_address1 is always the address of the chief system and ip_address2 is the address of the worker; they are swapped when the Windows and Unix systems swap roles.
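The key invariant in the report is that both processes share one cluster definition and differ only in task_index. A small pure-Python sketch of building the per-process server arguments (the addresses are the report's placeholders) makes that symmetry explicit:

```python
# Placeholder endpoints, as in the report; index 0 is always the chief.
WORKER_HOSTS = ["ip_address1:port1", "ip_address2:port2"]


def server_args(task_idx):
    """Arguments each process would pass to tf.train.Server: the shared
    cluster dict plus its own index into the 'worker' job."""
    if not 0 <= task_idx < len(WORKER_HOSTS):
        raise ValueError("task_index out of range: %d" % task_idx)
    return {
        "cluster": {"worker": list(WORKER_HOSTS)},
        "job_name": "worker",
        "task_index": task_idx,
    }
```

Since only task_index differs between the two processes, a hang that appears only when the chief role moves to the Windows host points at the environment (firewall, gRPC binding) rather than the cluster definition itself.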
tensorflowtensorflow
Send/recv of collective ops hangs in a distributed environment
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Linux CentOS 7.6.1810
- Mobile device (if the issue happens on a mobile device): no
- TensorFlow installed from: binary
- TensorFlow version: v1.13.1-0-g6612da8951
- Python version: 3.6.8
- Bazel version: none
- GCC/compiler version: none
- CUDA/cuDNN version: none
- GPU model and memory: none

Describe the current behavior: the monitored session hangs there fetching the send/recv tensors of the collective ops. The same code works well for fetching the all-reduce tensors.

Describe the expected behavior: the send/recv tensors give proper answers on all workers.

Code to reproduce the issue:

"""Illustrates send/recv in collective ops."""
mp_method = 'fork'  # 'fork' (unix) / 'spawn' (windows)
num_processes = 2
job = 'worker'


def process_fn(hosts, task_index):
    """Process fn."""
    import time
    import tensorflow as tf
    from tensorflow.python.ops import collective_ops

    cluster_spec = tf.train.ClusterSpec({job: hosts})
    host_devices = []
    for task, host in enumerate(hosts):
        host_devices.append(tf.DeviceSpec(job=job, replica=0, task=task,
                                          device_type='CPU', device_index=0))
    chief_host_device = host_devices[0]
    # An unconfigured collective_group_leader makes each worker the leader;
    # '/replica:0' is necessary in the configuration.
    collective_group_leader = chief_host_device.to_string()  # partial device
    config = tf.ConfigProto()
    config.experimental.collective_group_leader = collective_group_leader
    server = tf.train.Server(cluster_spec, config=config, job_name=job,
                             task_index=task_index)
    run_options = tf.RunOptions()
    run_options.experimental.collective_graph_key = 1

    with tf.Graph().as_default():
        weights = []
        tensors = []
        instance_key = 1
        for task, device in enumerate(host_devices):
            with tf.device(device):
                with tf.variable_scope('{}{}'.format(job, task)):
                    weight = tf.get_variable('weight', shape=[])
                weights.append(weight)
                # send/recv
                if task == task_index:
                    tensor = collective_ops.broadcast_send(
                        weight, weight.shape, weight.dtype, len(hosts), 1,
                        instance_key)
                else:
                    tensor = collective_ops.broadcast_recv(
                        weight.shape, weight.dtype, len(hosts), 1, instance_key)
                tensors.append(tensor)
                instance_key += 1
                # allreduce
                # if task == task_index:
                #     tensor = collective_ops.all_reduce(
                #         weight, len(hosts), 0, instance_key, 'Add', 'Div')
                #     tensors.append(tensor)
                #     instance_key += 1

        if task_index == 0:
            session_creator = tf.train.ChiefSessionCreator(master=server.target)
        else:
            session_creator = tf.train.WorkerSessionCreator(master=server.target)
        with tf.train.MonitoredSession(session_creator=session_creator) as mon_sess:
            print('task {} run'.format(task_index))
            result_weights = mon_sess.run(weights, options=run_options)
            print('task {} sense {}'.format(task_index, result_weights))
            result_tensors = mon_sess.run(tensors, options=run_options)
            print('task {} broadcast {}'.format(task_index, result_tensors))
            time.sleep(10)


def start_processes():
    """Start processes."""
    import time
    import multiprocessing as mp
    port = 60000
    host_fmt = 'localhost:{}'
    hosts = []
    for process_index in range(num_processes):
        hosts.append(host_fmt.format(port + process_index))
    mp_ctx = mp.get_context(mp_method)
    processes = []
    for process_index in range(num_processes):
        process = mp_ctx.Process(target=process_fn,
                                 args=(hosts, process_index))
        processes.append(process)
        process.start()
        time.sleep(0.1)
    for process in processes:
        process.join()


if __name__ == '__main__':
    start_processes()

Other info / logs (console):

(tf-1.13-py3) huwh1@huwh1-centos:~/worksync$ python tf_distribute_collective_ops_sendrecv.py
2019-09-25 17:46:29.612863: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-25 17:46:29.625630: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
2019-09-25 17:46:29.625962: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x2c2e220 executing computations on platform Host. Devices:
2019-09-25 17:46:29.625984: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
2019-09-25 17:46:29.627731: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:252] Initialize GrpcChannelCache for job worker -> {0 -> localhost:60000, 1 -> localhost:60001}
2019-09-25 17:46:29.628640: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:391] Started server with target: grpc://localhost:60000
WARNING:tensorflow:From /home/huwh1/virtualenvs/tf-1.13-py3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
2019-09-25 17:46:29.696555: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-25 17:46:29.709593: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
2019-09-25 17:46:29.709906: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x2c2e410 executing computations on platform Host. Devices:
2019-09-25 17:46:29.709932: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
2019-09-25 17:46:29.711328: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:252] Initialize GrpcChannelCache for job worker -> {0 -> localhost:60000, 1 -> localhost:60001}
2019-09-25 17:46:29.712186: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:391] Started server with target: grpc://localhost:60001
WARNING:tensorflow:From /home/huwh1/virtualenvs/tf-1.13-py3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
2019-09-25 17:46:29.741447: I tensorflow/core/distributed_runtime/master_session.cc:1192] Start master session 22dc988b8d402f7f with config: experimental { collective_group_leader: "/job:worker/replica:0/task:0" }
task 0 run
task 0 sense [0.30021727, 0.797495]
2019-09-25 17:46:29.798887: I tensorflow/core/distributed_runtime/master_session.cc:1192] Start master session bedba16e0ccc4c27 with config: experimental { collective_group_leader: "/job:worker/replica:0/task:0" }
task 1 run
2019-09-25 17:46:29.820772: I tensorflow/core/distributed_runtime/base_rendezvous_mgr.cc:159] Skipping rendezvous re-initialization.
2019-09-25 17:46:29.820874: I tensorflow/core/distributed_runtime/base_rendezvous_mgr.cc:159] Skipping rendezvous re-initialization.
2019-09-25 17:46:29.821341: W tensorflow/core/common_runtime/base_collective_executor.cc:203] BaseCollectiveExecutor::StartAbort Aborted: Cleanup 70896605979375878 [[node worker1/CollectiveBcastRecv]]
task 1 sense [0.30021727, 0.797495]

There is a related issue (#31913) which can be solved by specifying experimental.collective_group_leader in tf.ConfigProto. An info message about rendezvous re-initialization and a warning from BaseCollectiveExecutor are raised from the send/recv. Nevertheless, the script works fine for the commented-out allreduce.
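The fix from the related issue is to point every worker at the same group leader. A small pure-Python sketch of deriving that leader string (the tf.ConfigProto wiring follows the report above); the default values mirror this report's job name and leader task:

```python
def group_leader(job="worker", leader_task=0):
    """Build the collective group leader device string.

    Every process in the cluster must use the same value, naming
    replica 0 of the leader task (here, task 0 of the worker job).
    """
    return "/job:{}/replica:0/task:{}".format(job, leader_task)
```

Each process would then set config.experimental.collective_group_leader = group_leader() before creating its tf.train.Server, instead of letting every worker elect itself as leader.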
tensorflowtensorflow
AutoGraph "capabilities and limitations" link in the AutoGraph docs is outdated
Bug
URL(s) with the issue:

Description of issue (what needs changing): the AutoGraph "capabilities and limitations" link points to an out-of-date doc. A clearer description: there was a similar bug in 2018. This issue was reported in a closed bug (issuecomment-527699503), but that comment never got a reply, so I'm filing a new bug.

Submit a pull request? Actually, this issue was fixed on Aug 6th, 1.5 months ago (diff 039832f2dbb662a37df6e0fa64ebe35e).
tensorflowtensorflow
Inconsistency between Keras model.predict and model.__call__
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04.1
- Mobile device (if the issue happens on a mobile device): n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc0-101-gd2d2566 (2.0.0-rc1)
- Python version: Python 3.7.4
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: I was trying to extract the features generated by the model's DenseFeatures layers by creating a new model based on the inputs of the original model and the outputs of the DenseFeatures layers. When using this submodel, I noticed an inconsistency between model.__call__ and model.predict. If we provide extra columns to the model, model.predict behaves reliably by processing the inputs with keras.engine.training_utils.standardize_input_data. However, if model.__call__ is used, this doesn't happen: the model orders the columns alphabetically (as per nest.flatten) and thus crashes with a type-cast exception if the extra column is of a different format. Please see the code provided to reproduce this issue. I can work around this by using the predict method.

Describe the expected behavior: the two methods of using a model should be consistent; model.__call__ should process the inputs using the standardize_input_data method to ensure the input data is as expected.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

import urllib.request as request
import tensorflow as tf
import pandas as pd


def download_data(download_path: str):
    url = ...  # URL omitted in the report
    header_line = 'age,sex,cp,trestbps,chol,fbs,restecg,thalach,exang,oldpeak,slope,ca,thal,target\n'
    # download the data and add the column headers
    request.urlretrieve(url, 'heart.data')
    with open(download_path, 'w') as output:
        output.write(header_line)
        with open('heart.data', 'r') as input_data:
            output.writelines(input_data.readlines())


def preprocess_df(df, categorical_columns):
    # ensure categorical columns are treated as strings in the input
    col_types = {key: str for key in categorical_columns}
    df = df.astype(col_types)
    return df


def df_to_dataset(dataframe, target_column='target', shuffle=True, batch_size=5):
    # dataset preparation code from the TensorFlow tutorials
    dataframe = dataframe.copy()
    labels = dataframe.pop(target_column)
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), tf.one_hot(labels, depth=2)))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds


if __name__ == '__main__':
    # download the dataset
    data_path = 'heart.csv'
    download_data(data_path)
    df = pd.read_csv(data_path)

    # set up the feature columns
    numeric_columns = ['age', 'chol']
    categorical_columns = {'thal': df['thal'].unique()}
    feature_columns = {}
    inputs = {}
    for feature_name in numeric_columns:
        feature_columns[feature_name] = tf.feature_column.numeric_column(feature_name)
        inputs[feature_name] = tf.keras.Input(name=feature_name, shape=(), dtype=tf.float32)
    for feature_name, vocab in categorical_columns.items():
        vocab.sort()
        cat_col = tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocab)
        feature_columns[feature_name] = tf.feature_column.indicator_column(cat_col)
        inputs[feature_name] = tf.keras.Input(name=feature_name, shape=(), dtype=tf.string)

    # prepare the input data
    df = preprocess_df(df, categorical_columns)
    batch_size = 5  # a small batch size is used for demonstration purposes
    train_ds = df_to_dataset(df, target_column='target', batch_size=batch_size)

    # create the model
    input_tensors = []
    feature_names = list(feature_columns.keys())
    feature_names.sort()
    for column_name in feature_names:
        feature = feature_columns[column_name]
        x = tf.keras.layers.DenseFeatures(feature, name=f'{column_name}_feature')(inputs)
        input_tensors.append(x)
    x = tf.keras.layers.concatenate(input_tensors)
    x = tf.keras.layers.Dense(units=24, activation='relu', name='dense_0')(x)
    x = tf.keras.layers.Dense(units=24, activation='relu', name='dense_1')(x)
    y_pred = tf.keras.layers.Dense(units=2, activation='softmax', name='output_layer')(x)
    model = tf.keras.Model(inputs=inputs, outputs=y_pred)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.CategoricalCrossentropy(),
                  metrics=['accuracy'], run_eagerly=True)
    model.summary()
    model.fit(train_ds, epochs=1)

    # create a new Keras model to extract the features the actual model is using
    outputs = []
    for column_name in feature_names:
        outputs.append(model.get_layer(f'{column_name}_feature').output)
    feature_extractor = tf.keras.Model(model.inputs, outputs)

    for i, x in enumerate(train_ds):
        # predict works as it calls keras.engine.training_utils.standardize_input_data
        # internally; this modifies the input so that if extra columns are passed
        # they are removed and the column order is changed as per the model inputs spec
        out = feature_extractor.predict(x)
        # model.__call__ doesn't use the above utils method and thus fails, as the
        # ordering of the input columns doesn't match the inputs
        out = feature_extractor(x)
        print(out)

Other info / logs:

Traceback (most recent call last):
  File "check_model_clone_4.py", line 103, in <module>
    out = feature_extractor(x)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 707, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 859, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/dense_features.py", line 133, in call
    self._state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 4357, in get_dense_tensor
    return transformation_cache.get(self, state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 2608, in get
    transformed = column.transform_feature(self, state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 4296, in transform_feature
    transformation_cache, state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 3771, in get_sparse_tensors
    transformation_cache.get(self, state_manager), None)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 2608, in get
    transformed = column.transform_feature(self, state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 3749, in transform_feature
    return self._transform_input_tensor(input_tensor, state_manager)
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py", line 3726, in _transform_input_tensor
    prefix='column_name: {} input_tensor'.format(self.key))
  File "/home/user/anaconda3/envs/rc1/lib/python3.7/site-packages/tensorflow_core/python/feature_column/utils.py", line 58, in assert_string_or_int
    '{} dtype must be string or integer. dtype: {}'.format(prefix, dtype))
ValueError: column_name: thal input_tensor dtype must be string or integer. dtype:
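The predict-side behavior described above can be mimicked before calling a model directly: filter and order the feature dict by the model's declared input names. A pure-Python sketch of that idea (standardize_by_name is a hypothetical helper, not the Keras internal):

```python
def standardize_by_name(input_names, data):
    """Drop extra keys and order values to match the model's input names,
    mirroring what Keras' standardize_input_data does for predict()."""
    missing = [name for name in input_names if name not in data]
    if missing:
        raise ValueError("missing inputs: {}".format(missing))
    # dicts preserve insertion order, so this fixes both the extra-column
    # problem and the alphabetical-ordering problem from nest.flatten
    return {name: data[name] for name in input_names}
```

Calling the submodel as feature_extractor(standardize_by_name(names, x)) would then only ever pass it the expected columns, in the expected order.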
tensorflowtensorflow
MixNet graph freezing issue
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 16.04
- Mobile device (if the issue happens on a mobile device): n/a
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.14.0
- Python version: 3.7.4
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: I have been trying to convert the MixNet model to Core ML using the tfcoreml converter. I froze the graph, then tried converting it, and that's when I get thrown the following error:

1199 ops in the final graph.
Loading the TF graph...
Graph loaded.
Now finding ops in the TF graph that can be dropped for inference.
Collecting all the 'Const' ops from the graph, by running it...

OutOfRangeError                           Traceback (most recent call last)
~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1355     try:
   1356       return fn(*args)
   1357     except errors.OpError as e:

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1340       return self._call_tf_sessionrun(
   1341           options, feed_dict, fetch_list, target_list, run_metadata)
   1342

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1428         self._session, options, feed_dict, fetch_list, target_list,
   1429         run_metadata)
   1430

OutOfRangeError: Node 'mixnet-s/mixnet_model/stem/batch_normalization/FusedBatchNormV3' (type: 'Add', num of outputs: 1) does not have output 5

During handling of the above exception, another exception occurred:

OutOfRangeError                           Traceback (most recent call last)
<ipython-input> in <module>
     48     mlmodel_path=ml_model,
     49     output_feature_names=['logits'],
     50     input_name_shape_dict)

~/anaconda3/envs/tf/lib/python3.7/site-packages/tfcoreml/_tf_coreml_converter.py in convert(tf_model_path, mlmodel_path, output_feature_names, input_name_shape_dict, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale, class_labels, predicted_feature_name, predicted_probabilities_output, add_custom_layers, custom_conversion_functions, use_coreml_3)
    619     predicted_probabilities_output=predicted_probabilities_output,
    620     add_custom_layers=add_custom_layers,
    621     custom_conversion_functions=custom_conversion_functions)

~/anaconda3/envs/tf/lib/python3.7/site-packages/tfcoreml/_tf_coreml_converter.py in _convert_pb_to_mlmodel(tf_model_path, mlmodel_path, output_feature_names, input_name_shape_dict, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale, class_labels, predicted_feature_name, predicted_probabilities_output, add_custom_layers, custom_conversion_functions)
    260   else:
    261     # const tensor names
    262     tensors_evaluated = sess.run(tensors, feed_dict=input_feed_dict)
    263   for i in range(len(tensor_names)):
    264     if tensor_names[i] not in shape_dict:

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    948     try:
    949       result = self._run(None, fetches, feed_dict, options_ptr,
    950                          run_metadata_ptr)
    951     if run_metadata:
    952       proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1171     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1172       results = self._do_run(handle, final_targets, final_fetches,
   1173                              feed_dict_tensor, options, run_metadata)
   1174     else:
   1175       results = []

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1348     if handle is None:
   1349       return self._do_call(_run_fn, feeds, fetches, targets, options,
   1350                            run_metadata)
   1351     else:
   1352       return self._do_call(_prun_fn, handle, feeds, fetches)

~/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1368       pass
   1369       message = error_interpolation.interpolate(message, self._graph)
   1370       raise type(e)(node_def, op, message)
   1371
   1372   def _extend_graph(self):

OutOfRangeError: Node 'mixnet-s/mixnet_model/stem/batch_normalization/FusedBatchNormV3' (type: 'Add', num of outputs: 1) does not have output 5

I presume this is an issue with the way I was freezing the graph. The checkpoint was taken from the official MixNet implementation. I am relatively new to TF and am still learning; please help me resolve this issue, and let me know in case further information is needed. Suggestions for converting to Core ML are also welcome. Thanks in advance.

Code to reproduce the issue (you might have to install TF 1.14.0):

import tensorflow as tf
import tfcoreml as tf_converter
from tensorflow.compat.v1 import graph_util
tf.logging.set_verbosity(tf.logging.ERROR)

# download the checkpoint
model_name = 'mixnet-s'
!wget {model_name}.tar.gz -O {model_name}.tar.gz
!tar zxvf {model_name}.tar.gz
ckpt_dir = model_name


def freeze_graph(model_folder):
    # retrieve the checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_folder)
    input_checkpoint = checkpoint.model_checkpoint_path
    # file fullname of the freezed graph
    absolute_model_folder = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_folder + '/model.pb'
    # before exporting the graph, get the output nodes
    output_node_names = 'logits'
    # clear devices to allow TF to control on which device it will load operations
    clear_devices = True
    # import the meta graph and retrieve a Saver
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta',
                                       clear_devices=clear_devices)
    # retrieve the protobuf graph definition
    graph = tf.get_default_graph()
    input_graph_def = graph.as_graph_def()
    # start a session and restore the graph weights
    with tf.Session() as sess:
        saver.restore(sess, input_checkpoint)
        # use a built-in TF helper to export the variables to constants
        output_graph_def = graph_util.convert_variables_to_constants(
            sess, input_graph_def, output_node_names.split(','))
        # serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))
graph len output graph def node freeze graph ckpt dir ml model mixnet s mixnet core mlmodel frozen model mixnet s model pb tf converter convert tf model path frozen model mlmodel path ml model output feature name logit input name shape dict
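The converter fails while asking the frozen graph for output index 5 of a node that, after freezing, has only one output. As background, TensorFlow tensor names encode the producing op plus an output index, which is what the error message is complaining about; a minimal sketch of that naming convention (the helper name is mine, not from tfcoreml):

```python
def split_tensor_name(name):
    # TensorFlow tensor names have the form "op_name:output_index";
    # a bare op name refers to output 0.
    if ":" in name:
        op_name, index = name.rsplit(":", 1)
        return op_name, int(index)
    return name, 0

# The converter requests output 5 of the fused batch-norm node; after
# freezing, that node was rewritten into an op with a single output,
# hence "does not have output 5".
op_name, index = split_tensor_name(
    "mixnet-s/mixnet_model/stem/batch_normalization/FusedBatchNormV3:5")
```

This suggests the mismatch happens between the graph the converter analyzed and the graph it actually runs, rather than in the checkpoint itself.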
tensorflow/tensorflow
Keras layer compute_output_shape calls build with wrong input shape
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0rc2
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
When using layer.compute_output_shape or layer.compute_output_signature on a layer with a build function, the input_shape argument passed to build is always set to None.

Describe the expected behavior
The input shape should be set to the shape passed to compute_output_shape.

Code to reproduce the issue

    import tensorflow as tf

    shape = (1, 2)

    class MyLayer(tf.keras.layers.Layer):
        def build(self, input_shape):
            print(input_shape)
            assert input_shape == shape

        def call(self, inputs):
            return inputs

    layer = MyLayer()
    layer.compute_output_shape(shape)
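The contract the repro above expects can be stated in plain Python. This is a sketch of the expected behavior (build receives the shape handed to compute_output_shape, never None), not of the actual Keras code path:

```python
class LayerSketch:
    # Minimal stand-in for a layer with a build step; mirrors the
    # documented build/compute_output_shape contract, not Keras internals.
    def __init__(self):
        self.built = False
        self.seen_shape = None

    def build(self, input_shape):
        # Expected: the real input shape is passed through, never None.
        self.seen_shape = input_shape
        self.built = True

    def compute_output_shape(self, input_shape):
        if not self.built:
            self.build(input_shape)  # forward the caller's shape
        return input_shape

layer = LayerSketch()
out = layer.compute_output_shape((1, 2))
```

Under this contract the assert in the reported repro would pass; the bug is that the real implementation hands build a None shape instead.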
tensorflow/tensorflow
Converting TF Fashion-MNIST model with supported operations to TFLite breaks due to operand shapes mismatch
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.12.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 2.0.0-rc1
- Python version: 3.7.1
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
When converting the model to TFLite, a supported operation is attempted to be fused and a dimensions error occurs.

Describe the expected behavior
When converting the model to TFLite, the supported operations fuse correctly.

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem:

    import tensorflow.compat.v1 as tf
    import numpy as np

    tf.disable_v2_behavior()

    fashion_mnist = tf.keras.datasets.fashion_mnist
    (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
    class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                   'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
    train_images = train_images / 255.0
    test_images = test_images / 255.0

    input = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28), name='input')
    reshaped = tf.reshape(input, (tf.shape(input)[0], 784))
    w1 = tf.Variable(tf.random_normal([128, 784], dtype=tf.float32), name='w1')
    b1 = tf.Variable(tf.random_normal([128], dtype=tf.float32), name='b1')
    layer_one_unbiased = tf.matmul(w1, tf.transpose(reshaped))
    print(layer_one_unbiased, b1)
    layer_one_biased = tf.add(tf.transpose(layer_one_unbiased), b1)
    print(layer_one_biased)
    activated_layer_one = tf.nn.relu(layer_one_biased)
    w2 = tf.Variable(tf.random_normal([10, 128]), name='w2')
    b2 = tf.Variable(tf.random_normal([10]), name='b2')
    print(activated_layer_one)
    layer_two_unbiased = tf.matmul(w2, tf.transpose(activated_layer_one))
    print(tf.transpose(layer_two_unbiased), b2)
    print(layer_two_unbiased, b2)
    layer_two_biased = tf.add(tf.transpose(layer_two_unbiased), b2)
    print(layer_two_biased)
    prediction = tf.nn.softmax(layer_two_biased, name='final')

    label = tf.placeholder(dtype=tf.int32, shape=(None,))
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label,
                                                          logits=layer_two_biased)
    optimizer = tf.train.AdamOptimizer()
    train = optimizer.minimize(loss)
    label_acc = tf.one_hot(label, 10)
    accuracy = 1 - tf.norm(tf.subtract(prediction, label_acc), axis=1) / 2
    accuracy_average = tf.math.reduce_mean(accuracy)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    feed_dict = {input: train_images, label: train_labels}
    sess.run(train, feed_dict=feed_dict)
    feed_dict2 = {input: test_images, label: test_labels}
    print('test accuracy', sess.run(accuracy_average, feed_dict=feed_dict2))
    print('training accuracy', sess.run(accuracy_average, feed_dict=feed_dict))

    saver = tf.train.Saver()
    save_path = saver.save(sess, './model.ckpt')
    tf.io.write_graph(sess.graph, '.', 'train.pbtxt')

    converter = tf.lite.TFLiteConverter.from_session(sess, [input], [prediction])
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                           tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()
    open('converted_model.tflite', 'wb').write(tflite_model)
    sess.close()

The above code is the code used to generate the model. freeze_graph.py was then run on the output of the above code. The below code was used to convert:

    import tensorflow.compat.v1 as tf
    import numpy as np

    tf.disable_v2_behavior()

    graph_def_file = 'freeze_graph.pbtxt'
    input_arrays = ['input']
    output_arrays = ['final']

    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file, input_arrays, output_arrays)
    tflite_model = converter.convert()
    open('converted_model.tflite', 'wb').write(tflite_model)

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

    WARNING: Logging before flag parsing goes to stderr.
    W0924 12:49:42.307250 140736348050368 deprecation.py:323] From /Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/compat/v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
    Instructions for updating:
    non-resource variables are not supported in the long term
    2019-09-24 12:49:42.308630: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-09-24 12:49:42.324658: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fb5c10b8740 executing computations on platform Host. Devices:
    2019-09-24 12:49:42.324684: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
    2019-09-24 12:49:42.336278: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
    2019-09-24 12:49:42.336378: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
    2019-09-24 12:49:42.344845: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
    2019-09-24 12:49:42.344864: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 27 nodes (-4), 27 edges (-4), time = 3.984ms.
    2019-09-24 12:49:42.344871: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant folding: Graph size after: 27 nodes (0), 27 edges (0), time = 0.999ms.
    Traceback (most recent call last):
      File "conversion.py", line 11, in <module>
        tflite_model = converter.convert()
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
        **converter_kwargs)
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
        enable_mlir_converter=enable_mlir_converter)
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
        raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    tensorflow.lite.python.convert.ConverterError: See console for info.
    2019-09-24 12:49:44.232035: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 14 operators, 27 arrays (0 quantized)
    2019-09-24 12:49:44.232297: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 14 operators, 27 arrays (0 quantized)
    2019-09-24 12:49:44.232628: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 13 operators, 26 arrays (0 quantized)
    2019-09-24 12:49:44.232720: F tensorflow/lite/toco/graph_transformations/fuse_binary_into_preceding_affine.cc:62] Operand shapes mismatch.
    Fatal Python error: Aborted

    Current thread 0x00007fffbc0853c0 (most recent call first):
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52 in execute
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/absl/app.py", line 251 in _run_main
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/absl/app.py", line 300 in run
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
      File "/Users/t_cape/miniconda3/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89 in main
      File "/Users/t_cape/miniconda3/bin/toco_from_protos", line 10 in <module>
tensorflow/tensorflow
TensorFlow is broken/unusable on Raspberry Pi
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, code is pasted below
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Raspbian Buster
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Raspberry Pi 4 with 4 GB RAM
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1
- Python version: 3.7.3
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

On Windows I used Python 3.7.4 with TensorFlow 1.13.1, 1.14.0 and 2.0.0rc1.

Describe the current behavior
I trained a model on Windows (gen_test_train_data.py, then train_model.py) using the same TF version that's available on the RPi. I can test the model on Windows just fine using model_visual_test.py. If I transfer the model to the RPi, I cannot load it; I get mysterious errors when running model_visual_test.py:

    Traceback (most recent call last):
      File "model_visual_test.py", line 16, in <module>
        model = keras.models.load_model('saved_model.h5')
      File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 140, in load_model
        loader_impl.parse_saved_model(filepath)
      File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 83, in parse_saved_model
        constants.SAVED_MODEL_FILENAME_PB))
    OSError: SavedModel file does not exist at: saved_model.h5/{saved_model.pbtxt|saved_model.pb}

It doesn't work even if I train the model on the RPi by running gen_test_train_data.py and then train_model.py (which takes forever). It completes the training, but then fails with:

    Train on 8000 samples
    8000/8000 [==============================] - 1305s 163ms/sample - loss: 0.5675 - acc: 0.8609
    Traceback (most recent call last):
      File "train_model.py", line 122, in <module>
        keras.models.save_model(model, 'saved_model.h5')
      File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 104, in save_model
        model, filepath, overwrite, include_optimizer)
      File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 73, in save_model_to_hdf5
        raise ImportError('`save_model` requires h5py.')
    ImportError: `save_model` requires h5py.

No, that's wrong: I do have the h5 module installed on the Raspberry Pi:

    $ pip3 list --user | grep h5
    h5py (2.10.0)

In other words: a model trained on the RPi fails to load on the RPi; a model trained on Win10 runs fine on Win10 but fails to load on the RPi. This is with code that works perfectly on Windows 10 and macOS, with all kinds of TF versions between 1.13.1 and 2.0.0rc1, GPU or CPU versions.

Describe the expected behavior
I mean, it should just work, shouldn't it? Why is this so hard?

Code to reproduce the issue
Run gen_test_train_data.py, then on that run train_model.py: it fails to run its own model generated on the RPi. Or: run gen_test_train_data.py on Windows, run train_model.py on Windows, transfer the model to the RPi, then run model_visual_test.py on it: it will fail to load. I will leave this branch untouched so you can test the bug report.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
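Since the "requires h5py" error can fire even when pip lists the package (for example when the listing belongs to a different interpreter, or the native wheel fails to import), checking that the module actually imports at runtime is the more direct diagnostic. A small stdlib sketch (helper name is mine):

```python
import importlib

def module_imports_cleanly(name):
    # `pip list` only proves a package is on disk; the import itself can
    # still fail (wrong interpreter, broken native extension), which is
    # exactly the condition that trips Keras's "requires h5py" path.
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False
```

Running `python3 -c "import h5py; print(h5py.__version__)"` with the same interpreter that runs train_model.py would distinguish a broken install from a genuine Keras bug.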
tensorflow/tensorflow
Assets file not exported in the SavedModel when using tf.lookup.StaticVocabularyTable
Bug
System information
- Have I written custom code: Yes
- OS Platform and Distribution: Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: 2.0.0rc2
- Python version: 3.6.6

Describe the current behavior
When using tf.lookup.StaticVocabularyTable, the assets file is not exported in the SavedModel assets directory. However, it is correctly saved when using tf.lookup.StaticHashTable.

Describe the expected behavior
The vocabulary file should be saved in the assets directory of the SavedModel.

Code to reproduce the issue

    import os
    import shutil
    import tensorflow as tf

    class Model(tf.keras.layers.Layer):

        def __init__(self, vocabulary_path):
            super(Model, self).__init__()
            initializer = tf.lookup.TextFileInitializer(
                vocabulary_path,
                tf.string,
                tf.lookup.TextFileIndex.WHOLE_LINE,
                tf.int64,
                tf.lookup.TextFileIndex.LINE_NUMBER)
            self.table = tf.lookup.StaticVocabularyTable(initializer, num_oov_buckets=1)
            # self.table = tf.lookup.StaticHashTable(initializer, 0)

        def call(self, tokens):
            return self.table.lookup(tokens)

        @tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
        def serve(self, tokens):
            return self(tokens)

    vocabulary_path = "/tmp/vocab.txt"
    with open(vocabulary_path, "w") as vocabulary_file:
        vocabulary_file.write("a\nb\nc\n")

    model = Model(vocabulary_path)
    export_dir = "/tmp/model"
    if os.path.exists(export_dir):
        shutil.rmtree(export_dir)
    tf.saved_model.save(model, export_dir, signatures=model.serve)

    assets = os.listdir(os.path.join(export_dir, "assets"))
    assert len(assets) == 1

Other info / logs
The code above raises an AssertionError, as the assets directory is empty.
tensorflow/tensorflow
ValueError when passing tensor to Keras subclassed model call
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10, Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Not tested
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-rc0 / rc1 / rc2
- Python version: 3.7.4
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0 / 7.6.3
- GPU model and memory: Quadro K620

Describe the current behavior
Using conditional statements on tensors passed to Keras subclassed models without the @tf.function decorator leads to a ValueError exception:

    ValueError: Incompatible shapes between op input and calculated input gradient. Forward operation: Identity. Input index: 0. Original input shape: (64, 1024). Calculated input gradient shape: (32, 1024)

Neither tf.cond nor Python's if statement works without the decorator. The model works fine if the call function is decorated with @tf.function. Passing a Python boolean value instead of tf.constant(True) works fine both with and without the @tf.function decorator. Edit: eager mode works fine without any tf.function.

Describe the expected behavior
The model should work without mandating the @tf.function decorator; see the code for details.

Code to reproduce the issue
Code that reproduces the problem: Colab link

Other info / logs

    ValueError: in converted code:

        <ipython-input>:12 train_step
            gradients = mdan_tape.gradient(model_loss, model.trainable_variables)
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:1014 gradient
            unconnected_gradients=unconnected_gradients)
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/imperative_grad.py:76 imperative_grad
            compat.as_str(unconnected_gradients.value))
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:138 _gradient_function
            return grad_fn(mock_op, *out_grads)
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/cond_v2.py:120 _IfGrad
            true_graph, grads, util.unique_grad_fn_name(true_graph.name))
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/cond_v2.py:395 _create_grad_func
            func_graph=_CondGradFuncGraph(name, func_graph))
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py:915 func_graph_from_py_func
            func_outputs = python_func(*func_args, **func_kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/cond_v2.py:394 <lambda>
            lambda: _grad_fn(func_graph, grads), [], {},
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/cond_v2.py:373 _grad_fn
            src_graph=func_graph)
        /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gradients_util.py:714 _GradientsHelper
            (op.name, i, t_in.shape, in_grad.shape))

        ValueError: Incompatible shapes between op input and calculated input gradient. Forward operation: Identity. Input index: 0. Original input shape: (64, 1024). Calculated input gradient shape: (32, 1024)
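The raise at gradients_util.py:714 is a shape-consistency check between each forward input and its calculated gradient. A pure-Python sketch of that check (my re-implementation, not TF code) makes the error message easy to read: the forward Identity saw a full batch of (64, 1024), while the cond branch's gradient only covers (32, 1024).

```python
def check_input_gradient_shape(op_name, index, input_shape, grad_shape):
    # Each calculated input gradient must match the shape of the
    # corresponding forward input; a mismatch raises the ValueError
    # quoted in the traceback above.
    if input_shape != grad_shape:
        raise ValueError(
            "Incompatible shapes between op input and calculated input "
            "gradient. Forward operation: %s. Input index: %d. Original "
            "input shape: %s. Calculated input gradient shape: %s"
            % (op_name, index, input_shape, grad_shape))
```

The halved first dimension suggests the cond branch only backpropagates through half of the concatenated batch, which is why removing the tensor-valued condition (or tracing with @tf.function) avoids the error.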
tensorflow/tensorflow
bincount_op_test test_negative fails with TF 2.0
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-rc1
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0
- GPU model and memory: RTX 2080 Ti, 11 GB

Describe the current behavior
The test_negative test in tensorflow/python/kernel_tests/bincount_op_test.py fails, as the bincount call with negative values does not throw an InvalidArgumentError. This behavior might be the result of the op being run on the GPU, as only the CPU call is expected to throw the error, as per the comment here (L107). Setting CUDA_VISIBLE_DEVICES to be empty forces the op to run on CPU and the test passes (the InvalidArgumentError is successfully thrown), but passing use_gpu=False as an option to the session wrapper does not have this effect.

Describe the expected behavior
The test_negative test should pass, as the call to bincount with a negative input value is expected to throw an InvalidArgumentError.

Code to reproduce the issue
Run the Python test tensorflow/python/kernel_tests/bincount_op_test.py.
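The behavior the failing test expects can be sketched in pure Python. This mirrors the CPU kernel's validation (my re-implementation, not TF code): negative inputs are rejected up front, while the GPU kernel skips the check, which is why the test only passes with the op pinned to the CPU.

```python
def bincount_cpu(values):
    # CPU-style bincount: reject negative inputs first (the check the
    # test_negative test relies on), then count occurrences.
    if any(v < 0 for v in values):
        raise ValueError("Input arr must be non-negative!")
    out = [0] * ((max(values) + 1) if values else 0)
    for v in values:
        out[v] += 1
    return out
```

The discrepancy between CUDA_VISIBLE_DEVICES="" (test passes) and use_gpu=False (test still fails) suggests the session wrapper option is not actually keeping the op off the GPU.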